
US20200154047A1 - Systems and methods for image-based light output - Google Patents


Info

Publication number
US20200154047A1
Authority
US
United States
Prior art keywords
human
light
server
coordinates
over
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/741,525
Inventor
Konstantin CHERNIKOV
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intelligent Security Systems Corp
Original Assignee
Intelligent Security Systems Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intelligent Security Systems Corp filed Critical Intelligent Security Systems Corp
Priority to US16/741,525
Publication of US20200154047A1
Legal status: Abandoned


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • H04N5/23238
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0007 Image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N5/232
    • H04N5/23206
    • H04N5/2354
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782 Systems for determining direction or deviation from predetermined direction
    • G01S3/785 Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
    • G01S3/786 Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system the desired condition being maintained automatically
    • G01S3/7864 T.V. type tracking systems

Definitions

  • this disclosure relates to image processing. More particularly, this disclosure relates to image-based light output.
  • the industrial setting hosts a set of signs or surface markers, inclusive of a paint or a tape, such as a floor sign or marker, a wall sign or marker, a ceiling sign or marker, or others, that notify the user where the first zone ends and the second zone begins or vice versa.
  • the set of signs or surface markers often break, fade, or get erased, whether chemically, such as due to chemicals present in air, or mechanically, such as via a clothing item, a shoe sole, a tire, a continuous track, or others.
  • the set of signs or markers is often difficult to recognize or see, such as under a poor lighting condition, a bad weather environment, a dusty environment, or others. Resultantly, the user has a difficult time identifying where the first zone ends and the second zone begins or vice versa.
  • since the set of signs or markers is constantly present, a human user can get used to the set of signs or markers with time and thereby ignore the set of signs or markers.
  • a vehicular driver, whether a human or a machine, may need to be notified of a presence of an object in a vehicular path, especially in a poorly illuminated environment, such as a tunnel or an overpass, in a bad weather environment, or at nighttime.
  • the object can include an animate object, such as a human or an animal, or an inanimate object, such as a trash item, a pile of snow, a fallen tree, a crate, or others.
  • where the vehicular path is not regulated or is unmarked, the vehicular driver may be even further in need of being notified of the presence of the object.
  • a user may need to detect a suspicious object and notify at least some of the people or a security personnel of the suspicious object.
  • the suspicious object can include an unattended bag/suitcase/container, a suspicious human movement, a suspicious vehicular movement, or others.
  • this disclosure discloses various computing technologies that address various problems identified above.
  • These computing technologies include a processor in communication with an imaging device, such as a video camera, and with an illumination device, such as a laser projector.
  • the processor receives a set of imaging data from the imaging device such that the processor is able to identify an object within the set of imaging data and determine a set of coordinates.
  • the processor then sends the set of coordinates to the illumination device such that the illumination device can output an illumination based on the set of coordinates.
  • a method comprising: receiving, by a server, a video stream from a camera, wherein the video stream depicts an object that is moving along a current path; detecting, by the server, the object in the video stream; determining, by the server, a projected path of the object based on the current path; generating, by the server, a set of coordinates based on the projected path; sending, by the server, the set of coordinates to a laser projector such that the laser projector can project a laser beam based on the set of coordinates as the object is moving in the projected path.
  • a system comprising: a light source; and a server programmed to: receive a video stream from a camera, wherein the video stream depicts an object that is moving along a current path, detect the object in the video stream, determine a projected path of the object based on the current path, generate a set of coordinates based on the projected path, and send the set of coordinates to the light source such that the light source can source a light based on the set of coordinates as the object is moving in the projected path.
  • a method comprising: receiving, by a server, a video stream from a camera, wherein the video stream depicts an object that is moving along a current path; detecting, by the server, the object in the video stream; determining, by the server, a projected path of the object based on the current path; generating, by the server, a set of coordinates based on the projected path; sending, by the server, the set of coordinates to a light source such that the light source can source a light based on the set of coordinates as the object is moving in the projected path.
  • a method comprising: receiving, by a processor, a set of imaging data from an imaging device, wherein the set of imaging data depicts an object; identifying, by the processor, the object in the set of imaging data; generating, by the processor, a set of coordinates based on the object; and sending, by the processor, the set of coordinates to an illumination device such that the illumination device can output an illumination based on the set of coordinates.
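The claimed sequence (receive imaging data, identify the object, generate coordinates, send them to the illumination device) can be sketched as a minimal processing loop. The helper names (`detect`, `project`) and the choice of the bounding-box center as the coordinate are illustrative assumptions, not part of the claims:

```python
def image_based_light_output(frames, detect, project):
    """For each frame: detect an object, derive a coordinate, and hand
    it to the illumination device via the `project` callback."""
    sent = []
    for frame in frames:
        obj = detect(frame)          # e.g., motion detection on the frame
        if obj is None:
            continue                 # no object in this frame
        x1, y1, x2, y2 = obj         # hypothetical bounding box
        coords = ((x1 + x2) / 2, (y1 + y2) / 2)
        project(coords)              # stands in for the laser projector
        sent.append(coords)
    return sent
```

In a deployment, `detect` would wrap the server's video analytics and `project` would transmit coordinates over the network to the illumination device.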
  • FIG. 1 shows a schematic diagram of an embodiment of a system for an image-based light output according to this disclosure.
  • FIG. 2 shows a schematic diagram of an embodiment of a system for an image-based light output according to this disclosure.
  • FIG. 3 shows a flowchart of an embodiment of a process for an image-based light output according to this disclosure.
  • this disclosure discloses various computing technologies that include a processor in communication with an imaging device, such as a video camera, and with an illumination device, such as a laser projector.
  • the processor receives a set of imaging data from the imaging device such that the processor is able to identify an object within the set of imaging data and determine a set of coordinates.
  • the processor then sends the set of coordinates to the illumination device such that the illumination device can output an illumination based on the set of coordinates.
  • although the imaging device can capture optically, these computing technologies can be applied to non-optical capture as well, such as via an ultrasound machine, an X-ray machine, a magnetic resonance imaging (MRI) machine, a radar, a LIDAR, a laser, a heat sensor, or any other form of image formation through any input means or modality.
  • although the illumination device can be laser-based, these computing technologies can be applied to non-laser illumination as well, such as via a flashlight, a spotlight, a searchlight, a flash illumination device, a lamp projector, a light emitting diode (LED), an LED bulb, an LED lamp, an incandescent bulb or lamp, a halogen bulb or lamp, a gas discharge bulb or lamp, or any other form of light output, whether in an electromagnetic spectrum visible or invisible to a human eye, through any output means or modality.
  • this disclosure is described more fully below with reference to FIGS. 1-4, in which some embodiments of this disclosure are shown. This disclosure may, however, be embodied in many different forms and should not be construed as necessarily being limited to the embodiments disclosed herein. Rather, these embodiments are provided so that this disclosure is thorough and complete, and fully conveys various concepts of this disclosure to skilled artisans.
  • a term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances.
  • FIG. 1 shows a schematic diagram of an embodiment of a system for an image-based light output according to this disclosure.
  • a system 100 includes a video camera 102 , a server 104 , and a laser projector 106 , where the server 104 is in communication with the video camera 102 to receive a video stream 108 and the laser projector 106 to send a set of coordinates 110 .
  • the video camera 102 includes an optical instrument for capturing and recording images, which may be stored locally, transmitted to another location, or both.
  • the images may be individual still photographs or sequences of images constituting videos.
  • the images can be analog or digital, whether color or grayscale.
  • the video camera 102 can comprise any type of lens, such as convex, concave, fisheye, or others.
  • the video camera 102 can comprise a lens of any focal length, such as wide angle or standard.
  • the video camera 102 can be stationary or movable such that the video camera 102 is directed to a zone of observation.
  • the video camera 102 is powered via a mains electricity source, such as via a power cable or a data cable.
  • the video camera 102 is powered via at least one of an onboard rechargeable battery, such as a lithium-ion battery, which may be charged via an onboard renewable energy source, such as a photovoltaic cell, a wind turbine, or a hydropower turbine.
  • the video camera 102 can be configured for geo-tagging, such as via modifying an image file with a geolocation or a set of coordinates data.
  • the video camera 102 can include or be coupled to a microphone.
  • the video camera 102 can be a pan-tilt-zoom (PTZ) camera, which can be a virtual PTZ camera.
  • the video camera 102 can be configured for name-stamping, date-stamping, or time-stamping the image, such as what date or time the image was captured or naming the image with a naming convention, as pre-programmed in advance.
  • the video camera 102 can comprise a flash illumination output device or another artificial illumination output device.
  • the video camera 102 can comprise an infrared illumination output device. Note that although the video camera 102 can capture optically, this disclosure supports non-optical capture as well, such as via an ultrasound machine, an X-ray machine, an MRI machine, a radar, a LIDAR, a laser, a heat sensor, or any other form of image formation through any input means or modality.
  • the server 104 runs an operating system, such as MacOS®, Windows®, or others, and an application, such as a video analytics application, on the operating system.
  • at least one of the video camera 102 , the server 104 , or the laser projector 106 can be components of a video management system (VMS).
  • the server 104 can include or be coupled to an input device, such as a mouse, a keyboard, a camera (whether forward-facing or back-facing), an accelerometer, a touchscreen, a biometric reader, a clicker, a microphone, a stylus, or any other suitable input device.
  • the server 104 can include or be coupled to, an output device, such as a display, a speaker, a headphone, a printer, or any other suitable output device.
  • the input device and the output device can be embodied in one unit, such as a touch-enabled display, which can be haptic.
  • the server 104 hosts a software module configured to perform a set of situational video analytics on the video stream 108 .
  • the server 104 can be in communication with a client that runs an operating system, such as MacOS®, Windows®, or others, and an application, such as an administrator application, on the operating system.
  • the client can be embodied as a workstation, a terminal, a laptop, a tablet, a smartphone, a vehicle, whether manned or unmanned, whether land, aerial, or marine, or others.
  • the client can include or be coupled to an input device, such as a mouse, a keyboard, a camera (whether forward-facing or back-facing), an accelerometer, a touchscreen, a biometric reader, a clicker, a microphone, a stylus, or any other suitable input device.
  • the client can include or be coupled to an output device, such as a display, a speaker, a headphone, a printer, or any other suitable output device.
  • the input device and the output device can be embodied in one unit, such as a touch-enabled display, which can be haptic.
  • the application can present an administrator graphical user interface (GUI) configured to depict a page, where the page can include a visual element, such as a menu, enabling a control of an aspect of the system 100 , such as any hardware, software, or any other logic or functionality.
  • the client can be configured to control at least one of the video camera 102 , the server 104 , or the laser projector 106 .
  • the client can receive a set of imagery, whether photos or videos, originating from the camera 102 , whether via the server 104 or directly from the camera 102 .
  • the laser projector 106 is configured to project a laser beam, such as a red beam, a blue beam, or others, such as to provide an optical guidance, whether in an electromagnetic spectrum visible or invisible to a human eye.
  • the laser projector 106 can project the laser beam without a diffraction grating.
  • the laser projector 106 includes a housing that contains a laser source, a mirror, and other optical components.
  • the laser projector 106 can contain one laser light source for a single-color projection or three sources for a red, green, and blue (RGB) full color projection.
  • the laser projector 106 can include a laser diode configuration (direct injection), a solid state configuration (diode-pumped, frequency-doubled), a gas laser configuration, or others.
  • the laser projector 106 can provide a fast and stable projection with a high repetition rate of about 50 hertz (Hz) and a wide optical angle of about 80 degrees by about 80 degrees.
  • the laser beam can depict a dot, a line, whether solid or broken, a shape, whether open or closed, an image, a video, or others.
  • the laser projector 106 can create an image projection based on the set of coordinates 110 received from the server 104 .
  • the laser projector 106 can be stationary or movable.
  • the laser projector 106 is powered via a mains electricity source, such as via a power cable or a data cable.
  • the laser projector 106 is powered via at least one of an onboard rechargeable battery, such as a lithium-ion battery, which may be charged via an onboard renewable energy source, such as a photovoltaic cell, a wind turbine, or a hydropower turbine.
  • the laser projector 106 can include or be coupled to a microphone or the video camera 102 , which can be embodied as a single unit.
  • although the laser projector 106 can project the laser beam, this disclosure supports non-laser illumination as well, such as via a flashlight, a spotlight, a searchlight, a flash illumination device, a lamp projector, an LED, an LED bulb, an LED lamp, an incandescent bulb or lamp, a halogen bulb or lamp, a gas discharge bulb or lamp, or any other form of light output, whether in an electromagnetic spectrum visible or invisible to a human eye, through any output means or modality.
  • the server 104 can process the video stream 108 to detect an event or an object, as depicted via the video stream 108 , classify the event or the object, and act accordingly.
  • the server 104 can detect the event or the object, as disclosed in U.S. Pat. No. 9,609,197, which is herein incorporated by reference for at least these purposes and all other purposes relevant to this disclosure, as disclosed herein.
  • the server 104 can act through the laser projector 106 by sending the set of coordinates 110 to the laser projector 106 such that the laser projector 106 can visibly outline or visibly accompany the object that moves into an optical view of the video camera 102 or vice versa or a specified zone.
  • the server 104 receives the video stream 108 from the video camera 102 , where the video stream depicts an object, such as a human, that is moving along a current path, such as walking along a rectilinear plane.
  • the server 104 can perform a dewarping process as disclosed in U.S. Pat. No. 9,547,883 or 9,609,197, both of which are incorporated by reference herein for all purposes.
  • the server 104 processes the video stream 108 such that the server 104 detects the object in the video stream 108 , determines a projected path of the object based on the current path, and generates the set of coordinates 110 , such as for the object itself or for a specified zone, based on the projected path, which can be rectilinear, arcuate, sinusoidal, circular, oval, pentagonal, triangular, or others.
  • the server 104 then sends the set of coordinates 110 to the laser projector 106 such that the laser projector 106 can project a laser beam based on the set of coordinates 110 as the object is moving in the projected path, whether tracing the object, illuminating the object, tracing another object in proximity of the object, illuminating another object in proximity of the object, delineating an area proximal or distal to the object, or other forms of light output.
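The disclosure only states that the projected path is determined "based on the current path"; one simple stand-in is linear extrapolation from the last two observed positions, sketched below under that assumption:

```python
from typing import List, Tuple

Coord = Tuple[float, float]

def projected_path(current_path: List[Coord], steps: int = 3) -> List[Coord]:
    """Extrapolate future positions from the last two observed positions.
    Linear extrapolation is an assumption for illustration; curved paths
    would need a higher-order model."""
    if len(current_path) < 2:
        return list(current_path)    # not enough history to extrapolate
    (x0, y0), (x1, y1) = current_path[-2], current_path[-1]
    dx, dy = x1 - x0, y1 - y0        # per-step displacement
    return [(x1 + i * dx, y1 + i * dy) for i in range(1, steps + 1)]
```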
  • the laser projector 106 can move to project the laser beam based on the set of coordinates 110 as the object is moving in the projected path or the laser projector 106 can remain stationary to project the laser beam based on the set of coordinates 110 as the object is moving in the projected path.
  • the laser beam can be in an electromagnetic spectrum visible to a human eye based on the set of coordinates 110 as the object is moving in the projected path.
  • the laser projector 106 can project the laser beam in an image form based on the set of coordinates 110 as the object is moving in the projected path.
  • the laser beam can be in a single color, as selected through an identifier by the server 104 based on processing the video stream 108 , or a set of multiple colors, as selected through a set of identifiers by the server 104 based on processing the video stream 108 .
  • the server 104 can process the video stream 108 such that the server 104 is able to estimate a size of the object, generate a value corresponding to the size of the object, and send the value to the laser projector 106 such that the laser projector 106 can project the laser beam based on the set of coordinates 110 and the value as the object is moving in the projected path.
  • the server 104 can process the video stream 108 such that the server 104 is able to determine a first characteristic of a surface onto which the laser beam will be projected, such as a color of the surface or a texture of the surface, from the video stream 108 .
  • the server 104 then can determine a second characteristic of the laser projector 106 , such as a color ability of the laser projector 106 or a visual pattern ability of the laser projector 106 , based on the first characteristic, where the second characteristic is associated with the laser beam. Subsequently, the server 104 can send the second characteristic to the laser projector 106 such that the laser projector 106 can project the laser beam onto the surface based on the second characteristic, which would allow for suitable contrast or visibility of the laser beam, as the object is moving in the projected path.
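The disclosure does not specify how the second characteristic is derived from the first; a naive complementary-color heuristic, assumed purely for illustration, conveys the idea of choosing a beam color that contrasts with the surface:

```python
def beam_color_for_surface(surface_rgb):
    """Pick a high-contrast beam color for a given surface color by
    taking the RGB complement. This heuristic is an assumption; the
    disclosure only requires that the beam remain visible against the
    surface."""
    r, g, b = surface_rgb
    return (255 - r, 255 - g, 255 - b)
```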
  • FIG. 2 shows a schematic diagram of an embodiment of a system for an image-based light output according to this disclosure.
  • a system 200 includes a camera 202 , a server 204 , a laser projector 206 , and a network 212 .
  • the camera 202 can be configured as the camera 102 .
  • the server 204 can be configured as the server 104 .
  • the laser projector 206 can be configured as the laser projector 106 .
  • the network 212 includes a plurality of nodes that allow for sharing of resources or information.
  • the network 212 can be wired or wireless.
  • the network 212 can be a local area network (LAN), a wide area network (WAN), a cellular network, or others.
  • the server 204 is in communication with the camera 202 over the network 212 such that the server 204 can receive a video stream 208 from the camera 202 .
  • the server 204 is in communication with the laser projector 206 over the network 212 such that the server 204 can send a set of coordinates 210 to the laser projector 206 .
  • the video stream 208 depicts an object 214 , such as a human walking in a path, which can be rectilinear.
  • the server 204 receives the video stream 208 from the video camera 202 over the network 212 , where the video stream 208 depicts the object 214 moving along a current path.
  • the server 204 can perform a dewarping process as disclosed in U.S. Pat. No. 9,547,883 or 9,609,197, both of which are incorporated by reference herein for all purposes.
  • the server 204 processes the video stream 208 such that the server 204 detects the object 214 in the video stream 208 , determines a projected path of the object 214 based on the current path, and generates the set of coordinates 210 , such as for the object 214 , based on the projected path.
  • the server 204 then sends the set of coordinates 210 to the laser projector 206 such that the laser projector 206 can project a laser beam 218 based on the set of coordinates 210 as the object 214 is moving in the projected path and thereby form a trace 216 of the object 214 .
  • other forms of illumination are possible, whether illuminating the object 214 , tracing another object in proximity of the object 214 , illuminating another object in proximity of the object 214 , delineating an area proximal or distal to the object 214 , or other forms of light output.
  • the laser projector 206 can move to project the laser beam 218 based on the set of coordinates 210 as the object 214 is moving in the projected path or the laser projector 206 can remain stationary to project the laser beam based on the set of coordinates 210 as the object 214 is moving in the projected path.
  • the laser beam 218 can be in an electromagnetic spectrum visible to a human eye based on the set of coordinates 210 as the object 214 is moving in the projected path.
  • the laser projector 206 can project the laser beam in an image form based on the set of coordinates 210 as the object 214 is moving in the projected path.
  • the laser beam 218 can be in a single color, as selected through an identifier by the server 204 based on processing the video stream 208 , or a set of multiple colors, as selected through a set of identifiers by the server 204 based on processing the video stream 208 .
  • the server 204 can process the video stream 208 such that the server 204 is able to estimate a size of the object 214 , generate a value corresponding to the size of the object 214 , and send the value to the laser projector 206 such that the laser projector 206 can project the laser beam 218 based on the set of coordinates 210 and the value as the object 214 is moving in the projected path.
  • the server 204 can process the video stream 208 such that the server 204 is able to determine a first characteristic of a surface onto which the laser beam 218 will be projected, such as a color of the surface or a texture of the surface, from the video stream 208 .
  • the server 204 then can determine a second characteristic of the laser projector 206 , such as a color ability of the laser projector 206 or a visual pattern ability of the laser projector 206 , based on the first characteristic, where the second characteristic is associated with the laser beam 218 .
  • the server 204 can send the second characteristic to the laser projector 206 such that the laser projector 206 can project the laser beam 218 onto the surface, such as to form the trace 216 , based on the second characteristic, which would allow for suitable contrast or visibility of the laser beam 218 , as the object 214 is moving in the projected path.
  • FIG. 3 shows a flowchart of an embodiment of a process for an image-based light output according to this disclosure.
  • a process 300 can be performed via the system 100 or the system 200 .
  • the process 300 includes a set of blocks 302 - 314 .
  • a camera, such as the camera 102 or 202 , captures a video of an object, such as the video stream 108 or 208 depicting the object 214 .
  • the camera sends the video to a server, such as the server 104 or 204 , which can be over the network 212 .
  • the server identifies the object in the video, which can be via various computer vision techniques.
  • the server can identify the object in the video through uses of different algorithms of processing of the video, which can include a separation of mobile objects based on integral and differential methods.
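A differential method of the kind mentioned above can be sketched as frame subtraction on two grayscale frames; the threshold value and nested-list frame representation are assumptions for illustration:

```python
def moving_mask(prev, curr, threshold=25):
    """Frame-subtraction motion detection: mark pixels whose intensity
    changed by more than `threshold` between consecutive grayscale
    frames (given as nested lists), yielding a binary motion mask."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(rp, rc)]
        for rp, rc in zip(prev, curr)
    ]
```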
  • the server determines a set of characteristics of the object.
  • the set of characteristics can include a size of the object, a shape of the object, an orientation of the object, a perimeter of the object, a volume of the object, a speed of the object, an acceleration of the object, a moving direction of the object, a classification of the object, or others.
  • the server can determine the set of characteristics through trained neural networks, or can use different templates based on which certain signs can be illuminated.
  • the server generates a set of coordinates based on the set of characteristics.
  • the set of coordinates can include an X-axis component, a Y-axis component, a Z-axis component, or others.
  • the set of coordinates can include an addressable component, such as an array element corresponding to a specified portion of a defined area.
  • the server generates the set of coordinates when the object is identified in the video, as per block 306 .
  • the server sends the set of coordinates to a laser projector, such as the laser projector 106 or 206 , which can be over the network 212 .
  • the laser projector projects a laser beam based on the set of coordinates.
  • the laser projector can move, such as rotate, tilt, or travel, based on the set of coordinates and then project the laser beam.
  • the laser projector can create a vector image using a trigonometric function, which allows the laser projector to transform the set of coordinates into a set of points based on which the laser beam will move in a pattern, such as in a line, a triangle, a rectangle, a polygon, or another shape, whether open or closed, to illustrate the vector image, such as via using the set of coordinates to represent peaks or vertices.
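The blocks 302-314 above, together with the trigonometric vector-image transformation, can be sketched as a minimal pipeline. All names below are hypothetical, and the detector, characterizer, and projector interface are stand-ins for whatever implementations a deployment actually uses.

```python
import math

def coordinates_to_polygon(center, radius, vertices):
    """Transform a center coordinate into polygon vertices using
    trigonometric functions, as a laser projector might when tracing
    a closed shape around a detected object."""
    cx, cy = center
    points = []
    for i in range(vertices):
        angle = 2 * math.pi * i / vertices
        points.append((cx + radius * math.cos(angle),
                       cy + radius * math.sin(angle)))
    return points

def run_pipeline(frames, detect, characterize, to_coordinates, project):
    """Blocks 302-314: capture, detect, characterize, generate
    coordinates, send, and project, one frame at a time."""
    for frame in frames:                 # blocks 302/304: video received
        obj = detect(frame)              # block 306: identify the object
        if obj is None:
            continue
        traits = characterize(obj)       # block 308: size, speed, etc.
        coords = to_coordinates(traits)  # block 310: X/Y(/Z) coordinates
        project(coords)                  # blocks 312/314: beam output
```

A real system would run this loop continuously over the live stream; the per-frame structure is only one way to organize the blocks.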
  • FIG. 4 shows a screenshot of an embodiment of an image-based light output according to this disclosure.
  • the system 100 or 200 can dynamically indicate, whether on a single or cumulative basis, an object, a zone, or a border, such as via highlighting, marking, tracing, outlining, or others, as selected via the server 104 or 204 , based on a set of criteria, in real-time, as the object is optically captured via the video camera 102 or the camera 202 .
  • the laser projector 106 or 206 can trace the object, thereby creating a clear delineation or border, which is easily recognizable by a human bystander.
  • the server 104 or 204 can identify, classify, and make a decision about an appearance of a situation, which necessitates a supplemental illumination via the laser projector 106 or 206 of a border of a zone or an identified stationary or moving object.
  • the server 104 or 204 can be configured to classify an object and forecast a path of the object, such as via generating the set of coordinates 110 or 210 based on a size of the object, as determined via the server 104 or 204 .
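The path forecasting mentioned above can be approximated, for example, by extrapolating the average velocity observed over the object's track. The function below is an illustrative sketch under that assumption, not a method prescribed by this disclosure.

```python
def forecast_path(positions, steps):
    """Forecast future (x, y) positions by extrapolating the average
    per-frame velocity over the observed track, a simple stand-in for
    the server's projected-path determination."""
    if len(positions) < 2:
        return [positions[-1]] * steps if positions else []
    (x0, y0), (xn, yn) = positions[0], positions[-1]
    n = len(positions) - 1
    vx, vy = (xn - x0) / n, (yn - y0) / n  # average per-frame velocity
    return [(xn + vx * k, yn + vy * k) for k in range(1, steps + 1)]
```

The forecast positions could then feed the set of coordinates 110 or 210 so the beam leads, rather than trails, a moving object.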
  • the laser projector 106 or 206 can formulate a projection of an image on a surface based on the set of coordinates 110 or 210 or other relevant data received from the server 104 or 204 .
  • this disclosure can be applied to a situation where there is a need for a user, such as a human, to clearly recognize a border of an area or an object to focus on or pay attention to, because the laser projector 106 or 206 enables a formation of a contrast image to get attention from the user, raise awareness or vigilance, and precisely determine a location of the object under poor visibility conditions, such as a bad weather environment, a dusty environment, a polluted environment, a poorly illuminated environment, at nighttime, or others.
  • the laser beam can assist a user when a set of signs or surface markers breaks, fades, or gets erased by supplementing or replacing the set of signs or surface markers with the laser beam.
  • the laser beam, as projected via the laser projector 106 or 206 , can be easier for the user to recognize or see, such as under a poor lighting condition, a bad weather environment, a dusty environment, or others. Resultantly, the user has an easier time identifying where a first zone ends and a second zone begins or vice versa.
  • the laser beam, as projected via the laser projector 106 or 206 , can reduce the risk of a human user becoming habituated to the set of signs or markers over time and ignoring the set of signs or markers, by providing a non-familiar visual demarcation, thereby minimizing an inefficient use of resources or a dangerous situation. Furthermore, the laser beam, as projected via the laser projector 106 or 206 , can notify the user of a movement of an object that can cause an injury or a damage upon impact.
  • the laser beam can aid a vehicular driver in being notified of a presence of an object in a vehicular path, especially in a poorly illuminated environment, in a bad weather environment, or at nighttime.
  • the laser beam as projected via the laser projector 106 or 206 , can aid the vehicular driver in being notified of the presence of the object when the vehicular path is not regulated or unmarked.
  • a screenshot 400 depicts a marked crosswalk 402 where a human user 408 is walking in a rectilinear path.
  • the server 104 or 204 extracts the screenshot 400 from the video stream 108 or 208 , as explained above.
  • the server 104 or 204 identifies the human user 408 in the screenshot 400 via a computer vision technique, such as an object recognition technique, and positions a bounding box 406 , which can be of any open or closed shape, such as a U-shape, a D-shape, a square shape, a rectangular shape, or others, such that the bounding box 406 encloses the human user 408 and the human user 408 can be computationally tracked thereby.
  • the server 104 or 204 then performs various video analytics on the video stream 108 or 208 , inclusive of the human user 408 within the bounding box 406 , such that the set of coordinates 110 or 210 for the human user 408 in the video stream 108 or 208 is generated, such as a set of object coordinates.
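Generating the set of coordinates 110 or 210 implies mapping image-pixel positions, such as the bounding box 406, into the projector's addressable space. A simple linear scaling is sketched below under the assumption of an aligned camera and projector; a calibrated deployment would more likely use a homography. The function name is hypothetical.

```python
def pixels_to_projector(bbox, frame_size, projector_range):
    """Map a bounding box given in image pixels (x, y, w, h) into the
    laser projector's coordinate range via linear scaling."""
    x, y, w, h = bbox
    fw, fh = frame_size
    pw, ph = projector_range
    sx, sy = pw / fw, ph / fh  # per-axis pixel-to-projector scale
    return (x * sx, y * sy, w * sx, h * sy)
```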
  • the server 104 or 204 then sends the set of coordinates 110 or 210 to the laser projector 106 or 206 such that the laser projector 106 or 206 can project the laser beam (at least two red laser beams are shown in FIG. 4 , although fewer or more are possible) onto the marked crosswalk 402 in proximity of the human user 408 . For example, as shown in FIG. 4 , the human user 408 is positioned between the two red laser beams, whether projected via a single or multiple laser projectors 106 or 206 , and the two red laser beams are illuminating the marked crosswalk 402 along a plurality of lateral edges thereof, along the rectilinear path of the human user 408 .
  • the laser projector 106 or 206 outputs another laser beam, which can correspond to the bounding box 406 , in order to provide a dynamic tracing around the human user 408 , such as via the laser beam tracing the bounding box 406 .
  • the video stream 108 or 208 which sources the screenshot 400 and is displayed on a display of a client in communication with the server 104 or 204 , while the human user 408 is walking in the rectilinear path on the marked crosswalk 402 , can be augmented by depicting, such as via overlaying or insertion, a marking over or in the video stream 108 or 208 , with the marking corresponding to a projected path of the laser beam based on the set of coordinates.
  • the display of the client displays, such as via overlaying or insertion, at least two red lines corresponding to at least two projected paths of the at least two laser beams, which can be displayed as the human user 408 is walking in the rectilinear path on the marked crosswalk 402 and the at least two laser beams are beaming onto the marked crosswalk 402 , as shown in FIG. 4 .
  • the display of the client displays, such as via overlaying or insertion, a green shape, whether closed or open, corresponding to a projected trace of the laser beam, which can be displayed as the human user 408 is walking in the rectilinear path on the marked crosswalk 402 and the laser beam is tracing or outlining the human user 408 , as shown in FIG. 4 .
  • the at least two red lines can be non-red, such as one can be green and another one can be orange. Also, note that the at least two red lines can be different from or identical to each other in color, shape, size, or other characteristics. Further, note that the at least two red lines can be displayed on the display, while being identical to or different from what is projected by the laser projectors 106 or 206 in color, size, shape, or other characteristics. For example, the display can display three lines that are yellow in color and broken in structure, while the laser projectors 106 or 206 project three lines that are red in color and solid in structure. Likewise, the green shape can vary in color, size, shape, and other characteristics.
  • the laser beam can notify the vehicular driver about the human user 408 on the marked crosswalk 402 , while also being visibly forecasted on the display of the client, i.e., the laser beam providing a dynamic tracing around the human user 408 , such as via the laser beam tracing the bounding box 406 , or the laser beam illuminating the marked crosswalk 402 along lateral edges thereof (vehicular driver notice), and the display depicting the marking in the video stream 108 or 208 corresponding to the projected path of the laser beam, whether via dynamic or static illumination (client notice).
  • the laser beam can help a user to detect a suspicious object and notify at least some of the people or a security personnel of the suspicious object.
  • a video management system (VMS), which has the camera 102 or 202 , the server 104 or 204 , and the laser projector 106 or 206 , can be configured to perform situational analytics and analyze a large amount of information.
  • the VMS can be configured to request the laser projector 106 or 206 to point and thereby localize the suspicious object for at least some of the people or the security personnel or a sensor, which shortens a search for the suspicious object in large areas.
  • the client such as a workstation, a laptop, a tablet, or others, can communicate with the server 104 or 204 and set an action setting, such as an alarm output, a communication message output, a control of a physical object, such as a gate, an engine, a motor, a vehicle, or others, when the object 214 in the video stream 108 or 208 is computationally detected, whether through image processing or a sensor, to cross over or pass a line or a demarcation, inclusive of entry or exit, formed by the laser beam of the laser projector 106 or 206 .
  • the client can save the action setting onto the server 104 or 204 .
  • the action setting can include the server 104 or 204 requesting that an audio, vibratory, or light alarm device be activated, whether such device is digital or analog or local to or remote from the object 214 or the client.
  • the audio alarm device can include a bell, a siren, or a horn.
  • the action setting can include the server 104 or 204 sending or requesting that a text, email, social networking, or chat message be sent to a predefined or dynamically determined phone number or email address, which can be associated with the object 214 , based on recognition and identification of the object 214 against a database of objects, which can be relational, in-memory, or others.
  • the server 104 or 204 can request that an action in accordance with the action setting can be taken, such as an alarm message is generated and sent or an alarm device, such as a bell, a siren, or a horn, is activated.
  • the alarm device can output in other ways, such as via an illumination, a vibration, or others, whether analog or digital.
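The crossing trigger above can be sketched with a 2-D side-of-line test, where `on_cross` stands in for whichever action setting (an alarm, a message, a device control) the client configured. The names and the geometry helper are hypothetical illustrations.

```python
def side_of_line(p, a, b):
    """Signed cross product: which side of the directed line a->b the
    point p lies on (positive one side, negative the other)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def monitor_crossing(track, line_a, line_b, on_cross):
    """Fire the configured action when consecutive object positions
    fall on opposite sides of the laser-beam demarcation line."""
    prev = None
    for p in track:
        side = side_of_line(p, line_a, line_b)
        if prev is not None and side * prev < 0:
            on_cross(p)  # e.g., activate a bell, a siren, or a horn
        prev = side
```

A sign change between consecutive positions is what distinguishes a crossing (entry or exit) from mere proximity to the projected line.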
  • Various embodiments of the present disclosure may be implemented in a data processing system suitable for storing and/or executing program code that includes at least one processor coupled directly or indirectly to memory elements through a system bus.
  • the memory elements include, for instance, local memory employed during actual execution of the program code, bulk storage, and cache memory which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • I/O devices can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the available types of network adapters.
  • the present disclosure may be embodied in a system, a method, and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • a code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements.
  • a code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents.
  • Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, among others.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a LAN or a WAN, or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Words such as “then,” “next,” etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods.
  • process flow diagrams may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently.
  • the order of the operations may be re-arranged.
  • a process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
  • its termination may correspond to a return of the function to the calling function or the main function.
  • first, second, etc. can be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not necessarily be limited by such terms. These terms are used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the teachings of the present disclosure.
  • relative terms such as “below,” “lower,” “above,” and “upper” can be used herein to describe one element's relationship to another element as illustrated in the accompanying drawings. Such relative terms are intended to encompass different orientations of illustrated technologies in addition to the orientation depicted in the accompanying drawings. For example, if a device in the accompanying drawings were turned over, then the elements described as being on the “lower” side of other elements would then be oriented on “upper” sides of the other elements. Similarly, if the device in one of the figures were turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. Therefore, the example terms “below” and “lower” can encompass both an orientation of above and below.
  • the term “about” and/or “substantially” refers to a +/−10% variation from the nominal value/term. Such variation is always included in any given value/term herein.

Abstract

Broadly, this disclosure discloses various computing technologies that include a processor in communication with an imaging device and with an illumination device. The processor receives a set of imaging data from the imaging device such that the processor is able to identify an object within the set of imaging data and determine a set of coordinates. The processor then sends the set of coordinates to the illumination device such that the illumination device can output an illumination based on the set of coordinates.

Description

    CROSS-REFERENCE TO RELATED PATENT APPLICATIONS
  • This patent application is a continuation of U.S. patent application Ser. No. 16/246,059, filed on Jan. 11, 2019, which claims a benefit of priority to U.S. Provisional Patent Application Ser. No. 62/616,771, filed on Jan. 12, 2018, which are herein incorporated by reference in their entirety for all purposes.
  • TECHNICAL FIELD
  • Generally, this disclosure relates to image processing. More particularly, this disclosure relates to image-based light output.
  • BACKGROUND
  • In an industrial setting, whether indoors or outdoors, such as a warehouse, a factory, a dock, an energy plant, or others, a user, whether a human or a machine, may need to know where a first zone ends and a second zone begins or vice versa. For example, the first zone can include a safe zone, a safety equipment required zone, or others. Likewise, the second zone can include a hazardous zone, a no mobile device zone, or others. As such, the industrial setting hosts a set of signs or surface markers, inclusive of a paint or a tape, such as a floor sign or marker, a wall sign or marker, a ceiling sign or marker, or others, that notify the user where the first zone ends and the second zone begins or vice versa. However, the set of signs or surface markers often breaks, fades, or gets erased, whether chemically, such as due to chemicals present in air, or mechanically, such as via a clothing item, a shoe sole, a tire, a continuous track, or others. Similarly, the set of signs or markers is often difficult to recognize or see, such as under a poor lighting condition, a bad weather environment, a dusty environment, or others. Resultantly, the user has a difficult time identifying where the first zone ends and the second zone begins or vice versa. Furthermore, in the industrial setting, since the set of signs or markers is constantly present, a human user can get used to the set of signs or markers with time and thereby ignore the set of signs or markers. As such, this state of being can lead to an inefficient use of resources or a dangerous situation. Furthermore, in the industrial setting, there can arise a need to notify the user of a movement of an object, whether dangerous or non-dangerous, that can cause an injury or a damage upon impact, whether physical or non-physical, whether with the user, another user, or another object. These technical problems become even more acute cumulatively.
  • In a transportation setting, whether indoors or outdoors, such as on a road, a rail track, or others, a vehicular driver, whether a human or a machine, may need to be notified of a presence of an object in a vehicular path, especially in a poorly illuminated environment, such as a tunnel or an overpass, in a bad weather environment, or at nighttime. For example, the object can include an animate object, such as a human or an animal, or an inanimate object, such as a trash item, a pile of snow, a fallen tree, a crate, or others. Likewise, when the vehicular path is not regulated or unmarked, the vehicular driver may be even further in need of being notified of the presence of the object. These technical problems become even more acute cumulatively.
  • In a public space setting having a large number of people, whether indoors or outdoors, such as an airport, a train station, a bus station, a park, a museum, a mall, a school, a stadium, a concert hall, or others, a user, whether a human or a machine, may need to detect a suspicious object and notify at least some of the people or a security personnel of the suspicious object. For example, the suspicious object can include an unattended bag/suitcase/container, a suspicious human movement, a suspicious vehicular movement, or others. These technical problems become even more acute cumulatively.
  • SUMMARY
  • Broadly, this disclosure discloses various computing technologies that address various problems identified above. These computing technologies include a processor in communication with an imaging device, such as a video camera, and with an illumination device, such as a laser projector. The processor receives a set of imaging data from the imaging device such that the processor is able to identify an object within the set of imaging data and determine a set of coordinates. The processor then sends the set of coordinates to the illumination device such that the illumination device can output an illumination based on the set of coordinates.
  • In an embodiment, there is provided a method comprising: receiving, by a server, a video stream from a camera, wherein the video stream depicts an object that is moving along a current path; detecting, by the server, the object in the video stream; determining, by the server, a projected path of the object based on the current path; generating, by the server, a set of coordinates based on the projected path; sending, by the server, the set of coordinates to a laser projector such that the laser projector can project a laser beam based on the set of coordinates as the object is moving in the projected path.
  • In an embodiment, there is provided a system comprising: a light source; and a server programmed to: receive a video stream from a camera, wherein the video stream depicts an object that is moving along a current path, detect the object in the video stream, determine a projected path of the object based on the current path, generate a set of coordinates based on the projected path, and send the set of coordinates to the light source such that the light source can source a light based on the set of coordinates as the object is moving in the projected path.
  • In an embodiment, there is provided a method comprising: receiving, by a server, a video stream from a camera, wherein the video stream depicts an object that is moving along a current path; detecting, by the server, the object in the video stream; determining, by the server, a projected path of the object based on the current path; generating, by the server, a set of coordinates based on the projected path; sending, by the server, the set of coordinates to a light source such that the light source can source a light based on the set of coordinates as the object is moving in the projected path.
  • In an embodiment, there is provided a method comprising: receiving, by a processor, a set of imaging data from an imaging device, wherein the set of imaging data depicts an object; identifying, by the processor, the object in the set of imaging data; generating, by the processor, a set of coordinates based on the object; and sending, by the processor, the set of coordinates to an illumination device such that the illumination device can output an illumination based on the set of coordinates.
  • DESCRIPTION OF DRAWINGS
  • FIG. 1 shows a schematic diagram of an embodiment of a system for an image-based light output according to this disclosure.
  • FIG. 2 shows a schematic diagram of an embodiment of a system for an image-based light output according to this disclosure.
  • FIG. 3 shows a flowchart of an embodiment of a process for an image-based light output according to this disclosure.
  • FIG. 4 shows a screenshot of an embodiment of an image-based light output according to this disclosure.
  • DETAILED DESCRIPTION
  • Broadly, this disclosure discloses various computing technologies that include a processor in communication with an imaging device, such as a video camera, and with an illumination device, such as a laser projector. The processor receives a set of imaging data from the imaging device such that the processor is able to identify an object within the set of imaging data and determine a set of coordinates. The processor then sends the set of coordinates to the illumination device such that the illumination device can output an illumination based on the set of coordinates. Note that although the imaging device can capture optically, these computing technologies can be applied to non-optical capture as well, such as via an ultrasound machine, an X-ray machine, a magnetic resonance imaging (MRI) machine, a radar, a LIDAR, a laser, a heat sensor, or any other form of image formation through any input means or modality. Likewise, note that although the illumination device can be laser-based, these computing technologies can be applied to non-laser illumination as well, such as via a flashlight, a spotlight, a searchlight, a flash illumination device, a lamp projector, a light emitting diode (LED), an LED bulb, an LED lamp, an incandescent bulb or lamp, a halogen bulb or lamp, a gas discharge bulb or lamp, or any other form of light output, whether in an electromagnetic spectrum visible or invisible to a human eye, through any output means or modality.
  • This disclosure is now described more fully with reference to FIGS. 1-4, in which some embodiments of this disclosure are shown. This disclosure may, however, be embodied in many different forms and should not be construed as necessarily being limited to the embodiments disclosed herein. Rather, these embodiments are provided so that this disclosure is thorough and complete, and fully conveys various concepts of this disclosure to skilled artisans.
  • Various terminology used herein can imply direct or indirect, full or partial, temporary or permanent, action or inaction. For example, when an element is referred to as being “on,” “connected” or “coupled” to another element, then the element can be directly on, connected or coupled to the other element and/or intervening elements can be present, including indirect and/or direct variants. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present.
  • The terminology used herein is for describing particular example embodiments and is not intended to be necessarily limiting of the present disclosure. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, as used herein, the term “a” and/or “an” shall mean “one or more,” even though the phrase “one or more” is also used herein. The terms “comprises,” “includes” and/or “comprising,” “including” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence and/or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, when the present disclosure states herein that something is “based on” something else, then such statement refers to a basis which may be based on one or more other things as well. In other words, unless expressly indicated otherwise, as used herein “based on” inclusively means “based at least in part on” or “based at least partially on.”
  • As used herein, a term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances.
  • FIG. 1 shows a schematic diagram of an embodiment of a system for an image-based light output according to this disclosure. In particular, a system 100 includes a video camera 102, a server 104, and a laser projector 106, where the server 104 is in communication with the video camera 102 to receive a video stream 108 and the laser projector 106 to send a set of coordinates 110.
  • The video camera 102 includes an optical instrument for capturing and recording images, which may be stored locally, transmitted to another location, or both. The images may be individual still photographs or sequences of images constituting videos. The images can be analog or digital, whether color or grayscale. The video camera 102 can comprise any type of lens, such as convex, concave, fisheye, or others. The video camera 102 can comprise any focal length, such as wide angle or standard. The video camera 102 can be stationary or movable such that the video camera 102 is directed to a zone of observation.
  • The video camera 102 is powered via a mains electricity source, such as via a power cable or a data cable. In some embodiments, the video camera 102 is powered via at least one of an onboard rechargeable battery, such as a lithium-ion battery, which may be charged via an onboard renewable energy source, such as a photovoltaic cell, a wind turbine, or a hydropower turbine. The video camera 102 can be configured for geo-tagging, such as via modifying an image file with a geolocation or a set of coordinates data. The video camera 102 can include or be coupled to a microphone. The video camera 102 can be a pan-tilt-zoom (PTZ) camera, which can be a virtual PTZ camera. The video camera 102 can be configured for a name-stamping, date-stamping, or time-stamping the image, such as what date or time the image was captured or naming the image with a naming convention, as pre-programmed in advance. The video camera 102 can comprise a flash illumination output device or another artificial illumination output device. The video camera 102 can comprise an infrared illumination output device. Note that although the video camera 102 can capture optically, this disclosure supports non-optical capture as well, such as via an ultrasound machine, an X-ray machine, an MRI machine, a radar, a LIDAR, a laser, a heat sensor, or any other form of image formation through any input means or modality.
  • The server 104 runs an operating system, such as MacOS®, Windows®, or others, and an application, such as a video analytics application, on the operating system. For example, at least one of the video camera 102, the server 104, or the laser projector 106 can be components of a video management system (VMS). The server 104 can include or be coupled to an input device, such as a mouse, a keyboard, a camera (whether forward-facing or back-facing), an accelerometer, a touchscreen, a biometric reader, a clicker, a microphone, a stylus, or any other suitable input device. The server 104 can include or be coupled to an output device, such as a display, a speaker, a headphone, a printer, or any other suitable output device. In some embodiments, the input device and the output device can be embodied in one unit, such as a touch-enabled display, which can be haptic. In some embodiments, the server 104 hosts a software module configured to perform a set of situational video analytics on the video stream 108.
  • The server 104 can be in communication with a client that runs an operating system, such as MacOS®, Windows®, or others, and an application, such as an administrator application, on the operating system. For example, the client can be embodied as a workstation, a terminal, a laptop, a tablet, a smartphone, a vehicle, whether manned or unmanned, whether land, aerial, or marine, or others. The client can include or be coupled to an input device, such as a mouse, a keyboard, a camera (whether forward-facing or back-facing), an accelerometer, a touchscreen, a biometric reader, a clicker, a microphone, a stylus, or any other suitable input device. The client can include or be coupled to an output device, such as a display, a speaker, a headphone, a printer, or any other suitable output device. In some embodiments, the input device and the output device can be embodied in one unit, such as a touch-enabled display, which can be haptic. As such, the application can present an administrator graphical user interface (GUI) configured to depict a page, where the page can include a visual element, such as a menu, enabling a control of an aspect of the system 100, such as any hardware, software, or any other logic or functionality. For example, the client can be configured to control at least one of the video camera 102, the server 104, or the laser projector 106. For example, the client can receive a set of imagery, whether photos or videos, from the camera 102, whether from the server 104 or the camera 102.
  • The laser projector 106 is configured to project a laser beam, such as a red beam, a blue beam, or others, such as to provide an optical guidance, whether in an electromagnetic spectrum visible or invisible to a human eye. For example, the laser projector 106 can project the laser beam without a diffraction grating. The laser projector 106 includes a housing that contains a laser source, a mirror, and other optical components. For example, the laser projector 106 can contain one laser light source for a single-color projection or three sources for a red, green, and blue (RGB) full color projection. For example, the laser projector 106 can include a laser diode configuration (direct injection), a solid state configuration (diode-pumped, frequency-doubled), a gas laser configuration, or others. For example, the laser projector 106 can provide a fast and stable projection with a high repetition rate of about 50 hertz (Hz) and a wide optical angle of about 80 degrees by about 80 degrees. For example, the laser beam can depict a dot, a line, whether solid or broken, a shape, whether open or closed, an image, a video, or others. For example, the laser projector 106 can create an image projection based on the set of coordinates 110 received from the server 104. The laser projector 106 can be stationary or movable. The laser projector 106 is powered via a mains electricity source, such as via a power cable or a data cable. In some embodiments, the laser projector 106 is powered via at least one of an onboard rechargeable battery, such as a lithium-ion battery, which may be charged via an onboard renewable energy source, such as a photovoltaic cell, a wind turbine, or a hydropower turbine. The laser projector 106 can include or be coupled to a microphone or the video camera 102, which can be embodied as a single unit. 
Note that although the laser projector 106 can project the laser beam, this disclosure supports non-laser illumination as well, such as via a flashlight, a spotlight, a searchlight, a flash illumination device, a lamp projector, an LED, an LED bulb, an LED lamp, an incandescent bulb or lamp, a halogen bulb or lamp, a gas discharge bulb or lamp, or any other form of light output, whether in an electromagnetic spectrum visible or invisible to a human eye, through any output means or modality.
  • Resultantly, the server 104 can process the video stream 108 to detect an event or an object, as depicted via the video stream 108, classify the event or the object, and act accordingly. For example, the server 104 can detect the event or the object, as disclosed in U.S. Pat. No. 9,609,197, which is herein incorporated by reference for at least these purposes and all other purposes relevant to this disclosure, as disclosed herein. Further for example, the server 104 can act through the laser projector 106 by sending the set of coordinates 110 to the laser projector 106 such that the laser projector 106 can visibly outline or visibly accompany the object that moves into an optical view of the video camera 102 or vice versa or a specified zone.
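For illustration, the object detection described above can be sketched as a minimal frame-differencing routine (a hypothetical simplification; the detection techniques incorporated by reference are more involved, and the function names and frame representation here are illustrative assumptions):

```python
def detect_motion(prev_frame, curr_frame, threshold=25):
    """Return pixel coordinates that changed between two grayscale frames.

    Frames are 2-D lists of 0-255 intensities; with this simple
    differential method, pixels whose absolute difference exceeds the
    threshold are treated as belonging to a moving object.
    """
    changed = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            if abs(c - p) > threshold:
                changed.append((x, y))
    return changed

def centroid(points):
    """Centroid of the changed pixels, used as the object's position."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A 4x4 frame where a bright blob moves one pixel to the right.
prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
prev[1][1] = 200
curr[1][2] = 200
pts = detect_motion(prev, curr)  # the blob's old and new pixel positions
```

In a real deployment, the changed-pixel set would be grouped into connected regions before computing a per-object centroid; this sketch assumes a single moving object.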
  • In one mode of operation, the server 104 receives the video stream 108 from the video camera 102, where the video stream 108 depicts an object, such as a human, that is moving along a current path, such as walking along a rectilinear plane. For example, when the video stream 108 is distorted, the server 104 can perform a dewarping process as disclosed in U.S. Pat. No. 9,547,883 or 9,609,197, both of which are incorporated by reference herein for all purposes. The server 104 processes the video stream 108 such that the server 104 detects the object in the video stream 108, determines a projected path of the object based on the current path, and generates the set of coordinates 110, such as for the object itself or for a specified zone, based on the projected path, which can be rectilinear, arcuate, sinusoidal, circular, oval, pentagonal, triangular, or others. The server 104 then sends the set of coordinates 110 to the laser projector 106 such that the laser projector 106 can project a laser beam based on the set of coordinates 110 as the object is moving in the projected path, whether tracing the object, illuminating the object, tracing another object in proximity of the object, illuminating another object in proximity of the object, delineating an area proximal or distal to the object, or other forms of light output. The laser projector 106 can move to project the laser beam based on the set of coordinates 110 as the object is moving in the projected path or the laser projector 106 can remain stationary to project the laser beam based on the set of coordinates 110 as the object is moving in the projected path. The laser beam can be in an electromagnetic spectrum visible to a human eye based on the set of coordinates 110 as the object is moving in the projected path. The laser projector 106 can project the laser beam in an image form based on the set of coordinates 110 as the object is moving in the projected path. 
The laser beam can be in a single color, as selected through an identifier by the server 104 based on processing the video stream 108, or a set of multiple colors, as selected through a set of identifiers by the server 104 based on processing the video stream 108. The server 104 can process the video stream 108 such that the server 104 is able to estimate a size of the object, generate a value corresponding to the size of the object, and send the value to the laser projector 106 such that the laser projector 106 can project the laser beam based on the set of coordinates 110 and the value as the object is moving in the projected path. Where visibility of the laser beam is desired, the server 104 can process the video stream 108 such that the server 104 is able to determine a first characteristic of a surface onto which the laser beam will be projected, such as a color of the surface or a texture of the surface, from the video stream 108. The server 104 then can determine a second characteristic of the laser projector 106, such as a color ability of the laser projector 106 or a visual pattern ability of the laser projector 106, based on the first characteristic, where the second characteristic is associated with the laser beam. Subsequently, the server 104 can send the second characteristic to the laser projector 106 such that the laser projector 106 can project the laser beam onto the surface based on the second characteristic, which would allow for suitable contrast or visibility of the laser beam, as the object is moving in the projected path.
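The determination of the projected path from the current path can be sketched as a linear extrapolation of recently observed object positions (a hypothetical sketch; the server may use any forecasting model, and the function name and constant-velocity assumption here are illustrative):

```python
def project_path(positions, steps=3):
    """Extrapolate future (x, y) coordinates from the last two observed
    positions, assuming roughly rectilinear motion at constant velocity.
    """
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0  # displacement per frame
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, steps + 1)]

# A human walking along a rectilinear path, one unit per frame.
observed = [(0, 0), (1, 0), (2, 0)]
coords = project_path(observed)  # a set of coordinates to send to the projector
```

A production system would typically smooth the observed positions (for example, with a Kalman filter) before extrapolating, so that one noisy detection does not swing the beam.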
  • FIG. 2 shows a schematic diagram of an embodiment of a system for an image-based light output according to this disclosure. In particular, a system 200 includes a camera 202, a server 204, a laser projector 206, and a network 212. The camera 202 can be configured as the camera 102. The server 204 can be configured as the server 104. The laser projector 206 can be configured as the laser projector 106.
  • The network 212 includes a plurality of nodes that allow for sharing of resources or information. The network 212 can be wired or wireless. For example, the network 212 can be a local area network (LAN), a wide area network (WAN), a cellular network, or others. The server 204 is in communication with the camera 202 over the network 212 such that the server 204 can receive a video stream 208 from the camera 202. The server 204 is in communication with the laser projector 206 over the network 212 such that the server 204 can send a set of coordinates 210 to the laser projector 206. The video stream 208 depicts an object 214, such as a human walking in a path, which can be rectilinear. Accordingly, the server 204 receives the video stream 208 from the video camera 202 over the network 212, where the video stream 208 depicts the object 214 moving along a current path. For example, when the video stream 208 is distorted, the server 204 can perform a dewarping process as disclosed in U.S. Pat. No. 9,547,883 or 9,609,197, both of which are incorporated by reference herein for all purposes. The server 204 processes the video stream 208 such that the server 204 detects the object 214 in the video stream 208, determines a projected path of the object 214 based on the current path, and generates the set of coordinates 210, such as for the object 214, based on the projected path. The server 204 then sends the set of coordinates 210 to the laser projector 206 such that the laser projector 206 can project a laser beam 218 based on the set of coordinates 210 as the object 214 is moving in the projected path and thereby form a trace 216 of the object 214. However, note that other forms of illumination are possible, whether illuminating the object 214, tracing another object in proximity of the object 214, illuminating another object in proximity of the object 214, delineating an area proximal or distal to the object 214, or other forms of light output. 
The laser projector 206 can move to project the laser beam 218 based on the set of coordinates 210 as the object 214 is moving in the projected path or the laser projector 206 can remain stationary to project the laser beam based on the set of coordinates 210 as the object 214 is moving in the projected path. The laser beam 218 can be in an electromagnetic spectrum visible to a human eye based on the set of coordinates 210 as the object 214 is moving in the projected path. The laser projector 206 can project the laser beam in an image form based on the set of coordinates 210 as the object 214 is moving in the projected path. The laser beam 218 can be in a single color, as selected through an identifier by the server 204 based on processing the video stream 208, or a set of multiple colors, as selected through a set of identifiers by the server 204 based on processing the video stream 208. The server 204 can process the video stream 208 such that the server 204 is able to estimate a size of the object 214, generate a value corresponding to the size of the object 214, and send the value to the laser projector 206 such that the laser projector 206 can project the laser beam 218 based on the set of coordinates 210 and the value as the object 214 is moving in the projected path. Where visibility of the laser beam 218 is desired, the server 204 can process the video stream 208 such that the server 204 is able to determine a first characteristic of a surface onto which the laser beam 218 will be projected, such as a color of the surface or a texture of the surface, from the video stream 208. The server 204 then can determine a second characteristic of the laser projector 206, such as a color ability of the laser projector 206 or a visual pattern ability of the laser projector 206, based on the first characteristic, where the second characteristic is associated with the laser beam 218. 
Subsequently, the server 204 can send the second characteristic to the laser projector 206 such that the laser projector 206 can project the laser beam 218 onto the surface, such as to form the trace 216, based on the second characteristic, which would allow for suitable contrast or visibility of the laser beam 218, as the object 214 is moving in the projected path.
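The determination of the second characteristic from the first can be sketched as a contrast-maximizing color choice (a hypothetical heuristic; the function name, the palette, and the squared-distance metric are illustrative assumptions, not a prescribed contrast model):

```python
def pick_beam_color(surface_rgb, available_colors):
    """Pick the available beam color with the largest squared RGB distance
    from the sampled surface color, so the projection stays visible.
    """
    def dist2(a, b):
        return sum((ca - cb) ** 2 for ca, cb in zip(a, b))
    return max(available_colors, key=lambda c: dist2(c, surface_rgb))

# An RGB projector choosing a beam color against a reddish surface.
palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # red, green, blue sources
surface = (200, 30, 60)  # first characteristic, sampled from the video stream
best = pick_beam_color(surface, palette)  # second characteristic sent onward
```

A perceptual color distance (for example, in a CIELAB-like space) would usually be preferable to raw RGB distance, at the cost of a conversion step.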
  • FIG. 3 shows a flowchart of an embodiment of a process for an image-based light output according to this disclosure. A process 300 can be performed via the system 100 or the system 200. The process 300 includes a set of blocks 302-314.
  • In block 302, a camera, such as the camera 102 or 202, captures a video of an object, such as the video stream 108 or 208 depicting the object 214.
  • In block 304, the camera sends the video to a server, such as the server 104 or 204, which can be over the network 212.
  • In block 306, the server identifies the object in the video, which can be via various computer vision techniques. For example, the server can identify the object in the video through various video-processing algorithms, which can include separating moving objects based on integral and differential methods.
  • In block 308, the server determines a set of characteristics of the object. For example, the set of characteristics can include a size of the object, a shape of the object, an orientation of the object, a perimeter of the object, a volume of the object, a speed of the object, an acceleration of the object, a moving direction of the object, a classification of the object, or others. For example, the server can determine the set of characteristics through trained neural networks, or through a set of templates specifying certain features to illuminate.
  • In block 310, the server generates a set of coordinates based on the set of characteristics. For example, the set of coordinates can include a X-axis component, a Y-axis component, a Z-axis component, or others. For example, the set of coordinates can include an addressable component, such as an array element corresponding to a specified portion of a defined area. For example, the server generates the set of coordinates when the object is identified in the video, as per block 306.
  • In block 312, the server sends the set of coordinates to a laser projector, such as the laser projector 106 or 206, which can be over the network 212.
  • In block 314, the laser projector projects a laser beam based on the set of coordinates. For example, the laser projector can move, such as rotate, tilt, or travel, based on the set of coordinates and then project the laser beam. For example, when the laser projector obtains the set of coordinates, the laser projector can create a vector image using a trigonometric function, which allows the laser projector to transform the set of coordinates into a set of points based on which the laser beam will move in a pattern, such as in a line, a triangle, a rectangle, a polygon, or another shape, whether open or closed, to illustrate the vector image, such as via using the set of coordinates to represent peaks or vertices.
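The trigonometric transformation of block 314 can be sketched as converting each ground-plane coordinate into pan and tilt angles for the beam-steering optics (a hypothetical geometry in which the projector is mounted at height h directly above the origin; the function names and mounting assumptions are illustrative):

```python
import math

def aim_angles(x, y, h=3.0):
    """Convert a ground-plane target (x, y) into (pan, tilt) angles in
    degrees for a projector mounted h meters above the origin.

    pan  : rotation about the vertical axis toward the target
    tilt : depression angle from horizontal down to the target
    """
    pan = math.degrees(math.atan2(y, x))
    ground_dist = math.hypot(x, y)
    tilt = math.degrees(math.atan2(h, ground_dist))
    return pan, tilt

def vector_image(vertices, h=3.0):
    """Transform a set of coordinates (shape vertices) into the sequence of
    angle pairs along which the beam sweeps to trace the vector image.
    """
    return [aim_angles(x, y, h) for x, y in vertices]

# Trace a triangle on the ground: the beam visits each vertex in turn.
angles = vector_image([(3, 0), (0, 3), (3, 3)])
```

Between consecutive vertices the beam would be swept quickly enough (for example, at the roughly 50 Hz repetition rate noted above) that the eye perceives a continuous outline.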
  • FIG. 4 shows a screenshot of an embodiment of an image-based light output according to this disclosure. As explained above, the system 100 or 200 can dynamically indicate, whether on a single or cumulative basis, an object, a zone, or a border, such as via highlighting, marking, tracing, outlining, or others, as selected via the server 104 or 204, based on a set of criteria, in real-time, as the object is optically captured via the video camera 102 or the camera 202. For example, the laser projector 106 or 206 can trace the object, thereby creating a clear delineation or border, which is easily recognizable by a human bystander. Likewise, the server 104 or 204 can identify, classify, and make a decision about an appearance of a situation, which necessitates a supplemental illumination via the laser projector 106 or 206 of a border of a zone or an identified stationary or moving object. For example, the server 104 or 204 can be configured to classify an object and forecast a path of the object, such as via generating the set of coordinates 110 or 210 based on a size of the object, as determined via the server 104 or 204. For example, the laser projector 106 or 206 can formulate a projection of an image on a surface based on the set of coordinates 110 or 210 or other relevant data received from the server 104 or 204. As such, this disclosure can be applied to a situation where there is a need for a user, such as a human, to clearly recognize a border of an area or an object to focus on or pay attention to, because the laser projector 106 or 206 enables a formation of a contrast image to get attention from the user, raise awareness or vigilance, or precisely determine a location of the object under poor visibility conditions, such as a bad weather environment, a dusty environment, a polluted environment, a poorly illuminated environment, nighttime, or others.
  • For example, in industrial setting embodiments, as explained above, the laser beam, whether visible or invisible, as projected via the laser projector 106 or 206, can assist a user when a set of signs or surface markers breaks, fades, or gets erased, by supplementing or replacing the set of signs or surface markers with the laser beam. Likewise, the laser beam, as projected via the laser projector 106 or 206, can be easier for the user to recognize or see, such as under a poor lighting condition, a bad weather environment, a dusty environment, or others. Resultantly, the user has an easier time identifying where a first zone ends and a second zone begins or vice versa. Furthermore, the laser beam, as projected via the laser projector 106 or 206, can prevent a human user from becoming habituated to the set of signs or markers over time and ignoring them, by providing a non-familiar visual demarcation, thereby minimizing an inefficient use of resources or a dangerous situation. Furthermore, the laser beam, as projected via the laser projector 106 or 206, can notify the user of a movement of an object that can cause an injury or a damage upon impact.
  • For example, in transportation setting embodiments, as explained above, the laser beam, whether visible or invisible, as projected via the laser projector 106 or 206, can aid a vehicular driver in being notified of a presence of an object in a vehicular path, especially in a poorly illuminated environment, in a bad weather environment, or at nighttime. Likewise, the laser beam, as projected via the laser projector 106 or 206, can aid the vehicular driver in being notified of the presence of the object when the vehicular path is not regulated or unmarked.
  • As shown in FIG. 4, a screenshot 400 depicts a marked crosswalk 402 where a human user 408 is walking in a rectilinear path. The server 104 or 204 extracts the screenshot 400 from the video stream 108 or 208, as explained above. The server 104 or 204 identifies the human user 408 in the screenshot 400 via a computer vision technique, such as an object recognition technique, and positions a bounding box 406, which can be of any open or closed shape, such as a U-shape, a D-shape, a square shape, a rectangular shape, or others, such that the bounding box 406 encloses the human user 408 and the human user 408 can be computationally tracked thereby. The server 104 or 204 then performs various video analytics on the video stream 108 or 208, inclusive of the human user 408 within the bounding box 406, such that the set of coordinates 110 or 210 for the human user 408 in the video stream 108 or 208 is generated, such as a set of object coordinates. The server 104 or 204 then sends the set of coordinates 110 or 210 to the laser projector 106 or 206 such that the laser projector 106 or 206 can project the laser beam (at least two red laser beams are shown in FIG. 4 although fewer or more are possible) onto the marked crosswalk 402 in proximity of the human user 408. For example, as shown in FIG. 4, the human user 408 is positioned between the two red laser beams, whether projected via a single or multiple laser projectors 106 or 206, and the two red laser beams are illuminating the marked crosswalk 402 along a plurality of lateral edges thereof, along the rectilinear path of the human user 408. Also, the laser projector 106 or 206 outputs another laser beam, which can correspond to the bounding box 406, in order to provide a dynamic tracing around the human user 408, such as via the laser beam tracing the bounding box 406.
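The FIG. 4 arrangement can be sketched as deriving, from the tracked bounding box, both the vertices of the dynamic trace and the two flanking beam lines along the crosswalk (a hypothetical coordinate scheme; the function names, the axis convention, and all numeric values are illustrative assumptions):

```python
def box_trace(cx, cy, w, h):
    """Vertices of a closed rectangular trace around an object centered at
    (cx, cy) with width w and height h -- the dynamic outline of FIG. 4.
    """
    return [(cx - w / 2, cy - h / 2), (cx + w / 2, cy - h / 2),
            (cx + w / 2, cy + h / 2), (cx - w / 2, cy + h / 2)]

def flanking_lines(cx, cy, half_width, length):
    """Two beam lines parallel to the walking direction (the y-axis here),
    offset laterally so the pedestrian walks between them.
    """
    left = ((cx - half_width, cy - length / 2), (cx - half_width, cy + length / 2))
    right = ((cx + half_width, cy - length / 2), (cx + half_width, cy + length / 2))
    return left, right

# A pedestrian tracked at (5, 2), roughly 1 unit wide and 2 units long.
trace = box_trace(5.0, 2.0, 1.0, 2.0)
left, right = flanking_lines(5.0, 2.0, 2.0, 6.0)
```

Both coordinate sets would then be converted to projector angles (as in the block 314 sketch) before being sent to the laser projector 106 or 206.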
  • Correspondingly, the video stream 108 or 208, which sources the screenshot 400 and is displayed on a display of a client in communication with the server 104 or 204, while the human user 408 is walking in the rectilinear path on the marked crosswalk 402, can be augmented by depicting, such as via overlaying or insertion, a marking over or in the video stream 108 or 208, with the marking corresponding to a projected path of the laser beam based on the set of coordinates. For example, as shown in the screenshot 400, the display of the client displays, such as via overlaying or insertion, at least two red lines corresponding to at least two projected paths of the at least two laser beams, which can be displayed as the human user 408 is walking in the rectilinear path on the marked crosswalk 402 and the at least two laser beams are beaming into the marked crosswalk 402, as shown in FIG. 4. Likewise, as shown in the screenshot 400, the display of the client displays, such as via overlaying or insertion, a green shape, whether closed or open, corresponding to a projected trace of the laser beam, which can be displayed as the human user 408 is walking in the rectilinear path on the marked crosswalk 402 and the laser beam is tracing or outlining the human user 408, as shown in FIG. 4.
  • Note that the at least two red lines can be non-red, such as one can be green and another one can be orange. Also, note that the at least two red lines can be different from or identical to each other in color, shape, size, or other characteristics. Further, note that the at least two red lines can be displayed on the display, while being identical to or different from what is projected by the laser projectors 106 or 206 in color, size, shape, or other characteristics. For example, the display can display three lines that are yellow in color and broken in structure, and the laser projectors 106 or 206 can project three lines that are red in color and solid in structure. Likewise, the green shape can vary in color, size, shape, and other characteristics.
  • Resultantly, the laser beam, whether visible or invisible, can notify the vehicular driver about the human user 408 on the marked crosswalk 402, while also being visibly forecasted on the display of the client, i.e., the laser beam providing a dynamic tracing around the human user 408, such as via the laser beam tracing the bounding box 406, or the laser beam illuminating the marked crosswalk 402 along lateral edges thereof (vehicular driver notice), and displaying the marking in the video stream 108 or 208 corresponding to the projected path of the laser beam, whether via dynamic or static illumination (client notice).
  • For example, in public space setting embodiments having a large number of people, as explained above, the laser beam, whether visible or invisible, as projected via the laser projector 106 or 206, can help a user to detect a suspicious object and notify at least some of the people or a security personnel of the suspicious object. Resultantly, a VMS system, which has the camera 102 or 202, the server 104 or 204, and the laser projector 106 or 206, can be configured to perform situational analytics and analyze a large amount of information. When the suspicious object is identified, the VMS can be configured to request the laser projector 106 or 206 to point at and thereby localize the suspicious object for at least some of the people or the security personnel or a sensor, which shortens a search for the suspicious object in large areas.
  • In some embodiments, the client, such as a workstation, a laptop, a tablet, or others, can communicate with the server 104 or 204 and set an action setting, such as an alarm output, a communication message output, a control of a physical object, such as a gate, an engine, a motor, a vehicle, or others, when the object 214 in the video stream 108 or 208 is computationally detected, whether through image processing or a sensor, to cross over or pass a line or a demarcation, inclusive of entry or exit, formed by the laser beam of the laser projector 106 or 206. The client can save the action setting onto the server 104 or 204. For example, the action setting can include the server 104 or 204 requesting that an audio, vibratory, or light alarm device be activated, whether such device is digital or analog or local to or remote from the object 214 or the client. For example, the audio alarm device can include a bell, a siren, or a horn. For example, the action setting can include the server 104 or 204 sending or requesting that a text, email, social networking, or chat message be sent to a predefined or dynamically determined phone number or email address, which can be associated with the object 214, based on recognition and identification of the object 214 against a database of objects, which can be relational, in-memory, or others. As such, using such processing, when the object 214 in the video stream 108 or 208 is detected to cross over or pass the line or the demarcation, inclusive of entry or exit, formed by the laser beam of the laser projector 106 or 206, then the server 104 or 204 can request that an action in accordance with the action setting be taken, such as an alarm message being generated and sent or an alarm device, such as a bell, a siren, or a horn, being activated. Note that the alarm device can output in other ways, such as via an illumination, a vibration, or others, whether analog or digital.
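The image-processing variant of the crossing detection can be sketched as a signed-side test: when consecutive object positions fall on opposite sides of the beam-formed line, the configured action fires (a hypothetical sketch; the function names, the alarm message, and the use of an infinite line rather than a bounded segment are illustrative assumptions):

```python
def side(line_a, line_b, p):
    """Sign of the cross product: which side of the line a->b the point p is on."""
    (ax, ay), (bx, by), (px, py) = line_a, line_b, p
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    return (cross > 0) - (cross < 0)

def crossed_line(prev_pos, curr_pos, line_a, line_b):
    """True when the object moved from one side of the beam line to the other."""
    s0 = side(line_a, line_b, prev_pos)
    s1 = side(line_a, line_b, curr_pos)
    return s0 != 0 and s1 != 0 and s0 != s1

def on_crossing(prev_pos, curr_pos, line_a, line_b):
    """Fire the configured action (here, an alarm message) on a crossing."""
    if crossed_line(prev_pos, curr_pos, line_a, line_b):
        return "ALARM: object crossed laser demarcation"
    return None

# A vertical beam line at x == 2; the tracked object walks through it.
msg = on_crossing((1, 1), (3, 1), (2, 0), (2, 5))
```

The same test distinguishes entry from exit by the sign pair (s0, s1), so a system could fire different actions for each direction of crossing.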
  • Various embodiments of the present disclosure may be implemented in a data processing system suitable for storing and/or executing program code that includes at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements include, for instance, local memory employed during actual execution of the program code, bulk storage, and cache memory which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • I/O devices (including, but not limited to, keyboards, displays, pointing devices, DASD, tape, CDs, DVDs, thumb drives and other memory media, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the available types of network adapters.
  • The present disclosure may be embodied in a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, among others. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a LAN or a WAN, or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
  • The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
  • Words such as “then,” “next,” etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Although process flow diagrams may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function or the main function.
  • Features or functionality described with respect to certain example embodiments may be combined and sub-combined in and/or with various other example embodiments. Also, different aspects and/or elements of example embodiments, as disclosed herein, may be combined and sub-combined in a similar manner as well. Further, some example embodiments, whether individually and/or collectively, may be components of a larger system, wherein other procedures may take precedence over and/or otherwise modify their application. Additionally, a number of steps may be required before, after, and/or concurrently with example embodiments, as disclosed herein. Note that any and/or all methods and/or processes, at least as disclosed herein, can be at least partially performed via at least one entity or actor in any manner.
  • Although the terms first, second, etc. can be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not necessarily be limited by such terms. These terms are used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the teachings of the present disclosure.
  • Furthermore, relative terms such as “below,” “lower,” “above,” and “upper” can be used herein to describe one element's relationship to another element as illustrated in the accompanying drawings. Such relative terms are intended to encompass different orientations of illustrated technologies in addition to the orientation depicted in the accompanying drawings. For example, if a device in the accompanying drawings were turned over, then the elements described as being on the “lower” side of other elements would then be oriented on “upper” sides of the other elements. Similarly, if the device in one of the figures were turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. Therefore, the example terms “below” and “lower” can encompass both an orientation of above and below.
  • Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and should not be interpreted in an idealized and/or overly formal sense unless expressly so defined herein.
  • As used herein, the terms “about” and/or “substantially” refer to a +/−10% variation from the nominal value or term. Such variation is always included in any given value provided herein.
  • If any disclosures are incorporated herein by reference and such disclosures conflict in part and/or in whole with the present disclosure, then to the extent of conflict, and/or broader disclosure, and/or broader definition of terms, the present disclosure controls. If such disclosures conflict in part and/or in whole with one another, then to the extent of conflict, the later-dated disclosure controls.
  • Although preferred embodiments have been depicted and described in detail herein, those skilled in the relevant art will appreciate that various modifications, additions, substitutions and the like can be made without departing from the spirit of the disclosure, and these are, therefore, considered to be within the scope of the disclosure, as defined in the following claims.

Claims (21)

1-30. (canceled)
31. A method comprising:
receiving, by a server, a video stream from a camera imaging a marked crosswalk over a road, wherein the video stream depicts a human crossing the road over the marked crosswalk;
detecting, by the server, the human in the video stream;
determining, by the server, a projected path of the human over the marked crosswalk based on a current path of the human in the video stream;
generating, by the server, a set of coordinates based on the projected path; and
sending, by the server, the set of coordinates to a light source such that the light source sources a light based on the set of coordinates as the human is moving in the projected path over the marked crosswalk, wherein the light accompanies the human crossing the road over the marked crosswalk and thereby visibly notifies a vehicular driver on the road of the human.
32. The method of claim 31, wherein the light traces the human over the marked crosswalk.
33. The method of claim 31, wherein the light illuminates the human over the marked crosswalk.
34. The method of claim 31, wherein the light source is a single light emitting diode sourcing the light.
35. The method of claim 31, wherein the light source is a plurality of light emitting diodes sourcing the light.
36. The method of claim 31, wherein the light source sources the light while the light source is stationary.
37. The method of claim 31, wherein the light is of a single color.
38. The method of claim 31, wherein the light is of a plurality of colors.
39. The method of claim 31, wherein the projected path is rectilinear.
40. The method of claim 31, further comprising:
performing, by the server, a classification of the human after the human is detected, wherein the projected path is determined based on the classification.
41. A system comprising:
a light source; and
a server programmed to:
receive a video stream from a camera imaging a marked crosswalk over a road, wherein the video stream depicts a human crossing the road over the marked crosswalk;
detect the human in the video stream;
determine a projected path of the human over the marked crosswalk based on a current path of the human in the video stream;
generate a set of coordinates based on the projected path; and
send the set of coordinates to the light source such that the light source sources a light based on the set of coordinates as the human is moving in the projected path over the marked crosswalk, wherein the light accompanies the human crossing the road over the marked crosswalk and thereby visibly notifies a vehicular driver on the road of the human.
42. The system of claim 41, wherein the light traces the human over the marked crosswalk.
43. The system of claim 41, wherein the light illuminates the human over the marked crosswalk.
44. The system of claim 41, wherein the light source is a single light emitting diode sourcing the light.
45. The system of claim 41, wherein the light source is a plurality of light emitting diodes sourcing the light.
46. The system of claim 41, wherein the light source sources the light while the light source is stationary.
47. The system of claim 41, wherein the light is of a single color.
48. The system of claim 41, wherein the light is of a plurality of colors.
49. The system of claim 41, wherein the projected path is rectilinear.
50. The system of claim 41, wherein the server is further programmed to:
perform a classification of the human after the human is detected, wherein the projected path is determined based on the classification.
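For illustration only (the claims above define the invention; every name and interface below is a hypothetical stand-in), the server-side flow recited in claims 31 and 41 — detect the human, extrapolate a projected path over the crosswalk, generate a set of coordinates, and send them to a light source — might be sketched as follows. A real deployment would use a computer-vision detector and a calibrated mapping from camera coordinates to the light source's coordinate system; here detection is stubbed and the mapping is the identity.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Detection:
    """Observed pedestrian positions in camera-frame coordinates, oldest first."""
    positions: List[Point]

def project_path(det: Detection, steps: int) -> List[Point]:
    """Extrapolate a rectilinear projected path from the last two observations
    (claims 39 and 49 recite that the projected path may be rectilinear)."""
    (x0, y0), (x1, y1) = det.positions[-2], det.positions[-1]
    dx, dy = x1 - x0, y1 - y0
    return [(x1 + dx * k, y1 + dy * k) for k in range(1, steps + 1)]

def to_light_coordinates(path: List[Point]) -> List[Point]:
    """Map camera-frame coordinates to light-source coordinates.
    Identity mapping here; a real system would apply a calibrated homography."""
    return list(path)

class LightSource:
    """Hypothetical stationary light source that aims light at given coordinates."""
    def __init__(self) -> None:
        self.received: List[Point] = []

    def source_light_at(self, coords: List[Point]) -> None:
        self.received.extend(coords)

# Server-side flow per claim 31: detect -> project -> generate -> send.
detections = Detection(positions=[(0.0, 0.0), (1.0, 0.5)])  # stubbed detector output
path = project_path(detections, steps=3)
coords = to_light_coordinates(path)
light = LightSource()
light.source_light_at(coords)
```

The light source then sources light along `coords` as the pedestrian moves, accompanying the crossing as recited in the claims; the straight-line extrapolation corresponds to the rectilinear-path dependent claims, and a curved or classification-conditioned model (claims 40 and 50) could replace `project_path` without changing the surrounding flow.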
US16/741,525 2018-01-12 2020-01-13 Systems and methods for image-based light output Abandoned US20200154047A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/741,525 US20200154047A1 (en) 2018-01-12 2020-01-13 Systems and methods for image-based light output

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201862616771P 2018-01-12 2018-01-12
US16/246,059 US10536635B2 (en) 2018-01-12 2019-01-11 Systems and methods for image-based light output
US16/741,525 US20200154047A1 (en) 2018-01-12 2020-01-13 Systems and methods for image-based light output

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/246,059 Continuation US10536635B2 (en) 2018-01-12 2019-01-11 Systems and methods for image-based light output

Publications (1)

Publication Number Publication Date
US20200154047A1 true US20200154047A1 (en) 2020-05-14

Family

ID=67214450

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/246,059 Active US10536635B2 (en) 2018-01-12 2019-01-11 Systems and methods for image-based light output
US16/741,525 Abandoned US20200154047A1 (en) 2018-01-12 2020-01-13 Systems and methods for image-based light output

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/246,059 Active US10536635B2 (en) 2018-01-12 2019-01-11 Systems and methods for image-based light output

Country Status (4)

Country Link
US (2) US10536635B2 (en)
EP (1) EP3738316A4 (en)
RU (1) RU2729196C1 (en)
WO (1) WO2019139959A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2759475C2 (en) * 2020-04-24 2021-11-15 Общество с ограниченной ответственностью «Смарт Си» System for ensuring safety of a pedestrian crossing and method for implemention of the system
EP4177172A1 (en) 2021-11-03 2023-05-10 Sony Group Corporation Illumination-based assistance during extravehicular activity

Family Cites Families (32)

Publication number Priority date Publication date Assignee Title
JP2000318513A (en) 1999-05-17 2000-11-21 Mitsubishi Electric Corp Obstacle detection device for vehicle
US6281806B1 (en) 2000-10-12 2001-08-28 Ford Global Technologies, Inc. Driver road hazard warning and illumination system
JP3908137B2 (en) * 2002-09-18 2007-04-25 株式会社日立製作所 Information display method and system
JP2005205950A (en) 2004-01-20 2005-08-04 Nissan Motor Co Ltd Night pedestrian notifying device
JP2006127055A (en) 2004-10-27 2006-05-18 Denso Corp Information presentation device for vehicle
EP1876829A4 (en) 2005-04-28 2010-06-09 Aisin Seiki Vehicle vicinity monitoring system
JP2006341713A (en) 2005-06-08 2006-12-21 Sumitomo Electric Ind Ltd System and method for changing irradiation direction of head lamp
JP4840203B2 (en) 2007-03-15 2011-12-21 マツダ株式会社 Vehicle driving support device
JP4928372B2 (en) 2007-07-12 2012-05-09 株式会社小糸製作所 Vehicle lighting device
US7914187B2 (en) 2007-07-12 2011-03-29 Magna Electronics Inc. Automatic lighting system with adaptive alignment function
JP2009036571A (en) * 2007-07-31 2009-02-19 Toshiba Corp Position measuring system utilizing visible light communication system, position measuring device, and position measuring method
US8423431B1 (en) * 2007-12-20 2013-04-16 Amazon Technologies, Inc. Light emission guidance
US8170739B2 (en) * 2008-06-20 2012-05-01 GM Global Technology Operations LLC Path generation algorithm for automated lane centering and lane changing control system
JPWO2010044204A1 (en) * 2008-10-15 2012-03-08 パナソニック株式会社 Light projection device
US8446521B2 (en) * 2009-11-16 2013-05-21 Honeywell International Inc. Distributed agile illumination system and method
JP2011170658A (en) * 2010-02-19 2011-09-01 Seiko Epson Corp Image forming apparatus
US9834153B2 (en) * 2011-04-25 2017-12-05 Magna Electronics Inc. Method and system for dynamically calibrating vehicular cameras
JP5942840B2 (en) * 2012-12-21 2016-06-29 ソニー株式会社 Display control system and recording medium
WO2014209328A1 (en) * 2013-06-27 2014-12-31 Intel Corporation Device for adaptive projection
RU2541505C2 (en) * 2013-07-09 2015-02-20 Открытое акционерное общество "Научно-исследовательский институт оптико-электронного приборостроения" (ОАО "НИИ ОЭП") Method of delivering laser radiation to moving object and apparatus therefor
DE202013007109U1 (en) 2013-08-09 2013-11-28 Gunter Tannhäuser Warning system for danger spots
US9349293B2 (en) * 2014-02-07 2016-05-24 Here Global B.V Method and apparatus for providing vehicle synchronization to facilitate a crossing
US9508137B2 (en) * 2014-05-02 2016-11-29 Cisco Technology, Inc. Automated patron guidance
DE102014011915A1 (en) * 2014-08-12 2016-02-18 Man Truck & Bus Ag A method of warning road users of potential danger areas caused by a vehicle performing or wishing to carry out a driving maneuver
US10122976B2 (en) * 2014-12-25 2018-11-06 Panasonic Intellectual Property Management Co., Ltd. Projection device for controlling a position of an image projected on a projection surface
US9969326B2 (en) 2016-02-22 2018-05-15 Uber Technologies, Inc. Intention signaling for an autonomous vehicle
WO2017165627A1 (en) * 2016-03-23 2017-09-28 Netradyne Inc. Advanced path prediction
US20170327035A1 (en) * 2016-05-10 2017-11-16 Ford Global Technologies, Llc Methods and systems for beyond-the-horizon threat indication for vehicles
US9609197B1 (en) 2016-08-19 2017-03-28 Intelligent Security Systems Corporation Systems and methods for dewarping images
US9547883B1 (en) 2016-08-19 2017-01-17 Intelligent Security Systems Corporation Systems and methods for dewarping images
US20180089749A1 (en) * 2016-09-29 2018-03-29 Wal-Mart Stores, Inc. Micro store location projector
JP6766720B2 (en) * 2017-03-27 2020-10-14 カシオ計算機株式会社 Image processing device, image processing method and image processing program

Also Published As

Publication number Publication date
EP3738316A1 (en) 2020-11-18
US10536635B2 (en) 2020-01-14
RU2729196C1 (en) 2020-08-05
WO2019139959A1 (en) 2019-07-18
EP3738316A4 (en) 2021-12-22
US20190222753A1 (en) 2019-07-18

Similar Documents

Publication Publication Date Title
CN112287960B (en) Automatic generation of ground truth data for training or retraining machine learning models
US11989861B2 (en) Deep learning-based real-time detection and correction of compromised sensors in autonomous machines
US11373067B2 (en) Parametric top-view representation of scenes
Møgelmose et al. Trajectory analysis and prediction for improved pedestrian safety: Integrated framework and evaluations
US11468285B1 (en) Analysis of objects of interest in sensor data using deep neural networks
US9104919B2 (en) Multi-cue object association
US10823877B2 (en) Devices, systems, and methods for under vehicle surveillance
US8761436B2 (en) Device for identifying and/or classifying movement patterns in an image sequence of a surveillance scene, method and computer program
KR101344034B1 (en) Prevention of accident system based on pedestrian recognition and method for controlling thereof
US11574462B1 (en) Data augmentation for detour path configuring
US20200154047A1 (en) Systems and methods for image-based light output
US20240005642A1 (en) Data Augmentation for Vehicle Control
Hosseinyalamdary et al. A Bayesian approach to traffic light detection and mapping
Oh et al. Monocular UAV localisation with deep learning and uncertainty propagation
Ghaziamin et al. A Privacy-Preserving Edge Computing Solution for Real-Time Passenger Counting at Bus Stops using Overhead Fisheye Camera
RU2744510C2 (en) System and the method of luminous flux formation based on images
Kanellakis et al. Open space attraction based navigation in dark tunnels for MAVs
US11702011B1 (en) Data augmentation for driver monitoring
Saika et al. Real-Time Vehicle and Lane Detection using Modified OverFeat CNN: A Comprehensive Study on Robustness and Performance in Autonomous Driving
US20210318221A1 (en) Airborne particulate density determination using standard user equipment
Bhusal Object detection and tracking in wide area surveillance using thermal imagery
Srikanteswara et al. Object detection and voice guidance for the visually impaired using a smart app
WO2021048765A1 (en) Scene content and attention system
Litoriya et al. Integrated and deep learning–based social surveillance system: a novel approach
Castagno et al. Realtime rooftop landing site identification and selection in urban city simulation

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION