
WO2020175845A1 - Display device and operation method thereof - Google Patents

Display device and operation method thereof

Info

Publication number
WO2020175845A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
node
display device
document
content item
Prior art date
Application number
PCT/KR2020/002399
Other languages
English (en)
French (fr)
Inventor
김정민
정영태
안광림
Original Assignee
LG Electronics Inc. (엘지전자 주식회사)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc.
Priority to US 17/428,798 (US11978448B2)
Publication of WO2020175845A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/90335Query processing
    • G06F16/90344Query processing by using string matching techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/906Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/4221Dedicated function buttons, e.g. for the control of an EPG, subtitles, aspect ratio, picture-in-picture or teletext
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42222Additional components integrated in the remote control device, e.g. timer, speaker, sensors for detecting position, direction or movement of the remote control, microphone or battery charging device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Definitions

  • This disclosure relates to display devices and, more specifically, to a display device for classifying web voice matching candidates.
  • Digital TV services using wired or wireless communication networks are becoming more common. Digital TV services can provide a variety of services that could not be provided by conventional analog broadcasting services.
  • For example, IPTV (Internet Protocol Television) and smart TV services, which are types of digital TV services, provide bidirectionality that allows the user to actively select the type of program to watch, the viewing time, and the like.
  • Based on this bidirectionality, IPTV and smart TV services can also provide a variety of additional services, such as Internet search, home shopping, and online games.
  • This disclosure relates to a display device capable of classifying clickable contents within a web application screen.
  • To this end, the display device performs primary classification for all nodes existing in the Document Object Model (DOM), and then performs secondary classification as to whether each node exists within the screen.
  • According to another embodiment, the display device starts monitoring DOM changes, clicks duplicate texts in order of priority, checks for DOM changes for a specific period of time, clicks the duplicate text of the next priority if no change occurs, and ends the DOM change monitoring when a DOM change occurs.
  • FIG. 1 is a block diagram showing the configuration of a display device according to an embodiment of the present disclosure.
  • FIG. 2 is a block diagram of a remote control device according to an embodiment of the present disclosure.
  • FIG. 3 shows an example of an actual configuration of a remote control device according to an embodiment of the present disclosure.
  • FIG. 4 shows an example of utilizing a remote control device according to an embodiment of the present disclosure.
  • FIG. 5 is a flowchart illustrating a method of operating a display device according to an embodiment of the present disclosure.
  • FIG. 6 is an example of an execution screen of a web application according to an embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating an example of an HTML document obtained through the Document Object Model.
  • FIG. 8 is a flowchart illustrating step S505 of FIG. 5 in detail.
  • FIG. 9 shows service screens in which alt-attribute text is applied to images.
  • FIG. 10 is a diagram illustrating an example in which a placeholder property value exists on an input node, through a web browser screen.
  • FIG. 11 is a diagram for explaining a configuration of an application manager according to an embodiment of the present disclosure.
  • FIGS. 12 and 13 illustrate a user scenario according to an embodiment of the present disclosure.
  • FIG. 14 is a flowchart for explaining a method of handling the case where a plurality of texts identical to the text of the voice uttered by the user exist.
  • FIG. 15 is a diagram illustrating a user scenario for processing when duplicate texts are included in the execution screen of a web application.
  • FIG. 16 is a diagram for explaining another user scenario for processing when duplicate texts are included in the execution screen of the web application.
  • FIG. 17 shows the configuration of an application manager according to another embodiment of the present disclosure.
  • The display device according to an embodiment of the present disclosure is, for example, an intelligent display device in which a computer support function is added to a broadcast receiving function; while being faithful to the broadcast receiving function, an Internet function and the like are added, so that it can have an interface more convenient to use than a handwriting input device, a touch screen, or a spatial remote control.
  • With the support of a wired or wireless Internet function, it can be connected to the Internet and a computer to perform functions such as e-mail, web browsing, banking, or games.
  • A standardized general-purpose OS can be used for these various functions.
  • Accordingly, since various applications can be freely added to or deleted from, for example, a general-purpose OS kernel, the display device described in this disclosure can perform various user-friendly functions.
  • More specifically, the display device can be, for example, a network TV, HBBTV, smart TV, LED TV, or OLED TV, and in some cases can also be applied to a smartphone.
  • FIG. 1 is a block diagram showing a configuration of a display device according to an embodiment of the present disclosure.
  • Referring to FIG. 1, the display device 100 can include a broadcast receiving unit 130, an external device interface unit 135, a storage unit 140, a user input interface unit 150, a control unit 170, a wireless communication unit 173, a display unit 180, an audio output unit 185, and a power supply unit 190.
  • the broadcast receiving unit 130 may include a tuner 131, a demodulation unit 132, and a network interface unit 133.
  • the tuner 131 can tune into a specific broadcast channel according to the channel tuning command.
  • the tuner 131 can receive a broadcast signal for the tuned specific broadcast channel.
  • The demodulation unit 132 can separate the received broadcast signal into a video signal, an audio signal, and a data signal related to a broadcast program, and can restore the separated video signal, audio signal, and data signal into an outputtable form.
  • the external device interface unit 135 may receive an application or an application list in an adjacent external device and transmit it to the control unit 170 or the storage unit 140.
  • the external device interface unit 135 may provide a connection path between the display device 100 and the external device.
  • The external device interface unit 135 can receive one or more of video and audio output from an external device connected to the display device 100 wirelessly or by wire, and transmit it to the control unit 170.
  • the external device interface unit 135 may include a plurality of external input terminals.
  • The plurality of external input terminals can include an RGB terminal, one or more HDMI (High Definition Multimedia Interface) terminals, and a component terminal.
  • the audio signal of an external device input through the external device interface unit 135 may be output through the audio output unit 185.
  • External devices that can be connected to the external device interface unit 135 can be any of a set-top box, a Blu-ray player, a DVD player, a game console, a sound bar, a smartphone, a PC, a USB memory, or a home theater, but this is merely an example.
  • the network interface unit 133 may provide an interface for connecting the display device 100 to a wired/wireless network including an Internet network.
  • The network interface unit 133 can transmit or receive data with other users or other electronic devices through the connected network or another network linked to the connected network.
  • In addition, the network interface unit 133 can access a predetermined web page through the connected network or another network linked to the connected network, and transmit or receive data with the corresponding server.
  • The network interface unit 133 can receive content or data provided by a content provider or a network operator. That is, the network interface unit 133 can receive content, such as movies, advertisements, games, VOD, and broadcast signals, and related information, provided from a content provider or a network provider through a network.
  • the network interface unit 133 may receive update information and update files of firmware provided by the network operator, and transmit data to the Internet or content provider or network operator.
  • The network interface unit 133 can select and receive a desired application from among applications open to the public through the network.
  • The storage unit 140 can store a program for each signal processing and control in the control unit 170, and can store signal-processed video, audio, or data signals.
  • The storage unit 140 can also perform a function for temporary storage of video, audio, or data signals input from the external device interface unit 135 or the network interface unit 133, and can store information about a predetermined image through a channel memory function.
  • The storage unit 140 can store an application or an application list input from the external device interface unit 135 or the network interface unit 133.
  • the display device 100 can play content files (movie files, still image files, music files, document files, application files, etc.) stored in the storage unit 140 and provide them to the user.
  • The user input interface unit 150 can transmit a signal input by the user to the control unit 170, or transmit a signal from the control unit 170 to the user.
  • For example, the user input interface unit 150 can receive and process control signals, such as power on/off, channel selection, and screen setting, from the remote control device 200, or process a control signal from the control unit 170 to be transmitted to the remote control device 200, according to various communication methods such as Bluetooth, Ultra Wideband (UWB), ZigBee, RF (Radio Frequency) communication, or infrared (IR) communication.
  • The user input interface unit 150 can also transmit control signals input from local keys (not shown), such as a power key, a channel key, a volume key, and a setting value, to the control unit 170.
  • The image signal processed by the control unit 170 can be input to the display unit 180 and displayed as an image corresponding to the image signal.
  • In addition, the image signal processed by the control unit 170 can be input to an external output device through the external device interface unit 135.
  • The audio signal processed by the control unit 170 can be output to the audio output unit 185.
  • In addition, the audio signal processed by the control unit 170 can be input to an external output device through the external device interface unit 135.
  • control unit 170 can control the overall operation of the display device 100.
  • The control unit 170 can control the display device 100 by a user command input through the user input interface unit 150 or an internal program, and can download an application or an application list desired by the user into the display device 100 by accessing the network.
  • the control unit 170 enables the channel information selected by the user to be output through the display unit 180 or the audio output unit 185 together with the processed image or audio signal.
  • The control unit 170 can cause a video signal or an audio signal from an external device, for example a camera or a camcorder, input through the external device interface unit 135, to be output through the display unit 180 or the audio output unit 185, according to an external device image playback command received through the user input interface unit 150.
  • control unit 170 can control the display unit 180 to display an image.
  • For example, the control unit 170 can control a broadcast image input through the tuner 131, an external input image input through the external device interface unit 135, an image input through the network interface unit, or an image stored in the storage unit 140 to be displayed on the display unit 180.
  • In this case, the image displayed on the display unit 180 may be a still image or a moving image, and may be a 2D image or a 3D image.
  • The control unit 170 can control content stored in the display device 100, received broadcast content, or external input content input from the outside to be played back, and the content can take various forms, such as a broadcast video, an external input video, an audio file, an accessed web screen, and a document file.
  • The wireless communication unit 173 can communicate with external devices through wired or wireless communication.
  • The wireless communication unit 173 can perform short-range communication with external devices.
  • To this end, the wireless communication unit 173 can support short-range communication using at least one of Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, and NFC (Near Field Communication) technologies.
  • Such a wireless communication unit 173 can support wireless communication between the display device 100 and a wireless communication system, between the display device 100 and another display device 100, or between the display device 100 and a network in which the display device 100 (or an external server) is located, through short-range wireless communication networks (Wireless Area Networks).
  • The short-range wireless communication networks can be wireless personal area networks.
  • Here, the other display device 100 can be a wearable device capable of exchanging data with (or interworking with) the display device 100 according to this disclosure, for example, a smartwatch or smart glasses.
  • The wireless communication unit 173 can detect (or recognize) a wearable device capable of communication around the display device 100. Further, when the detected wearable device is a device authorized to communicate with the display device 100, the control unit 170 can transmit at least part of the data processed in the display device 100 to the wearable device through the wireless communication unit 173. Accordingly, a user of the wearable device can use the data processed in the display device 100 through the wearable device.
  • The display unit 180 can convert the video signal, data signal, and OSD signal processed by the control unit 170, or the video signal and data signal received from the external device interface unit 135, into R, G, and B signals, respectively, to generate a drive signal.
  • Meanwhile, the display device 100 shown in FIG. 1 is only one embodiment of the present disclosure.
  • Unlike what is shown in FIG. 1, the display device 100 may not include the tuner 131 and the demodulation unit 132, and may instead receive and play an image through the network interface unit 133 or the external device interface unit 135.
  • For example, the display device 100 can be implemented as divided into an image processing device, such as a set-top box for receiving broadcast signals or content according to various network services, and a content playback device that plays the content input from the image processing device.
  • In this case, the operating method of the display device according to an embodiment of the present disclosure described below can be performed not only by the display device 100 described with reference to FIG. 1, but also by any one of the image processing device, such as the separated set-top box, and the content playback device including the display unit 180 and the audio output unit 185.
  • FIG. 2 is a block diagram of a remote control device according to an embodiment of the present disclosure, and FIG. 3 shows an example of an actual configuration of the remote control device 200 according to an embodiment of the present disclosure.
  • Referring to FIG. 2, the remote control device 200 can include a fingerprint recognition unit 210, a wireless communication unit 220, a user input unit 230, a sensor unit 240, an output unit 250, a power supply unit 260, a storage unit 270, a control unit 280, and a voice acquisition unit 290.
  • the wireless communication unit 225 transmits and receives signals with any one of the display devices according to the embodiments of the present disclosure described above.
  • The remote control device 200 can be equipped with an RF module 221 capable of transmitting and receiving signals to and from the display device 100 according to the RF communication standard, and an IR module 223 capable of transmitting and receiving signals to and from the display device 100 according to the IR communication standard.
  • the remote control device 200 may be equipped with a Bluetooth module 225 capable of transmitting and receiving signals with the display device 100 according to the Bluetooth communication standard.
  • In addition, the remote control device 200 can be equipped with an NFC module 227 capable of transmitting and receiving signals to and from the display device 100 according to the NFC (Near Field Communication) standard, and a WLAN module 229 capable of transmitting and receiving signals to and from the display device 100 according to the WLAN (Wireless LAN) standard.
  • The remote control device 200 transmits a signal containing information about the movement of the remote control device 200 and the like to the display device 100 through the wireless communication unit 220.
  • In addition, the remote control device 200 can receive a signal transmitted by the display device 100 through the RF module 221.
  • The user input unit 230 can be composed of a keypad, buttons, a touch pad, a touch screen, or the like.
  • The user can operate the user input unit 230 to input commands related to the display device 100 into the remote control device 200.
  • When the user input unit 230 is equipped with hard key buttons, the user can input commands related to the display device 100 into the remote control device 200 through push operations of the hard key buttons. This will be described with reference to FIG. 3.
  • Referring to FIG. 3, the remote control device 200 can include a plurality of buttons.
  • The plurality of buttons can include a fingerprint recognition button 212, a power button 231, a home button 232, a live button 233, an external input button 234, a volume control button 235, a voice recognition button 236, a channel change button 237, a confirmation button 238, and a back button 239.
  • the fingerprint recognition button 212 may be a button for recognizing a user's fingerprint.
  • In an embodiment, the fingerprint recognition button 212 can allow a push operation, and thus can receive a push operation and a fingerprint recognition operation together.
  • the power button 231 may be a button for turning on/off the power of the display device 100.
  • The home button 232 can be a button for moving to the home screen of the display device 100.
  • the live button 233 may be a button for displaying a real-time broadcast program.
  • The external input button 234 can be a button for receiving an external input connected to the display device 100.
  • The volume control button 235 can be a button for adjusting the volume of the sound output by the display device 100.
  • The voice recognition button 236 can be a button for receiving the user's voice and recognizing the received voice.
  • the channel change button 237 may be a button for receiving a broadcast signal of a specific broadcasting channel.
  • The confirmation button 238 can be a button for selecting a specific function, and the back button 239 can be a button for returning to the previous screen.
  • The user input unit 230 can include various types of input means that the user can manipulate, such as a scroll key and a jog key, for inputting commands related to the display device 100; this embodiment does not limit the scope of the present disclosure.
  • The sensor unit 240 can include a gyro sensor 241 or an acceleration sensor 243, and the gyro sensor 241 can sense information about the movement of the remote control device 200.
  • For example, the gyro sensor 241 can sense information about the movement of the remote control device 200 based on the x, y, and z axes, and the acceleration sensor 243 can sense information about the movement speed of the remote control device 200 and the like.
  • The remote control device 200 can further include a distance measurement sensor capable of sensing the distance to the display unit 180 of the display device 100.
  • The output unit 250 can output a video or audio signal corresponding to an operation of the user input unit 235 or a signal transmitted from the display device 100.
  • For example, the output unit 250 can include an LED module 251 that lights up, a vibration module 253 that generates vibration, a sound output module 255 that outputs sound, or a display module 257 that outputs video when the user input unit 235 is operated or a signal is transmitted to and received from the display device 100 through the wireless communication unit 225.
  • The power supply unit 260 supplies power to the remote control device 200, and can reduce power consumption by stopping the power supply when the remote control device 200 does not move for a predetermined time.
  • The power supply can be resumed when a predetermined key provided in the remote control device 200 is operated.
  • The storage unit 270 can store various types of programs, application data, and the like necessary for the control or operation of the remote control device 200. When the remote control device 200 transmits and receives signals wirelessly to and from the display device 100, the signals are transmitted and received through a predetermined frequency band.
  • The control unit 280 of the remote control device 200 can store, in the storage unit 270, and refer to information about the display device 100 paired with the remote control device 200 and about the frequency band over which signals can be wirelessly transmitted and received.
  • the control unit 280 controls all matters related to the control of the remote control device 200.
  • The control unit 280 can transmit a signal corresponding to a predetermined key operation of the user input unit 235, or a signal corresponding to a movement of the remote control device 200 sensed by the sensor unit 240, to the display device 100 through the wireless communication unit 225.
  • The voice acquisition unit 290 of the remote control device 200 can acquire voice, and can include at least one microphone 291 for acquiring voice.
  • FIG. 4 shows an example in which a pointer 205 corresponding to the remote control device 200 is displayed on the display unit 180.
  • The user can move or rotate the remote control device 200 up and down, left and right.
  • The pointer 205 displayed on the display unit 180 of the display device 100 moves in correspondence with the movement of the remote control device 200.
  • Since the pointer 205 is moved and displayed in correspondence with the movement of the remote control device 200 in 3D space, the remote control device 200 can be called a spatial remote control.
  • The display device 100 can calculate the coordinates of the pointer 205 from information about the movement of the remote control device 200.
  • The display device 100 can display the pointer 205 so as to correspond to the calculated coordinates.
  • FIG. 4 illustrates a case in which the user moves the remote control device 200 away from the display unit 180 while pressing a specific button in the remote control device 200. Thereby, the selected area in the display unit 180 corresponding to the pointer 205 can be zoomed in and displayed in an enlarged manner.
  • On the other hand, when the user moves the remote control device 200 closer to the display unit 180, the selection area in the display unit 180 corresponding to the pointer 205 can be zoomed out and displayed in a reduced size.
  • In another embodiment, the selection area may be zoomed out when the remote control device 200 moves away from the display unit 180, and zoomed in when the remote control device 200 approaches the display unit 180.
  • In addition, when a specific button in the remote control device 200 is pressed, the recognition of vertical and horizontal movement can be excluded; that is, when the remote control device 200 moves away from or approaches the display unit 180, up, down, left, and right movements are not recognized, and only forward and backward movements can be recognized.
  • The pointer in this specification refers to an object displayed on the display unit 180 in response to the operation of the remote control device 200. Therefore, objects of various shapes other than the arrow shape shown in the drawings are possible as the pointer 205; for example, it can be a concept including a point, a cursor, a prompt, and a thick outline. The pointer 205 can be displayed not only corresponding to any one point among points on the horizontal axis and the vertical axis on the display unit 180, but also corresponding to a plurality of points, such as a line or a surface.
  • FIG. 5 is a flowchart illustrating a method of operating a display device according to an embodiment of the present disclosure.
  • FIG. 5 relates to a method of classifying items that can be selected or clicked by a user among items displayed on the screen.
  • The control unit 170 may be composed of one or more processors.
  • The control unit 170 of the display device 100 displays an execution screen of an application on the display unit 180 (S501).
  • the application may be an application representing a content provider.
  • The content provider can provide media content, such as movies, VOD, and music, through the application.
  • the application may be a web application that provides content through the web.
  • Referring to FIG. 6, the execution screen 600 of the web application can include a content list 610 provided by a content provider.
  • the content list 610 may include a plurality of content items 611 to 616.
  • Each of the plurality of content items 611 to 616 may be an item selectable or clickable by a user.
  • the plurality of content items 611 to 616 may include one or more of a thumbnail image representing the content and a title of the content.
  • The control unit 170 acquires a document corresponding to the application execution screen by using the Document Object Model (DOM) (S503).
  • the document object model can be an interface for accessing an XML document or an HTML document.
  • the document object model can define all elements in a document and provide a way to access each element.
  • The Document Object Model is a W3C standard object model, and a document obtained through the Document Object Model can be expressed in a hierarchical structure.
  • control unit 170 can obtain the document through the document object model in a state in which the voice recognition service can be provided through the display device 100.
  • control unit 170 may acquire a document corresponding to the execution screen of the application through the document object model.
  • For example, when the control unit 170 recognizes a wake-up word spoken by the user, it can acquire a document corresponding to the execution screen of the application through the Document Object Model.
  • In another embodiment, the control unit 170 may obtain the document from an application provider that provides the application.
  • FIG. 7 is a diagram illustrating an example of an HTML document obtained through the Document Object Model.
  • the HTML document 700 may be organized in a hierarchical structure.
  • The HTML document 700 can include an html element (or root element) that is the parent element.
  • The html element can include a head element and a body element as intermediate elements.
  • A sub-element of the head element can represent the title of the content item in text form.
  • A sub-element of the body element can represent a link or a paragraph of the content item in text form, as pictured in the hypothetical sketch below.
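  • As an illustrative aid only, the following TypeScript sketch builds a small, hypothetical HTML document of the kind FIG. 7 describes, parses it with the browser's DOMParser, and prints its hierarchy; the markup and names are assumptions, not taken from the patent.

```typescript
// Hypothetical markup mirroring the described hierarchy: an html root
// element whose head carries the content title and whose body carries
// a link and a paragraph in text form.
const sampleHtml = `
  <html>
    <head><title>Movie Title</title></head>
    <body>
      <a href="/play">Play</a>
      <p>A short description of the content item.</p>
    </body>
  </html>`;

const doc: Document = new DOMParser().parseFromString(sampleHtml, "text/html");

// Walk the tree and print each element with its depth, reflecting the
// hierarchical structure of the document obtained through the DOM.
function printTree(el: Element, depth = 0): void {
  console.log(`${"  ".repeat(depth)}<${el.tagName.toLowerCase()}>`);
  for (const child of Array.from(el.children)) {
    printTree(child, depth + 1);
  }
}

printTree(doc.documentElement); // html → head → title, body → a, p
```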
  • The control unit 170 acquires a plurality of selectable content items by using the document obtained through the Document Object Model (S505).
  • the control unit 170 may analyze an XML document or an HTML document obtained through the document object model and extract a plurality of selectable or clickable content items.
  • FIG. 8 is a flowchart illustrating a process of acquiring selectable content items from a document obtained through the Document Object Model according to an embodiment of the present disclosure; specifically, FIG. 8 is a flowchart illustrating step S505 of FIG. 5 in detail.
  • Referring to FIG. 8, the control unit 170 performs primary classification for each of the plurality of nodes included in the document.
  • Multiple nodes may include an image node, an input node, and other nodes.
  • An image node may be a node that contains an image tag.
  • When a node is an image node, the control unit 170 can pass it through the primary classification and acquire the image node as a candidate content item.
  • That is, the control unit 170 can determine that a node including an image tag is selectable, and can determine that the content item corresponding to it has passed the primary classification.
  • FIG. 9 shows a channel service screen 910 and a personalized TV service screen 930, each of which can be an execution screen of a web application.
  • The channel service screen 910 can include a plurality of channels.
  • A specific channel 911 can include text replacing an image: instead of an image identifying the channel, the specific channel 911 uses text, which is an example of applying text with the alt attribute.
  • The specific channel 911 can be classified as a selectable candidate content item.
  • The personalized TV service screen 930 can include a recording list, and the recording list can include a plurality of recording items.
  • A specific recording item 931 can include the text <NBA> instead of an image identifying the recording item; this is an example of applying text with the alt attribute.
  • The specific recording item 931 can be classified as a selectable candidate content item.
  • When an input node (or element) has a placeholder property value, the corresponding input node can pass the primary classification as a candidate content item.
  • FIG. 10 shows an example in which a placeholder property value exists on an input node, through a web browser screen.
  • Referring to FIG. 10, a web browser screen 1000 is shown, and the web browser screen 1000 can be an execution screen of a web application.
  • The web browser screen 1000 can include a search bar 1010 for searching for content.
  • The search bar 1010 contains the text <Search> as the value of its placeholder property.
  • The control unit 170 can pass the search bar 1010, which corresponds to an input node, through the primary classification as a candidate content item.
  • The other node can be a node in which the text value of a child text node exists.
  • The control unit 170 can inspect the child nodes of all nodes in the HTML document and, when a child text node with a text value exists, classify the content item corresponding to the child node and the parent node of the child node as a candidate content item, as in the sketch below.
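  • A minimal TypeScript sketch of this primary classification follows; the traversal and the Candidate shape are assumptions for illustration, not the patent's actual implementation.

```typescript
// Candidate content item: the element plus the text used later for matching.
interface Candidate {
  element: Element;
  label: string;
}

// Primary classification: image nodes, input nodes with a placeholder
// property value, and other nodes owning a non-empty child text node.
function primaryClassification(doc: Document): Candidate[] {
  const candidates: Candidate[] = [];
  for (const el of Array.from(doc.querySelectorAll("*"))) {
    if (el.tagName === "IMG") {
      // Image node: the alt attribute supplies the matchable text.
      candidates.push({ element: el, label: el.getAttribute("alt") ?? "" });
    } else if (el.tagName === "INPUT" && el.hasAttribute("placeholder")) {
      // Input node with a placeholder property value.
      candidates.push({ element: el, label: el.getAttribute("placeholder")! });
    } else {
      // Other node: keep it if a direct child text node has a text value.
      const text = Array.from(el.childNodes)
        .filter((n) => n.nodeType === Node.TEXT_NODE)
        .map((n) => n.textContent?.trim() ?? "")
        .join(" ")
        .trim();
      if (text.length > 0) candidates.push({ element: el, label: text });
    }
  }
  return candidates;
}
```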
  • The control unit 170 acquires the plurality of candidate content items that have passed the primary classification.
  • The control unit 170 judges whether each of the acquired candidate content items satisfies the secondary classification conditions (S805).
  • The secondary classification conditions can include a first classification condition and a second classification condition.
  • The first classification condition can be a condition that each node must exist within the window.
  • The window can indicate an area corresponding to the area of the application execution screen.
  • The control unit 170 can obtain the coordinates of the four vertices corresponding to each node by using the Document Object Model. Referring to FIG. 7, coordinate information 750 for one element selected in the HTML document 700 is shown.
  • The control unit 170 can acquire the four vertex coordinates corresponding to a node using the coordinate information, and can check whether at least one of the four vertex coordinates exists within the window.
  • When at least one of the four vertex coordinates exists within the window, the node satisfies the first classification condition, and the candidate content item corresponding to the node remains a candidate for selection as a final content item.
  • The second classification condition can be a condition that the node is either the topmost node at its position, or is covered only by a transparent node that has no text on top of it.
  • Using the position of the node, the control unit 170 can find and check the topmost node through the elementFromPoint method of the document interface.
  • The elementFromPoint method can be an example of an API (Application Programming Interface) of the Document Object Model.
  • When the node satisfies the second classification condition, the control unit 170 can select the candidate content item corresponding to the node as a final content item.
  • A node with only a border effect is a node that can be identified only by a highlight box.
  • Using the position of the node, the control unit 170 can check whether the node on top of it is such a transparent, text-free node; in that case as well, the control unit 170 can select the content item corresponding to the node as a final content item. Both conditions are illustrated in the sketch below.
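  • The following TypeScript sketch approximates both secondary classification conditions; treating the window as the browser viewport and detecting transparency via computed styles are assumptions made for illustration.

```typescript
// First condition: at least one of the node's four vertices lies in the window.
function inWindow(el: Element): boolean {
  const r = el.getBoundingClientRect();
  const vertices: Array<[number, number]> = [
    [r.left, r.top], [r.right, r.top],
    [r.left, r.bottom], [r.right, r.bottom],
  ];
  return vertices.some(([x, y]) =>
    x >= 0 && x <= window.innerWidth && y >= 0 && y <= window.innerHeight
  );
}

// Second condition: the node is topmost at its center point, or only a
// text-free transparent node (e.g., a highlight box) covers it.
function isTopOrUnderTransparent(el: Element): boolean {
  const r = el.getBoundingClientRect();
  const top = document.elementFromPoint(r.left + r.width / 2, r.top + r.height / 2);
  if (top === null || top === el || el.contains(top)) return true;
  const style = window.getComputedStyle(top);
  const transparent =
    style.backgroundColor === "rgba(0, 0, 0, 0)" || Number(style.opacity) < 1;
  return transparent && (top.textContent ?? "").trim().length === 0;
}

const satisfiesSecondaryClassification = (el: Element): boolean =>
  inWindow(el) && isTopOrUnderTransparent(el);
```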
  • The control unit 170 can select a candidate content item that satisfies the secondary classification conditions as a final content item.
  • When a candidate content item does not satisfy the secondary classification conditions, the control unit 170 excludes the candidate content item from the final content items (S809).
  • That is, the control unit 170 can determine a candidate content item that does not satisfy the secondary classification conditions to be a content item that cannot be selected.
  • The control unit 170 receives a voice command uttered by the user (S507), and converts the received voice command into text (S509).
  • The voice command may not be a wake-up word, but may be a command for selecting a content item.
  • The received voice command can be converted into text.
  • Step S509 of converting the voice command into text may also be performed prior to step S501.
  • The control unit 170 determines whether a content item matching the converted text exists among the plurality of content items (S511).
  • That is, the control unit 170 can determine whether a content item matching the text exists among the plurality of content items acquired through the secondary classification process.
  • When a matching content item exists, the control unit 170 selects the matching content item (S513).
  • The control unit 170 compares the text data value included in the node corresponding to each of the plurality of content items with the converted text, and obtains the content item corresponding to the matched text data value.
  • As the matching content item is selected, the control unit 170 can play the corresponding content item.
  • Accordingly, the user can select desired content by simply uttering a voice, without a direct click or selection operation.
  • When no content item among the plurality of content items matches the converted text, the control unit 170 can provide a notification indicating that a content item corresponding to the voice command uttered by the user does not exist, through the display unit 180 or the audio output unit 185. The matching step is illustrated in the sketch below.
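  • The following TypeScript sketch illustrates the matching of steps S507 to S513 under the assumption of an exact, case-insensitive text comparison; the speech-to-text conversion itself is outside the sketch, and Candidate is the illustrative type from the earlier sketch.

```typescript
type Candidate = { element: Element; label: string };

// Match the converted utterance text against the final content items and
// select (click) the matching one; otherwise surface a notification.
function handleVoiceCommand(utteranceText: string, items: Candidate[]): void {
  const spoken = utteranceText.trim().toLowerCase();
  const match = items.find((c) => c.label.trim().toLowerCase() === spoken);
  if (match === undefined) {
    // No matching content item: notify through the display or audio output.
    console.log(`No content item matches "${utteranceText}".`);
    return;
  }
  // Selecting the matched node produces the same result as a user click.
  (match.element as HTMLElement).click();
}
```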
  • FIG. 11 is a diagram for explaining the configuration of an application manager according to an embodiment of the present disclosure.
  • the application manager 1100 of FIG. 11 may be included in the control unit 170.
  • The application manager 1100 can include a candidate classifier 1110 and a matching unit 1130.
  • the candidate classifier 1110 may extract a plurality of candidate content items from an HTML document or an XML document obtained through the document object model.
  • The candidate classifier 1110 can classify the plurality of candidate content items into selectable or clickable content items through the primary classification and the secondary classification.
  • the matching unit 1130 may determine whether there is a content item that matches the text of the voice command uttered by the user among a plurality of selectable or clickable content items.
  • When a matching content item exists, the matching unit 1130 can select or click the corresponding content item, as in the structural sketch below.
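  • The division of roles between the candidate classifier 1110 and the matching unit 1130 can be pictured with the following structural TypeScript sketch; the interface and method names are illustrative, since the patent only names the two components.

```typescript
type Candidate = { element: Element; label: string };

interface CandidateClassifier {
  // Primary and secondary classification over the document obtained via the DOM.
  classify(doc: Document): Candidate[];
}

interface Matcher {
  // Returns the content item matching the converted utterance text, if any.
  match(utteranceText: string, items: Candidate[]): Candidate | null;
}

class ApplicationManager {
  constructor(
    private classifier: CandidateClassifier,
    private matcher: Matcher,
  ) {}

  onVoiceCommand(utteranceText: string): void {
    const items = this.classifier.classify(document);
    const hit = this.matcher.match(utteranceText, items);
    if (hit !== null) (hit.element as HTMLElement).click();
  }
}
```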
  • an execution screen 600 of a web application is shown.
  • the display apparatus 100 may convert a voice command uttered by the user into text, and determine whether a content item matching the converted text exists among a plurality of selectable or clickable content items.
  • When a content item 611 matching the converted text exists, the display device 100 can output the same result as if the content item 611 were selected or clicked.
  • For example, as shown in FIG. 13, the display device 100 can play a content image 1300 corresponding to the content item 611.
  • As another example, the display device 100 may display detailed information of the content item 611.
  • FIG. 14 is a flowchart for explaining a method of handling the case where a plurality of texts identical to the text of the voice uttered by the user exist.
  • In particular, FIG. 14 explains an example of processing for the case where the plurality of content items finally determined to be selectable or clickable in step S511 of FIG. 5 include a content item that is not actually selectable or clickable.
  • The control unit 170 starts monitoring changes in the Document Object Model (hereinafter, the DOM) (S1401).
  • In an embodiment, the control unit 170 can start monitoring changes by using an observer generated through the DOM (for example, a DOM mutation observer), and can thereby monitor changes occurring in the DOM.
  • control unit 170 may monitor changes in the DOM when receiving a voice command uttered by the user.
  • the control unit 170 determines whether a plurality of texts corresponding to the voice command uttered by the user exist in the execution screen of the application (S1403).
  • After starting to monitor DOM changes, the control unit 170 can determine whether, among the plurality of content items determined to be selectable or clickable, there are a plurality of content items matching the text corresponding to the voice command uttered by the user.
  • When a plurality of texts corresponding to the voice command uttered by the user are included in the execution screen, the control unit 170 selects the plurality of duplicate texts in order according to priority (S1405).
  • Priority can be determined according to the position of each of a plurality of duplicate texts within the application execution screen.
  • The control unit 170 can acquire the coordinates of each of the plurality of duplicate texts through the DOM.
  • Based on the acquired coordinates, the control unit 170 can sequentially select the duplicate texts from top to bottom and from left to right.
  • This priority reflects the position that, from the user's point of view, has the highest probability of being chosen first.
  • The control unit 170 determines whether a change in the DOM is detected according to the selection of the duplicate text.
  • The control unit 170 can monitor whether a change in the DOM occurs for a predetermined time after any one of the plurality of duplicate texts is selected.
  • The predetermined time may be 50 ms, but this value is merely an example.
  • If a change in the DOM is not detected for the predetermined time, the control unit 170 can determine that selection or clicking of the node corresponding to the duplicate text is impossible, return to step S1405, and select the duplicate text of the next priority.
  • If a change in the DOM is detected, the control unit 170 determines that the duplicate text is selectable, and ends the monitoring of the DOM change (S1409).
  • That is, the control unit 170 can determine that the selection or click of the node corresponding to the duplicate text has been performed normally. This flow is illustrated in the sketch below.
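  • The following TypeScript sketch illustrates this duplicate-text flow, using the browser's MutationObserver as a stand-in for the patent's DOM-change monitor, the coordinate-based priority described above, and 50 ms as the example waiting time.

```typescript
// Click duplicates in priority order (top-to-bottom, then left-to-right)
// until one of them produces a DOM change within the waiting time.
async function selectAmongDuplicates(
  duplicates: HTMLElement[],
  waitMs = 50, // the predetermined time; 50 ms is only an example
): Promise<HTMLElement | null> {
  const ordered = [...duplicates].sort((a, b) => {
    const ra = a.getBoundingClientRect();
    const rb = b.getBoundingClientRect();
    return ra.top - rb.top || ra.left - rb.left;
  });

  for (const el of ordered) {
    let changed = false;
    const observer = new MutationObserver(() => { changed = true; });
    observer.observe(document.documentElement, {
      childList: true, subtree: true, attributes: true, characterData: true,
    });

    el.click(); // try the duplicate text of the current priority
    await new Promise((resolve) => setTimeout(resolve, waitMs));
    observer.disconnect(); // end DOM-change monitoring for this attempt

    if (changed) return el; // the click took effect: stop here
  }
  return null; // no duplicate produced a DOM change
}
```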
  • FIG. 15 is a diagram illustrating a user scenario for processing when duplicate texts are included in the execution screen of a web application.
  • an execution screen 1500 of a web application is shown.
  • The web application execution screen 1500 contains duplicate texts including the text <Search>.
  • The display device 100 can extract the duplicate texts 1510 and 1530 matching <Search> through the HTML document corresponding to the execution screen of the web application.
  • the display device 100 may acquire the coordinates of each of the first duplicated text 1510 and the second duplicated text 1530 through the HTML document.
  • The display device 100 can compare the coordinates of the first duplicate text 1510 and the second duplicate text 1530, and select the first duplicate text 1510 first.
  • the display apparatus 100 may determine whether a change in the DOM occurs within a predetermined time.
  • The display device 100 can finally judge that the first duplicate text 1510 is not selectable or clickable if no change in the DOM occurs within the predetermined time.
  • the display apparatus 100 may select the second duplicate text 1530 of the next priority.
  • The display device 100 can output the result of selecting or clicking the second duplicate text 1530 when a change in the DOM occurs within the predetermined time after the selection of the second duplicate text 1530.
  • the display device 100 may perform a search for the command word input in the search window.
  • FIG. 16 is a diagram for explaining another user scenario for processing when duplicate texts are included in the execution screen of the web application.
  • The execution screen 1600 of the web application can include two duplicate texts 1610 and 1630 containing the same text.
  • The display device 100 can extract the duplicate texts 1610 and 1630 matching that text through the HTML document corresponding to the execution screen of the web application.
  • the display apparatus 100 may acquire the coordinates of each of the first duplicate text 1610 and the second duplicate text 1630 through the HTML document.
  • The display device 100 can compare the coordinates of the first duplicate text 1610 and the second duplicate text 1630, and select the first duplicate text 1610 first.
  • the display apparatus 100 may determine whether a change in the DOM occurs within a predetermined time.
  • If no change in the DOM occurs within the predetermined time, the display device 100 can select the second duplicate text 1630 of the next priority.
  • The display device 100 can provide the same result as selecting or clicking the second duplicate text 1630 when a change in the DOM occurs within the predetermined time after the selection of the second duplicate text 1630.
  • the display device 100 can log in to the corresponding web application.
  • In this way, even when a title, such as a menu or category name, and a corresponding sub-item contain the same text, only the actually clickable one is selected.
  • FIG. 17 shows the configuration of an application manager according to another embodiment of the present disclosure.
  • The application manager 1700 can include a candidate classifier 1710, a matching unit 1730, and a duplicate text processor 1750.
  • The application manager 1700 may be included in the control unit 170.
  • The candidate classifier 1710 can extract a plurality of candidate content items from an HTML document or an XML document obtained through the Document Object Model.
  • The candidate classifier 1710 can classify the plurality of candidate content items into selectable or clickable content items through the primary classification and the secondary classification.
  • The matching unit 1730 can determine whether there is a content item that matches the text of the voice command uttered by the user among the plurality of content items determined to be selectable or clickable.
  • When a matching content item exists, the matching unit 1730 can select or click the corresponding content item.
  • The duplicate text processor 1750 can extract a plurality of duplicate texts from the content items determined to be selectable or clickable.
  • The duplicate text processor 1750 can determine whether the text corresponding to the voice command uttered by the user corresponds to a duplicate text.
  • The duplicate text processor 1750 can determine whether a plurality of texts corresponding to the voice command uttered by the user exist in the execution screen of the application.
  • When a plurality of texts corresponding to the voice command exist, the duplicate text processor 1750 can select the plurality of duplicate texts in order according to priority.
  • The duplicate text processor 1750 can monitor whether the DOM changes according to the selection of a duplicate text.
  • The duplicate text processor 1750 can select the duplicate text of the next priority when no change in the DOM is detected.
  • When a change in the DOM is detected, the duplicate text processor 1750 determines that the duplicate text is selectable, and can end the monitoring of the DOM change.
  • The duplicate text processor 1750 can transmit a signal indicating that a change in the DOM has been detected to the matching unit 1730.
  • The matching unit 1730 can select the duplicate text that caused the DOM change.
  • According to an embodiment of the present disclosure, the above-described method can be implemented as processor-readable code on a medium on which a program is recorded.
  • Examples of media that the processor can read include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and also include implementations in the form of carrier waves (e.g., transmission over the Internet).
  • The display device described above is not limited to the configurations and methods of the above-described embodiments; all or part of each embodiment may be selectively combined so that various modifications can be made.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to a display device capable of classifying clickable contents within a web application screen. The display device can perform primary classification on all nodes existing in the Document Object Model (DOM), and can perform secondary classification as to whether each node exists within the screen.

Description

Specification
Title of Invention: Display device and operation method thereof
Technical Field
[1] The present disclosure relates to a display device and, more particularly, to a display device for classifying web voice matching candidates.
Background Art
[2] Digital TV services using wired or wireless communication networks are becoming common. Digital TV services can provide a variety of services that could not be provided by conventional analog broadcasting services.
[3] For example, IPTV (Internet Protocol Television) and smart TV services, which are types of digital TV services, provide bidirectionality that allows the user to actively select the type of program to watch, the viewing time, and the like. Based on this bidirectionality, IPTV and smart TV services can also provide various additional services, such as Internet search, home shopping, and online games.
[4] In addition, recent TVs recognize a user's voice and, in response, provide various voice recognition services such as searching for content.
[5] Conventionally, however, when various contents exist on a web screen, there has been the inconvenience that the user must directly select (or click) the content he or she wants to view through a remote control device.
Detailed Description of the Invention
Technical Problem
[6] The present disclosure relates to a display device capable of classifying clickable contents within a web application screen.
[7] The present disclosure also relates to a display device capable of extracting the text corresponding to clickable content when duplicate texts are included in a web application screen.
Solution to Problem
[8] A display device according to an embodiment of the present disclosure can perform primary classification on all nodes existing in the Document Object Model (DOM), and can perform secondary classification as to whether each node exists within the screen.
[9] A display device according to another embodiment of the present disclosure can start monitoring DOM changes, click duplicate texts in order of priority, check for DOM changes for a specific period of time, click the duplicate text of the next priority if there is no change, and end the DOM change monitoring when a DOM change occurs.
Advantageous Effects of Invention
[10] According to an embodiment of the present disclosure, content can be clicked within a web application screen by the user's voice utterance alone, so the user's direct selection of content is unnecessary. Accordingly, the user's convenience in selecting content can be greatly improved.
[11] According to an embodiment of the present disclosure, even if duplicate texts exist within a web application screen, only the clickable text is selected by the user's voice utterance alone, so the user's click intention can be accurately reflected.
Brief Description of Drawings
[12] FIG. 1 is a block diagram showing the configuration of a display device according to an embodiment of the present disclosure.
[13] FIG. 2 is a block diagram of a remote control device according to an embodiment of the present disclosure.
[14] FIG. 3 shows an example of an actual configuration of a remote control device according to an embodiment of the present disclosure.
[15] FIG. 4 shows an example of utilizing a remote control device according to an embodiment of the present disclosure.
[16] FIG. 5 is a flowchart illustrating an operating method of a display device according to an embodiment of the present disclosure.
[17] FIG. 6 is an example of an execution screen of a web application according to an embodiment of the present disclosure.
[18] FIG. 7 is a diagram illustrating an example of an HTML document obtained through the Document Object Model.
[19] FIG. 8 is a flowchart illustrating step S505 of FIG. 5 in detail.
[20] FIG. 9 shows service screens in which alt-attribute text is applied to images.
[21] FIG. 10 is a diagram illustrating an example in which a placeholder property value exists on an input node, through a web browser screen.
[22] FIG. 11 is a diagram illustrating the configuration of an application manager according to an embodiment of the present disclosure.
[23] FIGS. 12 and 13 are diagrams illustrating a user scenario according to an embodiment of the present disclosure.
[24] FIG. 14 is a flowchart illustrating a method of handling the case where a plurality of texts identical to the text of the voice uttered by the user exist.
[25] FIG. 15 is a diagram illustrating a user scenario for processing the case where duplicate texts are included in the execution screen of a web application.
[26] FIG. 16 is a diagram illustrating another user scenario for processing the case where duplicate texts are included in the execution screen of a web application.
[27] FIG. 17 is a diagram illustrating the configuration of an application manager according to another embodiment of the present disclosure.
Best Mode for Carrying Out the Invention
[28] Hereinafter, embodiments related to the present disclosure will be described in more detail with reference to the drawings. The suffixes "module" and "unit" for components used in the following description are given or used interchangeably only in consideration of ease of specification writing, and do not in themselves have distinct meanings or roles.
[29] A display device according to an embodiment of the present disclosure is, for example, an intelligent display device in which a computer support function is added to a broadcast reception function. While being faithful to the broadcast reception function, an Internet function and the like are added, so that the display device can have an interface that is more convenient to use than a handwriting-type input device, a touch screen, or a spatial remote control. In addition, with the support of a wired or wireless Internet function, the display device can be connected to the Internet and a computer to perform functions such as e-mail, web browsing, banking, or games. A standardized general-purpose OS may be used for these various functions.
[30] Accordingly, in the display device described in the present disclosure, various applications can be freely added or deleted, for example, on a general-purpose OS kernel, so that various user-friendly functions can be performed. More specifically, the display device may be, for example, a network TV, an HBBTV, a smart TV, an LED TV, or an OLED TV, and in some cases may also be applied to a smartphone.
[31] FIG. 1 is a block diagram illustrating the configuration of a display device according to an embodiment of the present disclosure.
[32] Referring to FIG. 1, the display device 100 may include a broadcast receiver 130, an external device interface 135, a storage 140, a user input interface 150, a controller 170, a wireless communication unit 173, a display 180, an audio output unit 185, and a power supply 190.
[33] The broadcast receiver 130 may include a tuner 131, a demodulator 132, and a network interface 133.
[34] The tuner 131 may tune to a specific broadcast channel according to a channel tuning command. The tuner 131 may receive a broadcast signal for the tuned specific broadcast channel.
[35] The demodulator 132 may separate the received broadcast signal into a video signal, an audio signal, and a data signal related to a broadcast program, and may restore the separated video, audio, and data signals into an outputtable form.
[36] The external device interface 135 may receive an application or an application list in an adjacent external device and transfer it to the controller 170 or the storage 140.
[37] The external device interface 135 may provide a connection path between the display device 100 and an external device. The external device interface 135 may receive one or more of video and audio output from an external device connected wirelessly or by wire to the display device 100 and transfer it to the controller 170. The external device interface 135 may include a plurality of external input terminals. The plurality of external input terminals may include an RGB terminal, one or more HDMI (High Definition Multimedia Interface) terminals, and a component terminal.
[38] A video signal of an external device input through the external device interface 135 may be output through the display 180. An audio signal of an external device input through the external device interface 135 may be output through the audio output unit 185.
[39] An external device connectable to the external device interface 135 may be any one of a set-top box, a Blu-ray player, a DVD player, a game console, a sound bar, a smartphone, a USB memory, and a home theater, but this is merely an example.
[40] The network interface 133 may provide an interface for connecting the display device 100 to a wired/wireless network including the Internet. The network interface 133 may transmit or receive data to or from another user or another electronic device through the connected network or another network linked to the connected network.
[41] In addition, some of the content data stored in the display device 100 may be transmitted to a selected user or a selected electronic device among other users or other electronic devices pre-registered in the display device 100.
[42] The network interface 133 may access a predetermined web page through the connected network or another network linked to the connected network. That is, it may access a predetermined web page through a network and transmit or receive data to or from the corresponding server.
[43] The network interface 133 may receive content or data provided by a content provider or a network operator. That is, the network interface 133 may receive content such as movies, advertisements, games, VOD, and broadcast signals provided from a content provider or a network provider through a network, and information related thereto.
[44] In addition, the network interface 133 may receive firmware update information and update files provided by a network operator, and may transmit data to the Internet or to a content provider or a network operator.
[45] The network interface 133 may select and receive a desired application among applications open to the public, through a network.
[46] The storage 140 may store programs for each signal processing and control in the controller 170, and may store signal-processed video, audio, or data signals.
[47] In addition, the storage 140 may perform a function for temporary storage of video, audio, or data signals input from the external device interface 135 or the network interface 133, and may store information about a predetermined image through a channel memory function.
[48] The storage 140 may store an application or an application list input from the external device interface 135 or the network interface 133.
[49] The display device 100 may play back content files (video files, still image files, music files, document files, application files, etc.) stored in the storage 140 and provide them to the user.
[50] The user input interface 150 may transfer a signal input by the user to the controller 170, or transfer a signal from the controller 170 to the user. For example, the user input interface 150 may receive and process control signals such as power on/off, channel selection, and screen setting from the remote control device 200, or may process a control signal from the controller 170 to be transmitted to the remote control device 200, according to various communication methods such as Bluetooth, Ultra Wideband (UWB), ZigBee, RF (Radio Frequency) communication, or infrared (IR) communication.
[51] In addition, the user input interface 150 may transfer, to the controller 170, control signals input from local keys (not shown) such as a power key, a channel key, a volume key, and a setting key.
[52] A video signal image-processed by the controller 170 may be input to the display 180 and displayed as an image corresponding to the video signal. In addition, a video signal image-processed by the controller 170 may be input to an external output device through the external device interface 135.
[53] An audio signal processed by the controller 170 may be output as audio through the audio output unit 185. In addition, an audio signal processed by the controller 170 may be input to an external output device through the external device interface 135.
[54] In addition, the controller 170 may control overall operations within the display device 100.
[55] The controller 170 may control the display device 100 by a user command input through the user input interface 150 or by an internal program, and may connect to a network to allow the user to download a desired application or application list into the display device 100.
[56] The controller 170 may allow channel information or the like selected by the user to be output through the display 180 or the audio output unit 185 together with the processed video or audio signal.
[57] In addition, the controller 170 may allow a video signal or an audio signal from an external device, for example, a camera or a camcorder, input through the external device interface 135, to be output through the display 180 or the audio output unit 185 according to an external device video playback command received through the user input interface 150.
[58] Meanwhile, the controller 170 may control the display 180 to display an image; for example, it may control a broadcast image input through the tuner 131, an external input image input through the external device interface 135, an image input through the network interface, or an image stored in the storage 140 to be displayed on the display 180. In this case, the image displayed on the display 180 may be a still image or a moving image, and may be a 2D image or a 3D image.
[59] In addition, the controller 170 may control content stored in the display device 100, received broadcast content, or external input content input from the outside to be played back, and the content may be in various forms such as a broadcast image, an external input image, an audio file, a still image, a connected web screen, and a document file.
[60] The wireless communication unit 173 may communicate with an external device through wired or wireless communication. The wireless communication unit 173 may perform short range communication with an external device. To this end, the wireless communication unit 173 may support short range communication using at least one of Bluetooth, RFID (Radio Frequency Identification), Infrared Data Association (IrDA), UWB (Ultra Wideband), ZigBee, NFC (Near Field Communication), Wi-Fi (Wireless-Fidelity), Wi-Fi Direct, and Wireless USB (Wireless Universal Serial Bus) technologies. The wireless communication unit 173 may support wireless communication between the display device 100 and a wireless communication system, between the display device 100 and another display device 100, or between the display device 100 and a network in which the display device 100 (or an external server) is located, through wireless area networks. The wireless area networks may be wireless personal area networks.
[61] Here, the other display device 100 may be a wearable device (e.g., a smartwatch, smart glasses, or an HMD (head mounted display)) or a mobile terminal such as a smartphone, capable of exchanging data with (or interworking with) the display device 100 according to the present disclosure. The wireless communication unit 173 may detect (or recognize) a communicable wearable device around the display device 100. Furthermore, when the detected wearable device is a device authenticated to communicate with the display device 100 according to the present disclosure, the controller 170 may transmit at least part of the data processed in the display device 100 to the wearable device through the wireless communication unit 173. Accordingly, a user of the wearable device may use the data processed in the display device 100 through the wearable device.
[62] The display 180 may convert the video signal, data signal, or OSD signal processed by the controller 170, or the video signal or data signal received from the external device interface 135, into R, G, and B signals, respectively, to generate driving signals.
[63] Meanwhile, the display device 100 shown in FIG. 1 is only an embodiment of the present disclosure. Some of the illustrated components may be integrated, added, or omitted according to the specifications of the display device 100 actually implemented.
[64] That is, two or more components may be combined into one component, or one component may be subdivided into two or more components, as needed. In addition, the functions performed in each block are for explaining the embodiments of the present disclosure, and the specific operations or devices do not limit the scope of the present disclosure.
[65] According to another embodiment of the present disclosure, unlike what is shown in FIG. 1, the display device 100 may receive and play back images through the network interface 133 or the external device interface 135 without including the tuner 131 and the demodulator 132.
[66] For example, the display device 100 may be implemented separately as an image processing device, such as a set-top box, for receiving broadcast signals or content according to various network services, and a content playback device that plays back the content input from the image processing device.
[67] In this case, the method of operating a display device according to an embodiment of the present disclosure described below may be performed by any one of the display device 100 as described with reference to FIG. 1, the image processing device such as the separated set-top box, or the content playback device including the display 180 and the audio output unit 185.
[68] Next, a remote control device according to an embodiment of the present disclosure will be described with reference to FIGS. 2 and 3.
[69] FIG. 2 is a block diagram of a remote control device according to an embodiment of the present disclosure, and FIG. 3 shows an example of an actual configuration of the remote control device 200 according to an embodiment of the present disclosure.
[70] First, referring to FIG. 2, the remote control device 200 may include a fingerprint recognition unit 210, a wireless communication unit 220, a user input unit 230, a sensor unit 240, an output unit 250, a power supply 260, a storage 270, a controller 280, and a voice acquisition unit 290.
[71] Referring to FIG. 2, the wireless communication unit 225 transmits and receives signals to and from any one of the display devices according to the embodiments of the present disclosure described above.
[72] The remote control device 200 may include an RF module 221 capable of transmitting and receiving signals to and from the display device 100 according to the RF communication standard, and an IR module 223 capable of transmitting and receiving signals to and from the display device 100 according to the IR communication standard. In addition, the remote control device 200 may include a Bluetooth module 225 capable of transmitting and receiving signals to and from the display device 100 according to the Bluetooth communication standard. In addition, the remote control device 200 may include an NFC module 227 capable of transmitting and receiving signals to and from the display device 100 according to the NFC (Near Field Communication) communication standard, and a WLAN module 229 capable of transmitting and receiving signals to and from the display device 100 according to the WLAN (Wireless LAN) communication standard.
[73] In addition, the remote control device 200 transmits a signal containing information about the movement of the remote control device 200 to the display device 100 through the wireless communication unit 220.
[74] Meanwhile, the remote control device 200 may receive a signal transmitted by the display device 100 through the RF module 221, and may transmit commands about power on/off, channel change, volume change, and the like to the display device 100 through the IR module 223 as needed.
[75] The user input unit 230 may be composed of a keypad, buttons, a touch pad, or a touch screen. The user may operate the user input unit 230 to input commands related to the display device 100 to the remote control device 200. When the user input unit 230 includes a hard key button, the user may input a command related to the display device 100 to the remote control device 200 through a push operation of the hard key button. This will be described with reference to FIG. 3.
[76] Referring to FIG. 3, the remote control device 200 may include a plurality of buttons. The plurality of buttons may include a fingerprint recognition button 212, a power button 231, a home button 232, a live button 233, an external input button 234, a volume control button 235, a voice recognition button 236, a channel change button 237, an OK button 238, and a back button 239.
[77] The fingerprint recognition button 212 may be a button for recognizing a user's fingerprint. In one embodiment, the fingerprint recognition button 212 is capable of a push operation, and thus may receive a push operation and a fingerprint recognition operation. The power button 231 may be a button for turning the power of the display device 100 on and off. The home button 232 may be a button for moving to the home screen of the display device 100. The live button 233 may be a button for displaying a real-time broadcast program. The external input button 234 may be a button for receiving an external input connected to the display device 100. The volume control button 235 may be a button for adjusting the volume output by the display device 100. The voice recognition button 236 may be a button for receiving a user's voice and recognizing the received voice. The channel change button 237 may be a button for receiving a broadcast signal of a specific broadcast channel. The OK button 238 may be a button for selecting a specific function, and the back button 239 may be a button for returning to the previous screen.
[78] FIG. 2 will be described again.
[79] When the user input unit 230 includes a touch screen, the user may input a command related to the display device 100 to the remote control device 200 by touching a soft key of the touch screen. In addition, the user input unit 230 may include various kinds of input means that the user can operate, such as a scroll key or a jog key, and this embodiment does not limit the scope of the present disclosure.
[80] The sensor unit 240 may include a gyro sensor 241 or an acceleration sensor 243, and the gyro sensor 241 may sense information about the movement of the remote control device 200.
[81] For example, the gyro sensor 241 may sense information about the operation of the remote control device 200 based on the x, y, and z axes, and the acceleration sensor 243 may sense information about the moving speed and the like of the remote control device 200. Meanwhile, the remote control device 200 may further include a distance measuring sensor, so that it may sense the distance from the display 180 of the display device 100.
[82] The output unit 250 may output a video or audio signal corresponding to an operation of the user input unit 235 or corresponding to a signal transmitted from the display device 100. Through the output unit 250, the user may recognize whether the user input unit 235 has been operated or whether the display device 100 has been controlled.
[83] For example, the output unit 250 may include an LED module 251 that lights up, a vibration module 253 that generates vibration, a sound output module 255 that outputs sound, or a display module 257 that outputs an image when the user input unit 235 is operated or a signal is transmitted to or received from the display device 100 through the wireless communication unit 225.
[84] In addition, the power supply 260 supplies power to the remote control device 200, and may reduce power waste by stopping the power supply when the remote control device 200 has not moved for a predetermined time. The power supply 260 may resume the power supply when a predetermined key provided in the remote control device 200 is operated.
[85] The storage 270 may store various kinds of programs, application data, and the like necessary for the control or operation of the remote control device 200. If the remote control device 200 transmits and receives signals wirelessly to and from the display device 100 through the RF module 221, the remote control device 200 and the display device 100 transmit and receive signals through a predetermined frequency band.
[86] The controller 280 of the remote control device 200 may store, in the storage 270, and refer to information about the frequency band and the like through which signals can be wirelessly transmitted and received with the display device 100 paired with the remote control device 200.
[87] The controller 280 controls all matters related to the control of the remote control device 200. The controller 280 may transmit a signal corresponding to a predetermined key operation of the user input unit 235, or a signal corresponding to the movement of the remote control device 200 sensed by the sensor unit 240, to the display device 100 through the wireless communication unit 225.
[88] In addition, the voice acquisition unit 290 of the remote control device 200 may acquire voice.
[89] The voice acquisition unit 290 may include at least one microphone 291 and may acquire voice through the microphone 291.
[90] Next, FIG. 4 will be described.
[91] FIG. 4 shows an example of utilizing a remote control device according to an embodiment of the present disclosure.
[92] FIG. 4(a) illustrates that a pointer 205 corresponding to the remote control device 200 is displayed on the display 180.
[93] The user may move or rotate the remote control device 200 up and down, and left and right. The pointer 205 displayed on the display 180 of the display device 100 corresponds to the movement of the remote control device 200. Since the corresponding pointer 205 is moved and displayed according to movement in 3D space, as shown in the drawing, such a remote control device 200 may be called a spatial remote control.
[94] FIG. 4(b) illustrates that when the user moves the remote control device 200 to the left, the pointer 205 displayed on the display 180 of the display device 100 also moves to the left correspondingly.
[95] Information about the movement of the remote control device 200 sensed through the sensor of the remote control device 200 is transmitted to the display device 100. The display device 100 may calculate the coordinates of the pointer 205 from the information about the movement of the remote control device 200. The display device 100 may display the pointer 205 to correspond to the calculated coordinates.
[96] FIG. 4(c) illustrates a case where the user moves the remote control device 200 away from the display 180 while pressing a specific button in the remote control device 200. As a result, the selection area in the display 180 corresponding to the pointer 205 may be zoomed in and displayed enlarged.
[97] Conversely, when the user moves the remote control device 200 closer to the display 180, the selection area in the display 180 corresponding to the pointer 205 may be zoomed out and displayed reduced.
[98] Meanwhile, when the remote control device 200 moves away from the display 180, the selection area may be zoomed out, and when the remote control device 200 approaches the display 180, the selection area may be zoomed in.
[99] In addition, recognition of up-down and left-right movement may be excluded while a specific button in the remote control device 200 is pressed. That is, when the remote control device 200 moves away from or approaches the display 180, up, down, left, and right movements are not recognized, and only forward-backward movement may be recognized. When the specific button in the remote control device 200 is not pressed, only the pointer 205 moves according to the up, down, left, and right movements of the remote control device 200.
[100] Meanwhile, the moving speed or moving direction of the pointer 205 may correspond to the moving speed or moving direction of the remote control device 200.
[101] Meanwhile, the pointer in this specification refers to an object displayed on the display 180 in response to the operation of the remote control device 200. Therefore, objects of various shapes other than the arrow shape shown in the drawings are possible as the pointer 205. For example, it may be a concept including a dot, a cursor, a prompt, a thick outline, and the like. In addition, the pointer 205 may be displayed corresponding to any one point of the horizontal axis and the vertical axis on the display 180, and may also be displayed corresponding to a plurality of points such as a line or a surface.
[102] FIG. 5 is a flowchart illustrating a method of operating a display device according to an embodiment of the present disclosure.
[103] In particular, FIG. 5 relates to a method of classifying the items selectable or clickable by the user among the items displayed on the screen.
[104] Hereinafter, the controller 170 may be composed of one or more processors.
[105] Referring to FIG. 5, the controller 170 of the display device 100 displays an execution screen of an application on the display 180 (S501).
[106] The application may be an application representing a content provider. The content provider may provide media content such as movies, DVDs, and music through the application.
[107] The application may be a web application that provides content through the web.
[108] The execution screen of the application will be described with reference to FIG. 6.
[109] FIG. 6 is an example of an execution screen of a web application according to an embodiment of the present disclosure.
[110] The execution screen 600 of the web application may include a content list 610 provided by a content provider.
[111] The content list 610 may include a plurality of content items 611 to 616.
[112] Each of the plurality of content items 611 to 616 may be an item selectable or clickable by the user.
[113] The plurality of content items 611 to 616 may include one or more of a thumbnail image representing the content and a title of the content.
[114] FIG. 5 will be described again.
[115] The controller 170 obtains a document corresponding to the application execution screen using a Document Object Model (DOM) (S503).
[116] The Document Object Model may be an interface for accessing an XML document or an HTML document.
[117] The Document Object Model may define all elements in a document and provide a method of accessing each element.
[118] The document corresponding to the application execution screen may be texts coded to express the execution screen of the application.
[119] The Document Object Model is the W3C's standard object model, and a document obtained through the Document Object Model may be expressed in a hierarchical structure.
[120] Meanwhile, the controller 170 may obtain the document through the Document Object Model in a state where a voice recognition service can be provided through the display device 100.
[121] For example, when receiving a command requesting a voice recognition service from the remote control device 200, the controller 170 may obtain the document corresponding to the execution screen of the application through the Document Object Model.
[122] As another example, when the controller 170 recognizes a wake-up word uttered by the user, it may obtain the document corresponding to the execution screen of the application through the Document Object Model.
[123] The controller 170 may receive the document corresponding to the application execution screen from an application provider that provides the application.
[124] FIG. 7 is a view illustrating an example of an HTML document obtained through the Document Object Model.
[125] Referring to FIG. 7, the HTML document 700 may have a hierarchical structure. The HTML document 700 may include an html element (or root element) as its top-level element.
[126] The html element may include a head element and a body element as intermediate elements.
[127] Each of the head element and the body element may include sub-elements.
[128] A sub-element of the head element may represent the title of a content item in text form.
[129] A sub-element of the body element may represent a link or a paragraph of a content item in text form.
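For illustration only, the hierarchical access that such a document allows can be sketched with standard DOM interfaces. The following is a minimal TypeScript sketch for a browser context; the markup is a hypothetical stand-in, not the actual document of FIG. 7.

  // A minimal sketch, assuming a browser environment that provides DOMParser.
  // The markup is a hypothetical example, not the document of FIG. 7.
  const html = `
    <html>
      <head><title>Content Title</title></head>
      <body><a href="/movie/1">Steel Rain</a><p>Paragraph text</p></body>
    </html>`;

  const doc: Document = new DOMParser().parseFromString(html, "text/html");

  // Walk the element tree depth-first, printing each element with its depth,
  // which makes the hierarchy (root > head/body > sub-elements) visible.
  function walk(el: Element, depth = 0): void {
    console.log(`${"  ".repeat(depth)}<${el.tagName.toLowerCase()}>`);
    for (const child of Array.from(el.children)) {
      walk(child, depth + 1);
    }
  }
  walk(doc.documentElement);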
[130] FIG. 5 will be described again.
[131] The controller 170 obtains a plurality of selectable content items using the document obtained through the Document Object Model (S505).
[132] The controller 170 may analyze the XML document or HTML document obtained through the Document Object Model and extract a plurality of selectable or clickable content items.
[133] This will be described with reference to the following drawing.
[134] FIG. 8 is a view illustrating a process of obtaining a plurality of selectable content items using a document obtained through the Document Object Model, according to an embodiment of the present disclosure.
[135] FIG. 8 is a flowchart detailing step S505 of FIG. 5.
[136] The controller 170 performs a first classification on each of the plurality of nodes included in the document (S801).
[137] The plurality of nodes (or plurality of elements) may include image nodes, input nodes, and other nodes.
[138] An image node (or element) may be a node containing an image tag.
[139] When the alt attribute value of an image node exists, the controller 170 may pass the image node through the first classification and obtain the image node as a candidate content item.
[140] That is, when an alt attribute value exists, the controller 170 may determine that the image tag is selectable.
[141] When the alt attribute value of an image node exists, the controller 170 may determine that the corresponding content item has passed the first classification.
[142] Some service screens apply replacement text to images by using the alt attribute.
[145] FIGS. 9a and 9b show service screens in which the alt attribute is applied to images of channels provided by a TV service.
[146] Referring to FIG. 9a, a channel service screen 910 providing channel information is shown, and referring to FIG. 9b, a personalized TV service screen 930 is shown.
[147] Each of the channel service screen 910 and the personalized TV service screen 930 may be an execution screen of a web application.
[148] The channel service screen 910 may include a plurality of channels.
[149] A specific channel 911 among the plurality of channels may include text replacing an image.
[150] That is, the specific channel 911 may include text (the channel name) instead of an image identifying the channel. This is precisely an example of applying text with the alt attribute.
[151] The specific channel 911 may be classified as a selectable candidate content item.
[152] The personalized TV service screen 930 may include a recording list, and the recording list may include a plurality of recording items.
[153] A specific recording item 931 among the plurality of recording items may include text (NBA) replacing an image.
[154] That is, the specific recording item 931 may include the text <NBA> instead of an image identifying the recording item, and this is precisely an example of applying text with the alt attribute.
[155] The specific recording item 931 may be classified as a selectable candidate content item.
[156] FIG. 8 will be described again.
[157] When the placeholder attribute value of an input node (or element) exists, the corresponding input node may be first-classified as a candidate content item.
[158] This will be described with reference to FIG. 10.
[159] FIG. 10 is a view illustrating an example in which a placeholder attribute value exists on an input node, shown through a web browser screen.
[160] Referring to FIG. 10, a web browser screen 1000 is shown, and the web browser screen 1000 may be an execution screen of a web application.
[161] The web browser screen 1000 may include a search bar 1010 for searching for content.
[162] The search bar 1010 includes text as a placeholder attribute value, <Search>.
[163] When the placeholder attribute value of an input node exists, the controller 170 may first-classify the search bar 1010 corresponding to the input node as a candidate content item.
[164] An example of the placeholder attribute value of an <input> node (or element) in an HTML document is as follows.

  <form>
    <input type="text" placeholder="Search">
  </form>

[170] FIG. 8 will be described again.
[171] The other nodes may be nodes in which the text value of a child Text node exists.
[172] That is, when a text value exists in a lower node of the HTML document, the controller 170 may select the content item corresponding to the upper node and the lower node as a candidate content item.
[173] Specifically, the controller 170 may search the child nodes of all nodes in the HTML document, and when the type of a child node is text, check the corresponding data value.
[174] That is, when the type of a child node is text, the controller 170 may classify the content item corresponding to the child node and the upper node of the child node as a candidate content item.
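As one possible reading of this first classification, the following TypeScript sketch for a browser context (the function name and the candidate list are illustrative assumptions, not from the disclosure) collects image nodes with an alt value, input nodes with a placeholder value, and nodes owning a non-empty child text node.

  // A minimal sketch of the first classification, assuming a browser DOM.
  function firstClassification(doc: Document): Element[] {
    const candidates: Element[] = [];

    for (const el of Array.from(doc.querySelectorAll("*"))) {
      // Image node: passes when an alt attribute value exists.
      if (el.tagName === "IMG" && el.getAttribute("alt")) {
        candidates.push(el);
        continue;
      }
      // Input node: passes when a placeholder attribute value exists.
      if (el.tagName === "INPUT" && el.getAttribute("placeholder")) {
        candidates.push(el);
        continue;
      }
      // Other node: passes when a child node of type text carries data.
      const hasTextChild = Array.from(el.childNodes).some(
        (n) => n.nodeType === Node.TEXT_NODE && (n as Text).data.trim().length > 0
      );
      if (hasTextChild) {
        candidates.push(el);
      }
    }
    return candidates;
  }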
[175] The controller 170 obtains a plurality of candidate content items that have passed the first classification (S803).
[176] The controller 170 determines whether each of the obtained candidate content items satisfies the second classification conditions (S805).
[177] The second classification conditions may include a first condition and a second condition.
[178] The first condition may be a condition that each node must exist within the window.
[179] The window may represent an area corresponding to the area of the execution screen of the application.
[180] The controller 170 may obtain the four vertex coordinates corresponding to each node using the Document Object Model. Referring to FIG. 7, coordinate information 750 for one element selected in the HTML document 700 is shown.
[181] The controller 170 may obtain the four vertex coordinates corresponding to a node using the coordinate information.
[182] The controller 170 may check whether one or more of the four vertex coordinates exist within the window.
[183] When one or more of the four vertex coordinates exist within the window, the controller 170 may select the candidate content item corresponding to the node as a final content item.
[184] The second condition may be a condition that the node itself is the topmost node, or that a transparent node having no text exists above its own node.
[185] With the position of a node, the controller 170 may find out and check the topmost Node through the elementFromPoint method of the document interface. The elementFromPoint method may be an example of an API (Application Programming Interface) of the Document Object Model.
[186] When a node is not the topmost node, if a node that is transparent and has no text, or that has only a border effect, exists above the node, the controller 170 may select the candidate content item corresponding to the node as a final content item.
[187] A node having only a border effect may be a node corresponding to a content item that can be identified only by a highlight box.
[188] With the position of a Node, the controller 170 may find out a Node Array through the elementsFromPoint method of the document interface, and check whether there is no Text Node having a data value among the Nodes above its own Node.
[189] When there is no Text Node having a data value among the Nodes above its own Node, the controller 170 may select the content item corresponding to the node as a final content item.
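The two conditions can be sketched with standard DOM APIs. In the following TypeScript sketch, getBoundingClientRect supplies the four vertex coordinates, and elementFromPoint/elementsFromPoint supply the topmost-node check; probing the node's center point and reducing the transparency test to "no text-bearing child" are simplifying assumptions.

  // A minimal sketch of the second classification, assuming a browser DOM.
  function passesSecondClassification(el: Element): boolean {
    const r = el.getBoundingClientRect();
    const corners: Array<[number, number]> = [
      [r.left, r.top], [r.right, r.top], [r.left, r.bottom], [r.right, r.bottom],
    ];

    // Condition 1: at least one vertex must lie inside the window.
    const inWindow = corners.some(
      ([x, y]) => x >= 0 && y >= 0 && x <= window.innerWidth && y <= window.innerHeight
    );
    if (!inWindow) return false;

    // Condition 2: the node is topmost at its center point, or every node
    // stacked above it carries no text.
    const cx = (r.left + r.right) / 2;
    const cy = (r.top + r.bottom) / 2;
    if (document.elementFromPoint(cx, cy) === el) return true;

    const stack = document.elementsFromPoint(cx, cy); // topmost first
    const idx = stack.indexOf(el);
    if (idx < 0) return false; // node is not hit-testable at this point

    return stack.slice(0, idx).every(
      (n) =>
        !Array.from(n.childNodes).some(
          (c) => c.nodeType === Node.TEXT_NODE && (c as Text).data.trim().length > 0
        )
    );
  }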
[190] The controller 170 obtains the candidate content items satisfying the second classification conditions as selectable content items (S807).
[191] When there is a candidate content item that does not satisfy the second classification conditions, the controller 170 excludes the candidate content item from the final content items (S809).
[192] That is, the controller 170 may determine a candidate content item that does not satisfy the second classification conditions to be a non-selectable content item.
[193] FIG. 5 will be described again.
[194] Thereafter, the controller 170 receives a voice command uttered by the user (S507) and converts the received voice command into text (S509).
[195] The voice command may be a command for selecting a content item, not a wake-up word.
[196] The controller 170 may convert the voice command into text using an STT (Speech To Text) engine.
[197] As another example, step S507 of receiving the user's voice command and step S509 of converting the received voice command into text may be performed before step S501.
[198] The controller 170 determines whether a content item matching the converted text exists among the plurality of content items (S511).
[199] The controller 170 may determine whether a content item matching the text exists among the plurality of content items obtained through the second classification process.
[200] When an item matching the converted text exists among the plurality of content items, the controller 170 selects the matching content item (S513).
[201] The controller 170 may compare the text data value included in the node corresponding to each of the plurality of content items with the converted text, and obtain the content item corresponding to the matching text data value.
[202] As the matching content item is selected, the controller 170 may play back the corresponding content item.
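A minimal sketch of this matching step, assuming each candidate carries its visible text and that an exact comparison after trimming and case-folding is acceptable (the disclosure does not specify the comparison rule):

  // The normalization rule here is an assumption.
  function selectByVoiceText(candidates: Element[], recognized: string): boolean {
    const norm = (s: string) => s.trim().toLowerCase();
    const match = candidates.find(
      (el) => norm(el.textContent ?? "") === norm(recognized)
    );
    if (match instanceof HTMLElement) {
      match.click(); // same effect as a direct user selection or click
      return true;
    }
    return false; // the caller may then notify that voice recognition failed
  }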
[203] As such, according to an embodiment of the present disclosure, the user can select desired content with a simple voice utterance alone, without a direct click or selection operation.
[204] Accordingly, an improved user experience can be provided to the user.
[205] Meanwhile, when no item matching the converted text exists among the plurality of content items, the controller 170 outputs a notification informing of the failure of voice recognition (S515).
[206] That is, the controller 170 may output, through the display 180 or the audio output unit 185, a notification informing that no content item matches the voice command uttered by the user.
[207] FIG. 11 is a view illustrating the configuration of an application manager according to an embodiment of the present disclosure.
[208] The application manager 1100 of FIG. 11 may be included in the controller 170.
[209] The application manager 1100 may include a candidate classifier 1110 and a matching performer 1130.
[210] The candidate classifier 1110 may extract a plurality of candidate content items from the HTML document or XML document obtained through the Document Object Model.
[211] The candidate classifier 1110 may extract a plurality of selectable content items from among the plurality of candidate content items through the first classification and the second classification.
[212] For the first classification and the second classification, the description of FIG. 8 applies.
[213] The matching performer 1130 may determine whether a content item matching the text of the voice command uttered by the user exists among the plurality of selectable or clickable content items.
[214] When a content item matching the text of the voice command uttered by the user exists among the plurality of selectable or clickable content items, the matching performer 1130 may select or click the corresponding content item.
[215] FIGS. 12 and 13 are views illustrating a user scenario according to an embodiment of the present disclosure.
[216] Referring to FIG. 12, the execution screen 600 of a web application is shown.
[217] The user may utter the name of a content item, <Steel Rain (강철비)>.
[218] The display device 100 may convert the voice command uttered by the user into text, and determine whether a content item matching the converted text exists among the plurality of selectable or clickable content items.
[219] When the display device 100 determines that a specific content item 611 matches the converted text, it may output the same result as if the content item 611 had been selected or clicked.
[220] For example, the display device 100 may play back a content video 1300 corresponding to the content item 611, as shown in FIG. 13.
[221] As another example, the display device 100 may display detailed information of the content item 611 through the display 180.
[222] Next, a method of handling a case where a plurality of texts identical to the text of the voice uttered by the user exist in the execution screen of the application will be described.
[223] FIG. 14 is a flowchart illustrating a method of handling a case where a plurality of texts identical to the text of the voice uttered by the user exist.
[224] In particular, FIG. 14 may be a view illustrating an example of processing a case where an item that cannot actually be selected or clicked is included among the plurality of content items finally determined to be selectable or clickable in step S511 of FIG. 5.
[225] A content item that cannot actually be selected or clicked may be included among the selectable or clickable content items finally obtained in step S511 of FIG. 5.
[226] The controller 170 starts monitoring changes in the Document Object Model (hereinafter, DOM) (S1401).
[227] The controller 170 creates an observer using MutationObserver and starts monitoring DOM changes.
[228] The controller 170 may monitor changes in the DOM when the display device 100 provides a voice recognition service.
[229] As another example, the controller 170 may monitor changes in the DOM upon receiving a voice command uttered by the user.
[230] MutationObserver may be a constructor that monitors, through the Web API, whether the DOM has been changed.
[231] For creating an observer using MutationObserver and starting to monitor DOM changes, the description on the following website may be referred to.
[232] (https://developer.mozilla.org/en-US/docs/Web/API/MutationObserver/MutationObserver)
[233] The following values may be used as options.
[234] options = {childList: true, attributes: true, subtree: true, characterData: true, attributeOldValue: true, characterDataOldValue: true}
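A minimal TypeScript sketch of this monitoring setup, using exactly the options listed above; the flag-based callback is an illustrative assumption:

  // Create an observer with the options above and raise a flag
  // whenever any DOM change is reported.
  let domChanged = false;

  const observer = new MutationObserver(() => {
    domChanged = true; // a change in the DOM has been detected
  });

  observer.observe(document.documentElement, {
    childList: true,
    attributes: true,
    subtree: true,
    characterData: true,
    attributeOldValue: true,
    characterDataOldValue: true,
  });

  // observer.disconnect() ends the monitoring of DOM changes (step S1409).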
[235] The controller 170 determines whether a plurality of texts corresponding to the voice command uttered by the user exist in the execution screen of the application (S1403).
[236] After starting to monitor DOM changes, the controller 170 may determine whether a plurality of texts corresponding to the voice command uttered by the user exist.
[237] The controller 170 may determine whether a plurality of content items matching the text corresponding to the voice command uttered by the user exist among the plurality of content items determined to be selectable or clickable.
[238] When a plurality of texts corresponding to the voice command uttered by the user exist, the controller 170 selects the plurality of duplicate texts in order according to priority (S1405).
[239] The priority may be determined according to the position of each of the plurality of duplicate texts within the execution screen of the application.
[240] The controller 170 may obtain the coordinates of each of the plurality of duplicate texts through the DOM.
[241] Based on the obtained coordinates, the controller 170 may select the duplicate texts sequentially from top to bottom.
[242] As another example, based on the obtained coordinates, the controller 170 may select the duplicate texts sequentially from left to right.
[243] As such, the priority may be assigned based on the arrangement of the text most likely to be selected first from the user's point of view.
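A minimal sketch of this priority assignment, assuming coordinates are read through getBoundingClientRect; combining the top-to-bottom order with a left-to-right tiebreaker is an assumption that merges the two orderings described above:

  // Order duplicate texts by screen position: top-to-bottom first,
  // then left-to-right as a tiebreaker.
  function byPriority(duplicates: Element[]): Element[] {
    return [...duplicates].sort((a, b) => {
      const ra = a.getBoundingClientRect();
      const rb = b.getBoundingClientRect();
      return ra.top !== rb.top ? ra.top - rb.top : ra.left - rb.left;
    });
  }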
[244] The controller 170 determines whether a change in the DOM is detected according to the selection of the duplicate text (S1407).
[245] After any one of the plurality of duplicate texts is selected, the controller 170 may monitor whether a change in the DOM occurs for a certain period of time. The certain period of time may be 50 ms, but this figure is merely an example.
[246] When no change in the DOM is detected, the controller 170 returns to step S1405 and selects the duplicate text of the next priority.
[247] When no change in the DOM is detected for the certain period of time after the selection of a duplicate text, the controller 170 may determine that the node corresponding to the duplicate text cannot be selected or clicked.
[248] When a change in the DOM is detected, the controller 170 determines that the corresponding duplicate text is selectable and ends the monitoring of DOM changes (S1409).
[249] When a change in the DOM is detected within the certain period of time after the selection of a duplicate text, the controller 170 may determine that the selection or click on the node corresponding to the duplicate text was performed normally.
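Putting the pieces together, the following sketch of steps S1405 to S1409 builds on the observer and byPriority sketches above; the helper names and the promise-based wait are assumptions:

  // Click duplicates in priority order and accept the first one whose
  // click changes the DOM within ~50 ms (the example interval above).
  const wait = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

  async function clickFirstEffective(duplicates: Element[]): Promise<Element | null> {
    for (const el of byPriority(duplicates)) {
      domChanged = false;          // reset the flag from the observer sketch
      (el as HTMLElement).click(); // select this duplicate text
      await wait(50);              // check for a DOM change for 50 ms
      if (domChanged) {
        observer.disconnect();     // DOM changed: end monitoring (S1409)
        return el;                 // this duplicate text is selectable
      }
      // No DOM change: this node cannot actually be clicked; try the next one.
    }
    return null;
  }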
[250] FIG. 15 is a view illustrating a user scenario of processing a case where duplicate texts are included in an execution screen of a web application.
[251] Referring to FIG. 15, the execution screen 1500 of a web application is shown.
[252] The execution screen 1500 of the web application may include two duplicate texts 1510 and 1530 including the text <Search> (검색).
[253] When the user utters the voice command <Search>, the display device 100 may extract the duplicate texts 1510 and 1530 matching <Search> through the HTML document corresponding to the execution screen of the web application.
[254] The display device 100 may obtain the coordinates of each of the first duplicate text 1510 and the second duplicate text 1530 through the HTML document.
[255] The display device 100 may determine the priority of selection targets based on the coordinates of the first duplicate text 1510 and the coordinates of the second duplicate text 1530.
[256] Since the first duplicate text 1510 is located above the second duplicate text 1530, the display device 100 may select the first duplicate text 1510 first.
[257] After selecting the first duplicate text 1510, the display device 100 may determine whether a change in the DOM occurs within a certain period of time.
[258] When no change in the DOM occurs within the certain period of time after selecting the first duplicate text 1510, the display device 100 may finally determine that the first duplicate text 1510 cannot be selected or clicked.
[259] The display device 100 may select the second duplicate text 1530 of the next priority. When a change in the DOM occurs within the certain period of time after the selection of the second duplicate text 1530, the display device 100 may output the same result as if the second duplicate text 1530 had been selected or clicked.
[260] That is, the display device 100 may perform a search for the command entered in the search window.
[261] Next, FIG. 16 will be described.
[262] FIG. 16 is a view illustrating another user scenario of processing a case where duplicate texts are included in an execution screen of a web application.
[263] Referring to FIG. 16, the execution screen 1600 of a web application is shown.
[264] The execution screen 1600 of the web application may include two duplicate texts 1610 and 1630 including the text <LOG IN>.
[265] When the user utters the voice command <LOG IN>, the display device 100 may extract the duplicate texts 1610 and 1630 matching <LOG IN> through the HTML document corresponding to the execution screen of the web application.
[266] The display device 100 may obtain the coordinates of each of the first duplicate text 1610 and the second duplicate text 1630 through the HTML document.
[267] The display device 100 may determine the priority of selection targets based on the coordinates of the first duplicate text 1610 and the coordinates of the second duplicate text 1630.
[268] Since the first duplicate text 1610 is located above the second duplicate text 1630, the display device 100 may select the first duplicate text 1610 first.
[269] After selecting the first duplicate text 1610, the display device 100 may determine whether a change in the DOM occurs within a certain period of time.
[270] When no change in the DOM occurs within the certain period of time after selecting the first duplicate text 1610, the display device 100 may finally determine that the first duplicate text 1610 cannot be selected or clicked.
[271] The display device 100 may select the second duplicate text 1630 of the next priority. When a change in the DOM occurs within the certain period of time after the selection of the second duplicate text 1630, the display device 100 may output the same result as if the second duplicate text 1630 had been selected or clicked.
[272] That is, the display device 100 may log in to the corresponding web application.
[273] There may be a case where a title, such as a menu or category name, and the text in the corresponding sub-page are identical. In this case, the title is simple text that cannot be clicked, and the actually clickable text may exist in the sub-page.
[274] According to an embodiment of the present disclosure, when a title such as a menu or category name and the text in the corresponding sub-page are identical, the text that is actually selectable or clickable can be quickly distinguished, and the operation according to the user's voice command can also be performed properly.
[275] FIG. 17 is a view illustrating the configuration of an application manager according to another embodiment of the present disclosure.
[276] Referring to FIG. 17, the application manager 1700 may include a candidate classifier 1710, a matching performer 1730, and a duplicate text handler 1750.
[277] The application manager 1700 may be included in the controller 170.
[278] The candidate classifier 1710 may extract a plurality of candidate content items from the HTML document or XML document obtained through the Document Object Model.
[279] The candidate classifier 1710 may extract a plurality of selectable content items from among the plurality of candidate content items through the first classification and the second classification.
[280] For the first classification and the second classification, the description of FIG. 8 applies.
[281] The matching performer 1730 may determine whether a content item matching the text of the voice command uttered by the user exists among the plurality of content items determined to be selectable or clickable.
[282] When a content item matching the text of the voice command uttered by the user exists among the plurality of content items determined to be selectable or clickable, the matching performer 1730 may select or click the corresponding content item.
[283] The duplicate text handler 1750 may extract a plurality of duplicate texts determined to be selectable or clickable.
[284] The duplicate text handler 1750 may determine whether the text corresponding to the voice command uttered by the user corresponds to a duplicate text.
[285] The duplicate text handler 1750 may determine whether a plurality of texts corresponding to the voice command uttered by the user exist in the execution screen of the application.
[286] When a plurality of texts corresponding to the voice command uttered by the user exist, the duplicate text handler 1750 may select the plurality of duplicate texts in order according to priority.
[287] The duplicate text handler 1750 may determine whether a change in the DOM is detected according to the selection of a duplicate text.
[288] When no change in the DOM is detected, the duplicate text handler 1750 may select the duplicate text of the next priority.
[289] When a change in the DOM is detected, the duplicate text handler 1750 may determine that the corresponding duplicate text is selectable and end the monitoring of DOM changes.
[290] The duplicate text handler 1750 may transmit, to the matching performer 1730, a signal indicating that a change in the DOM has been detected.
[291] The matching performer 1730 may select the duplicate text that caused the change in the DOM.
[292] According to an embodiment of the present disclosure, the method described above may be implemented as processor-readable code on a medium on which a program is recorded. Examples of processor-readable media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and also include implementation in the form of carrier waves (e.g., transmission over the Internet).
[293] The display device described above is not limited to the configurations and methods of the embodiments described above; all or parts of the embodiments may be selectively combined so that various modifications can be made.

Claims

[Claim 1] A display device comprising: a display configured to display an execution screen of an application; and a processor configured to: receive a voice command uttered by a user; obtain, upon receiving the voice command, a document corresponding to the execution screen using a Document Object Model; obtain a plurality of selectable content items from the obtained document; determine whether a content item matching text of the voice command exists among the obtained plurality of content items; and select the matching content item when, as a result of the determination, a content item matching the text of the voice command exists.

[Claim 2] The display device of claim 1, wherein the processor obtains, as a plurality of candidate content items, one or more image nodes, one or more input nodes, and one or more text nodes among a plurality of nodes included in the document, and obtains the plurality of selectable content items from among the plurality of candidate content items.

[Claim 3] The display device of claim 2, wherein the processor classifies an image node as a candidate content item when an alt attribute value of the image node exists, classifies an input node as a candidate content item when a placeholder attribute value of the input node exists, and classifies an upper node as a candidate content item when a text value of a child text node of the upper node exists.

[Claim 4] The display device of claim 3, wherein the processor determines whether each of the plurality of candidate content items satisfies a classification condition, and the classification condition is that some of the coordinates corresponding to each node are within a window representing the area of the execution screen of the application, and that each node is the topmost node or a transparent node having no text exists above its own node.

[Claim 5] The display device of claim 1, wherein the document is an HTML document, and the Document Object Model is an interface for accessing the HTML document.

[Claim 6] The display device of claim 1, wherein the processor outputs a notification informing of a failure of voice recognition when no item matching the text exists among the plurality of content items.

[Claim 7] The display device of claim 1, wherein, when a plurality of duplicate texts matching the text of the voice command exist, the processor selects, from among the plurality of duplicate texts, the duplicate text that caused a change in the Document Object Model.

[Claim 8] The display device of claim 7, wherein the processor selects the plurality of duplicate texts sequentially according to priority, and finally selects the duplicate text that caused the change in the Document Object Model within a certain period of time according to the selection.

[Claim 9] The display device of claim 8, wherein the priority is determined according to the position of each of the plurality of duplicate texts.

[Claim 10] The display device of claim 1, wherein the processor plays back the content of the content item according to the selection of the content item.

[Claim 11] A method of operating a display device, the method comprising: displaying an execution screen of an application; receiving a voice command uttered by a user; obtaining, upon receiving the voice command, a document corresponding to the execution screen using a Document Object Model; obtaining a plurality of selectable content items from the obtained document; determining whether a content item matching text of the voice command exists among the obtained plurality of content items; and selecting the matching content item when, as a result of the determination, a content item matching the text of the voice command exists.

[Claim 12] The method of claim 11, wherein the obtaining of the plurality of selectable content items comprises: obtaining, as a plurality of candidate content items, one or more image nodes, one or more input nodes, and one or more text nodes among a plurality of nodes included in the document; and obtaining the plurality of selectable content items from among the plurality of candidate content items.

[Claim 13] The method of claim 12, wherein the obtaining of the plurality of candidate content items comprises: classifying an image node as a candidate content item when an alt attribute value of the image node exists; classifying an input node as a candidate content item when a placeholder attribute value of the input node exists; and classifying an upper node as a candidate content item when a text value of a child text node of the upper node exists.

[Claim 14] The method of claim 13, further comprising determining whether each of the plurality of candidate content items satisfies a classification condition, wherein the classification condition is that some of the coordinates corresponding to each node are within a window representing the area of the execution screen of the application, and that each node is the topmost node or a transparent node having no text exists above its own node.

[Claim 15] The method of claim 11, wherein the document is an HTML document, and the Document Object Model is an interface for accessing the HTML document.
PCT/KR2020/002399 2019-02-26 2020-02-19 Display device and method of operating the same WO2020175845A1 (ko)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/428,798 US11978448B2 (en) 2019-02-26 2020-02-19 Display device and method of operating the same

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20190022179 2019-02-26
KR10-2019-0022179 2019-02-26

Publications (1)

Publication Number Publication Date
WO2020175845A1 true WO2020175845A1 (ko) 2020-09-03

Family

ID=72239774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/002399 WO2020175845A1 (ko) 2019-02-26 2020-02-19 Display device and method of operating the same

Country Status (2)

Country Link
US (1) US11978448B2 (ko)
WO (1) WO2020175845A1 (ko)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111787408B (zh) * 2020-07-31 2022-07-22 Beijing Xiaomi Mobile Software Co., Ltd. Processing method for mixed playback of multiple types of multimedia, playback device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080094368A1 (en) * 2006-09-06 2008-04-24 Bas Ording Portable Electronic Device, Method, And Graphical User Interface For Displaying Structured Electronic Documents
KR20130102839A (ko) * 2012-03-08 2013-09-23 Samsung Electronics Co., Ltd. Method and apparatus for extracting body text from a web page
KR20160014926A (ko) * 2014-07-30 2016-02-12 Samsung Electronics Co., Ltd. Speech recognition apparatus and control method therefor
KR20160083058A (ko) * 2013-11-07 2016-07-11 Skipstone LLC System and method for automatically activating reactive responses within live or stored video, audio or textual content
KR20170129165A (ko) * 2015-03-20 2017-11-24 Facebook, Inc. Method for improving control by combining eye tracking and speech recognition

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070006078A1 (en) * 2005-07-01 2007-01-04 Microsoft Corporation Declaratively responding to state changes in an interactive multimedia environment
US20070006079A1 (en) * 2005-07-01 2007-01-04 Microsoft Corporation State-based timing for interactive multimedia presentations
US9218052B2 (en) * 2013-03-14 2015-12-22 Samsung Electronics Co., Ltd. Framework for voice controlling applications
US20140350928A1 (en) * 2013-05-21 2014-11-27 Microsoft Corporation Method For Finding Elements In A Webpage Suitable For Use In A Voice User Interface
US10423709B1 (en) * 2018-08-16 2019-09-24 Audioeye, Inc. Systems, devices, and methods for automated and programmatic creation and deployment of remediations to non-compliant web pages or user interfaces
US10796086B2 (en) * 2018-08-25 2020-10-06 Microsoft Technology Licensing, Llc Selectively controlling modification states for user-defined subsets of objects within a digital document
US11620102B1 (en) * 2018-09-26 2023-04-04 Amazon Technologies, Inc. Voice navigation for network-connected device browsers

Also Published As

Publication number Publication date
US11978448B2 (en) 2024-05-07
US20220005473A1 (en) 2022-01-06

Similar Documents

Publication Publication Date Title
US11704089B2 (en) Display device and system comprising same
US20220293099A1 (en) Display device and artificial intelligence system
US10448107B2 (en) Display device
US11412281B2 (en) Channel recommendation device and operating method therefor
KR20170035167A (ko) Display device and operating method thereof
KR102576388B1 (ko) Display device and operating method thereof
US11544602B2 (en) Artificial intelligence device
US20220293106A1 (en) Artificial intelligence server and operation method thereof
US20210014572A1 (en) Display device
US11907011B2 (en) Display device
US12087296B2 (en) Display device and artificial intelligence server
WO2020175845A1 (ko) Display device and method of operating the same
KR20200102861A (ko) Display device and operating method thereof
US11881220B2 (en) Display device for providing speech recognition service and method of operation thereof
EP3905707A1 (en) Display device and operating method thereof
US20220232278A1 (en) Display device for providing speech recognition service
KR102700206B1 (ko) Display device
KR20190034856A (ko) Display device and operating method thereof
EP4236329A1 (en) Display device and operating method therefor
US20220343909A1 (en) Display apparatus
US20230054251A1 (en) Natural language processing device
KR20200069936A (ko) Apparatus for providing information contained in media and method therefor
KR20190000173A (ko) Display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20762376

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20762376

Country of ref document: EP

Kind code of ref document: A1