US20110292082A1 - Graphics Display System With Anti-Flutter Filtering And Vertical Scaling Feature - Google Patents
Info
- Publication number
- US20110292082A1 (application US 12/953,774)
- Authority
- US
- United States
- Prior art keywords
- graphics
- window
- video
- memory
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0611—Improving I/O performance in relation to response time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0653—Monitoring storage devices or systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/007—Transform coding, e.g. discrete cosine transform
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/001—Arbitration of resources in a display system, e.g. control of access to frame buffer by video controller and/or main processor
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
- G09G5/06—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed using colour palettes, e.g. look-up tables
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/12—Synchronisation between the display unit and other units, e.g. other display units, video-disc players
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/14—Display of multiple viewports
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/22—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of characters or indicia using display control signals derived from coded signals representing the characters or indicia, e.g. with a character-code memory
- G09G5/24—Generation of individual character patterns
- G09G5/28—Generation of individual character patterns for enhancement of character form, e.g. smoothing
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/34—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators for rolling or scrolling
- G09G5/346—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators for rolling or scrolling for systems having a bit-mapped display memory
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/363—Graphics controllers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/423—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42204—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/426—Internal components of the client ; Characteristics thereof
- H04N21/42653—Internal components of the client ; Characteristics thereof for processing graphics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4305—Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/434—Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440263—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/443—OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
- H04N21/4438—Window management, e.g. event handling following interaction with the user interface
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/445—Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
- H04N5/44504—Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/445—Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
- H04N5/45—Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0117—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
- H04N7/0122—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal the input and the output signals having different aspect ratios
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0135—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/44—Colour synchronisation
- H04N9/45—Generation or recovery of colour sub-carriers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/641—Multi-purpose receivers, e.g. for auxiliary information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/642—Multi-standard receivers
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2310/00—Command of the display device
- G09G2310/02—Addressing, scanning or driving the display screen or processing steps related thereto
- G09G2310/0224—Details of interlacing
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/02—Improving the quality of display appearance
- G09G2320/0247—Flicker reduction other than flicker reduction circuits used for single beam cathode-ray tubes
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/02—Handling of images in compressed format, e.g. JPEG, MPEG
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/10—Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/12—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
- G09G2340/125—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/02—Graphics controller able to handle multiple formats, e.g. input or output formats
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/12—Frame memory handling
- G09G2360/121—Frame memory handling using a cache memory
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/12—Frame memory handling
- G09G2360/125—Frame memory handling using unified memory architecture [UMA]
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/12—Frame memory handling
- G09G2360/126—The frame memory having additional data ports, not inclusive of standard details of the output serial port of a VRAM
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/12—Frame memory handling
- G09G2360/128—Frame memory using a Synchronous Dynamic RAM [SDRAM]
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
- G09G5/024—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed using colour registers, e.g. to control background, foreground, surface filling
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
- G09G5/026—Control of mixing and/or overlay of colours in general
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N11/00—Colour television systems
- H04N11/06—Transmission systems characterised by the manner in which the individual colour picture signal components are combined
- H04N11/12—Transmission systems characterised by the manner in which the individual colour picture signal components are combined using simultaneous signals only
- H04N11/14—Transmission systems characterised by the manner in which the individual colour picture signal components are combined using simultaneous signals only in which one signal, modulated in phase and amplitude, conveys colour information and a second signal conveys brightness information, e.g. NTSC-system
- H04N11/143—Encoding means therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N11/00—Colour television systems
- H04N11/06—Transmission systems characterised by the manner in which the individual colour picture signal components are combined
- H04N11/20—Conversion of the manner in which the individual colour picture signal components are combined, e.g. conversion of colour television standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/426—Internal components of the client ; Characteristics thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
- H04N5/12—Devices in which the synchronising signals are only operative if a phase difference occurs between synchronising and synchronised scanning devices, e.g. flywheel synchronising
- H04N5/126—Devices in which the synchronising signals are only operative if a phase difference occurs between synchronising and synchronised scanning devices, e.g. flywheel synchronising whereby the synchronisation signal indirectly commands a frequency generator
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/46—Receiver circuitry for the reception of television signals according to analogue transmission standards for receiving on more than one standard at will
Definitions
- the present invention relates generally to integrated circuits, and more particularly to an integrated circuit graphics display system.
- Graphics display systems are typically used in television control electronics, such as set top boxes, integrated digital TVs, and home network computers. Graphics display systems typically include a display engine that may perform display functions. The display engine is the part of the graphics display system that receives display pixel data from any combination of locally attached video and graphics input ports, processes the data in some way, and produces final display pixels as output.
- the present invention provides a graphics display system that includes a graphics filter.
- the graphics filter includes means for scaling graphics and means for performing anti-flutter filtering.
- the means for performing anti-flutter filtering is preferably the same as the means for scaling graphics.
- FIG. 1 is a block diagram of an integrated circuit graphics display system according to a presently preferred embodiment of the invention
- FIG. 2 is a block diagram of certain functional blocks of the system
- FIG. 3 is a block diagram of an alternate embodiment of the system of FIG. 2 that incorporates an on-chip I/O bus;
- FIG. 4 is a functional block diagram of exemplary video and graphics display pipelines
- FIG. 5 is a more detailed block diagram of the graphics and video pipelines of the system.
- FIG. 6 is a map of an exemplary window descriptor for describing graphics windows and solid surfaces
- FIG. 7 is a flow diagram of an exemplary process for sorting window descriptors in a window controller
- FIG. 8 is a flow diagram of a graphics window control data passing mechanism and a color look-up table loading mechanism
- FIG. 9 is a state diagram of a state machine in a graphics converter that may be used during processing of header packets
- FIG. 10 is a block diagram of an embodiment of a display engine
- FIG. 11 is a block diagram of an embodiment of a color look-up table (CLUT).
- FIG. 12 is a timing diagram of signals that may be used to load a CLUT
- FIG. 13 is a block diagram illustrating exemplary graphics line buffers
- FIG. 14 is a flow diagram of a system for controlling the graphics line buffers of FIG. 13 ;
- FIG. 15 is a representation of left scrolling using a window soft horizontal scrolling mechanism
- FIG. 16 is a representation of right scrolling using a window soft horizontal scrolling mechanism
- FIG. 17 is a flow diagram illustrating a system that uses graphics elements or glyphs for anti-aliased text and graphics applications
- FIG. 18 is a block diagram of certain functional blocks of a video decoder for performing video synchronization
- FIG. 19 is a block diagram of an embodiment of a chroma-locked sample rate converter (SRC);
- FIG. 20 is a block diagram of an alternate embodiment of the chroma-locked SRC of FIG. 19 ;
- FIG. 21 is a block diagram of an exemplary line-locked SRC
- FIG. 22 is a block diagram of an exemplary time base corrector (TBC).
- FIG. 23 is a flow diagram of a process that employs a TBC to synchronize an input video to a display clock
- FIG. 24 is a flow diagram of a process for video scaling in which downscaling is performed prior to capture of video in memory and upscaling is performed after reading video data out of memory;
- FIG. 25 is a detailed block diagram of components used during video scaling with signal paths involved in downscaling
- FIG. 26 is a detailed block diagram of components used during video scaling with signal paths involved in upscaling
- FIG. 27 is a detailed block diagram of components that may be used during video scaling with signal paths indicated for both upscaling and downscaling;
- FIG. 28 is a flow diagram of an exemplary process for blending graphics and video surfaces
- FIG. 29 is a flow diagram of an exemplary process for blending graphics windows into a combined blended graphics output
- FIG. 30 is a flow diagram of an exemplary process for blending graphics, video and background color
- FIG. 31 is a block diagram of a polyphase filter that performs both anti-flutter filtering and vertical scaling of graphics windows;
- FIG. 32 is a functional block diagram of an exemplary memory service request and handling system with dual memory controllers
- FIG. 33 is a functional block diagram of an implementation of a real time scheduling system
- FIG. 34 is a timing diagram of an exemplary CPU servicing mechanism that has been implemented using real time scheduling
- FIG. 35 is a timing diagram that illustrates certain principles of critical instant analysis for an implementation of real time scheduling
- FIG. 36 is a flow diagram illustrating servicing of requests according to the priority of the task.
- FIG. 37 is a block diagram of a graphics accelerator, which may be coupled to a CPU and a memory controller.
- the graphics display system is preferably contained in an integrated circuit 10 .
- the integrated circuit may include inputs 12 for receiving video signals 14 , a bus 20 for connecting to a CPU 22 , a bus 24 for transferring data to and from memory 28 , and an output 30 for providing a video output signal 32 .
- the system may further include an input 26 for receiving audio input 34 and an output 27 for providing audio output 36 .
- the graphic display system accepts video input signals that may include analog video signals, digital video signals, or both.
- the analog signals may be, for example, NTSC, PAL and SECAM signals or any other conventional type of analog signal.
- the digital signals may be in the form of decoded MPEG signals or other format of digital video.
- the system includes an on-chip decoder for decoding the MPEG or other digital video signals input to the system.
- Graphics data for display is produced by any suitable graphics library software, such as Direct Draw marketed by Microsoft Corporation, and is read from the CPU 22 into the memory 28 .
- the video output signals 32 may be analog signals, such as composite NTSC, PAL, Y/C (S-video), SECAM or other signals that include video and graphics information.
- the system provides serial digital video output to an on-chip or off-chip serializer that may encrypt the output.
- the graphics display system memory 28 is preferably a unified synchronous dynamic random access memory (SDRAM) that is shared by the system, the CPU 22 and other peripheral components.
- the CPU uses the unified memory for its code and data while the graphics display system performs all graphics, video and audio functions assigned to it by software.
- the amount of memory and CPU performance are preferably tunable by the system designer for the desired mix of performance and memory cost.
- a set-top box is implemented with SDRAM that supports both the CPU and graphics.
- the graphics display system preferably includes a video decoder 50 , video scaler 52 , memory controller 54 , window controller 56 , display engine 58 , video compositor 60 , and video encoder 62 .
- the system may optionally include a graphics accelerator 64 and an audio engine 66 .
- the system may display graphics, passthrough video, scaled video or a combination of the different types of video and graphics. Passthrough video includes digital or analog video that is not captured in memory. The passthrough video may be selected from the analog video or the digital video by a multiplexer.
- Bypass video, which may come into the chip on a separate input, includes analog video that is digitized off-chip into conventional YUV (luma chroma) format by any suitable decoder, such as the BT829 decoder, available from Brooktree Corporation, San Diego, Calif.
- YUV format may also be referred to as YCrCb format where Cr and Cb are equivalent to U and V, respectively.
- the video decoder (VDEC) 50 preferably digitizes and processes analog input video to produce internal YUV component signals with separated luma and chroma components. In an alternate embodiment, the digitized signals may be processed in another format, such as RGB.
- the VDEC 50 preferably includes a sample rate converter 70 and a time base corrector 72 that together allow the system to receive non-standard video signals, such as signals from a VCR.
- the time base corrector 72 enables the video encoder to work in passthrough mode, and corrects digitized analog video in the time domain to reduce or prevent jitter.
- the video scaler 52 may perform both downscaling and upscaling of digital video and analog video as needed.
- scale factors may be adjusted continuously from a scale factor of much less than one to a scale factor of four.
- when the system receives both analog and digital video input, either one may be scaled while the other is displayed full size at the same time as passthrough video. Any portion of the input may be the source for video scaling.
- the video scaler preferably downscales before capturing video frames to memory, and upscales after reading from memory, but preferably does not perform both upscaling and downscaling at the same time.
- the memory controller 54 preferably reads and writes video and graphics data to and from memory by using burst accesses with burst lengths that may be assigned to each task.
- the memory is any suitable memory such as SDRAM.
- the memory controller includes two substantially similar SDRAM controllers, one primarily for the CPU and the other primarily for the graphics display system, while either controller may be used for any and all of these functions.
- the graphics display system preferably processes graphics data using logical windows, also referred to as viewports, surfaces, sprites, or canvasses, that may overlap or cover one another with arbitrary spatial relationships. Each window is preferably independent of the others.
- the windows may consist of any combination of image content, including anti-aliased text and graphics, patterns, GIF images, JPEG images, live video from MPEG or analog video, three dimensional graphics, cursors or pointers, control panels, menus, tickers, or any other content, all or some of which may be animated.
- Window descriptors are data structures that describe one or more parameters of the graphics window. Window descriptors may include, for example, image pixel format, pixel color type, alpha blend factor, location on the screen, address in memory, depth order on the screen, or other parameters.
- the system preferably supports a wide variety of pixel formats, including RGB 16, RGB 15, YUV 4:2:2 (ITU-R 601), CLUT2, CLUT4, CLUT8 or others.
- each pixel in the preferred embodiment has its own alpha value.
- window descriptors are not used for video windows. Instead, parameters for video windows, such as memory start address and window size are stored in registers associated with the video compositor.
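- the window-descriptor parameters listed above, and the depth sort performed by the window controller, can be sketched as follows. This is an illustrative data structure only; the field names and types are assumptions, not the actual hardware register layout of the descriptor.

```python
from dataclasses import dataclass

@dataclass
class WindowDescriptor:
    # Illustrative fields only; the real descriptor format is hardware-defined.
    pixel_format: str   # e.g. "RGB16", "CLUT8", "YUV422"
    alpha: int          # per-window alpha blend factor, 0-255
    x: int              # screen position of the window
    y: int
    mem_addr: int       # start address of window data in memory
    depth: int          # depth order; larger = closer to the viewer

# The window controller may sort descriptors so windows are processed
# back-most (smallest depth) to front-most:
windows = [
    WindowDescriptor("CLUT8", 255, 0, 0, 0x1000, 2),
    WindowDescriptor("RGB16", 128, 10, 20, 0x8000, 0),
    WindowDescriptor("YUV422", 255, 5, 5, 0x4000, 1),
]
back_to_front = sorted(windows, key=lambda w: w.depth)
```

in this sketch the back-to-front order comes out as RGB16, then YUV422, then CLUT8, matching the back-to-front compositing order described for the blending block.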
- the window controller 56 preferably manages both the video and graphics display pipelines.
- the window controller preferably accesses graphics window descriptors in memory through a direct memory access (DMA) engine 76 .
- the window controller may sort the window descriptors according to the relative depth of their corresponding windows on the display.
- the window controller preferably sends header information to the display engine at the beginning of each window on each scan line, and sends window header packets to the display engine as needed to display a window.
- the window controller preferably coordinates capture of non-passthrough video into memory, and transfer of video between memory and the video compositor.
- the display engine 58 preferably takes graphics information from memory and processes it for display.
- the display engine preferably converts the various formats of graphics data in the graphics windows into YUV component format, and blends the graphics windows to create blended graphics output having a composite alpha value that is based on alpha values for individual graphics windows, alpha values per pixel, or both.
- the display engine transfers the processed graphics information to memory buffers that are configured as line buffers.
- the buffer may include a frame buffer.
- the output of the display engine is transferred directly to a display or output block without being transferred to memory buffers.
- the video compositor 60 receives one or more types of data, such as blended graphics data, video window data, passthrough video data and background color data, and produces a blended video output.
- the video encoder 62 encodes the blended video output from the video compositor into any suitable display format such as composite NTSC, PAL, Y/C (S-video), SECAM or other signals that may include video information, graphics information, or a combination of video and graphics information.
- the video encoder converts the blended video output of the video compositor into serial digital video output using an on-chip or off-chip serializer that may encrypt the output.
- the graphics accelerator 64 preferably performs graphics operations that may require intensive CPU processing, such as operations on three dimensional graphics images.
- the graphics accelerator may be programmable.
- the audio engine 66 preferably supports applications that create and play audio locally within a set-top box and allow mixing of the locally created audio with audio from a digital audio source, such as MPEG or Dolby, and with digitized analog audio.
- the audio engine also preferably supports applications that capture digitized baseband audio via an audio capture port and store sounds in memory for later use, or that store audio to memory for temporary buffering in order to delay the audio for precise lip-syncing when frame-based video time correction is enabled.
- the graphics display system further includes an I/O bus 74 connected between the CPU 22 , memory 28 and one or more of a wide variety of peripheral devices, such as flash memory, ROM, MPEG decoders, cable modems or other devices.
- the on-chip I/O bus 74 of the present invention preferably eliminates the need for a separate interface connection, sometimes referred to in the art as a north bridge.
- the I/O bus preferably provides high speed access and data transfers between the CPU, the memory and the peripheral devices, and may be used to support the full complement of devices that may be used in a full featured set-top box or digital TV.
- the I/O bus is compatible with the 68000 bus definition, including both active DSACK and passive DSACK (e.g., ROM/flash devices), and it supports external bus masters and retry operations as both master and slave.
- the bus preferably supports any mix of 32-bit, 16-bit and 8-bit devices, and operates at a clock rate of 33 MHz.
- the clock rate is preferably asynchronous with (not synchronized with) the CPU clock to enable independent optimization of those subsystems.
- the graphics display system generally includes a graphics display pipeline 80 and a video display pipeline 82 .
- the graphics display pipeline preferably contains functional blocks, including window control block 84 , DMA (direct memory access) block 86 , FIFO (first-in-first-out memory) block 88 , graphics converter block 90 , color look up table (CLUT) block 92 , graphics blending block 94 , static random access memory (SRAM) block 96 , and filtering block 98 .
- the system preferably spatially processes the graphics data independently of the video data prior to blending.
- the window control block 84 obtains and stores graphics window descriptors from memory and uses the window descriptors to control the operation of the other blocks in the graphics display pipeline.
- the windows may be processed in any order. In the preferred embodiment, on each scan line, the system processes windows one at a time from back to front and from the left edge to the right edge of the window before proceeding to the next window. In an alternate embodiment, two or more graphics windows may be processed in parallel. In the parallel implementation, it is possible for all of the windows to be processed at once, with the entire scan line being processed left to right. Any number of other combinations may also be implemented, such as processing a set of windows at a lower level in parallel, left to right, followed by the processing of another set of windows in parallel at a higher level.
- the DMA block 86 retrieves data from memory 110 as needed to construct the various graphics windows according to addressing information provided by the window control block. Once the display of a window begins, the DMA block preferably retains any parameters that may be needed to continue to read required data from memory. Such parameters may include, for example, the current read address, the address of the start of the next lines, the number of bytes to read per line, and the pitch. Since the pipeline preferably includes a vertical filter block for anti-flutter and scaling purposes, the DMA block preferably accesses a set of adjacent display lines in the same frame, in both fields.
- the DMA preferably accesses both fields of the interlaced final display under certain conditions, such as when the vertical filter and scaling are enabled. In such a case, all lines, not just those from the current display field, are preferably read from memory and processed during every display field. In this embodiment, the effective rate of reading and processing graphics is equivalent to that of a non-interlaced display with a frame rate equal to the field rate of the interlaced display.
- the FIFO block 88 temporarily stores data read from the memory 110 by the DMA block 86 , and provides the data on demand to the graphics converter block 90 .
- the FIFO may also serve to bridge a boundary between different clock domains in the event that the memory and DMA operate under a clock frequency or phase that differs from the graphics converter block 90 and the graphics blending block 94 .
- the FIFO block is not needed. The FIFO block may be unnecessary, for example, if the graphics converter block processes data from memory at the rate that it is read from the memory and the memory and conversion functions are in the same clock domain.
- the graphics converter block 90 takes raw graphics data from the FIFO block and converts it to YUValpha (YUVa) format.
- Raw graphics data may include graphics data from memory that has not yet been processed by the display engine.
- One type of YUVa format that the system may use includes YUV 4:2:2 (i.e. two U and V samples for every four Y samples) plus an 8-bit alpha value for every pixel, which occupies overall 24 bits per pixel.
- Another suitable type of YUVa format includes YUV 4:4:4 plus the 8-bit alpha value per pixel, which occupies 32 bits per pixel.
- the graphics converter may convert the raw graphics data into a different format, such as RGBalpha.
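- the bit counts quoted above follow directly from the sampling ratios, assuming 8 bits per sample; a quick arithmetic check:

```python
def bits_per_pixel(y_samples, u_samples, v_samples,
                   group=4, sample_bits=8, alpha_bits=8):
    """Average bits per pixel for a YUV sampling group of `group` pixels,
    plus a per-pixel alpha value (8 bits assumed for each sample)."""
    luma_chroma = (y_samples + u_samples + v_samples) * sample_bits / group
    return luma_chroma + alpha_bits

yuva_422 = bits_per_pixel(4, 2, 2)  # YUV 4:2:2 plus alpha -> 24 bits per pixel
yuva_444 = bits_per_pixel(4, 4, 4)  # YUV 4:4:4 plus alpha -> 32 bits per pixel
```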
- the alpha value included in the YUVa output may depend on a number of factors, including alpha from chroma keying in which a transparent pixel has an alpha equal to zero, alpha per CLUT entry, alpha from Y (luma), or alpha per window where one alpha value characterizes all of the contents of a given window.
- the graphics converter block 90 preferably accesses the CLUT 92 during conversion of CLUT formatted raw graphics data.
- in one embodiment, there is only one CLUT.
- multiple CLUTs are used to process different graphics windows having graphics data with different CLUT formats.
- the CLUT may be rewritten by retrieving new CLUT data via the DMA block when required. In practice, it typically takes longer to rewrite the CLUT than the time available in a horizontal blanking interval, so the system preferably allows one horizontal line period to change the CLUT.
- Non-CLUT images may be displayed while the CLUT is being changed.
- the color space of the entries in the CLUT is preferably in YUV but may also be implemented in RGB.
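- conversion of CLUT-formatted data amounts to an indexed table lookup; a minimal sketch for the CLUT2 case (2-bit indices, four indices packed per byte), with made-up YUVa table entries:

```python
# Hypothetical 4-entry CLUT2 table; each entry is (Y, U, V, alpha).
# The values below are illustrative, not taken from the patent.
clut = [
    (16, 128, 128, 0),     # entry 0: black, fully transparent
    (235, 128, 128, 255),  # entry 1: white, opaque
    (81, 90, 240, 255),    # entry 2: approximately red, opaque
    (145, 54, 34, 255),    # entry 3: approximately green, opaque
]

def convert_clut2_byte(b):
    """Expand one CLUT2 byte (four packed 2-bit indices) into four YUVa pixels."""
    return [clut[(b >> shift) & 0x3] for shift in (6, 4, 2, 0)]

pixels = convert_clut2_byte(0b00011011)  # indices 0, 1, 2, 3 left to right
```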
- the graphics blending block 94 receives output from the graphics converter block 90 and preferably blends one window at a time along the entire width of one scan line, with the back-most graphics window being processed first.
- the blending block uses the output from the converter block to modify the contents of the SRAM 96 .
- the result of each pixel blend operation is a pixel in the SRAM that consists of the weighted sum of the various graphics layers up to and including the present one, and the appropriate alpha blend value for the video layers, taking into account the graphics layers up to and including the present one.
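- the per-pixel blend described above, accumulating each window back to front into the line buffer while tracking the residual alpha left for the video layers, can be sketched as below. Alphas are normalized to 0.0-1.0 for clarity; the hardware would use 8-bit fixed-point values, and only a single color component is shown.

```python
def blend_layer(line_buffer, video_alpha, window_pixels, window_alphas):
    """Blend one graphics window into the line buffer (back to front).

    line_buffer  - composited graphics so far, one value per pixel
    video_alpha  - residual weight still available to the video layers behind
    """
    for i, (p, a) in enumerate(zip(window_pixels, window_alphas)):
        # Weighted sum of this layer over the layers already composited.
        line_buffer[i] = a * p + (1.0 - a) * line_buffer[i]
        # Each opaque contribution reduces how much video shows through.
        video_alpha[i] *= (1.0 - a)
    return line_buffer, video_alpha

buf = [0.0, 0.0]   # empty line buffer (one component per pixel)
vid = [1.0, 1.0]   # video initially fully visible
blend_layer(buf, vid, [100.0, 200.0], [0.5, 1.0])
```

after the call, the half-transparent pixel contributes half its value and leaves half the video weight; the opaque pixel fully replaces the buffer contents and leaves no video weight.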
- the SRAM 96 is preferably configured as a set of graphics line buffers, where each line buffer corresponds to a single display line.
- the blending of graphics windows is preferably performed one graphics window at a time on the display line that is currently being composited into a line buffer. Once the display line in a line buffer has been completely composited so that all the graphics windows on that display line have been blended, the line buffer is made available to the filtering block 98 .
- the filtering block 98 preferably performs both anti-flutter filtering (AFF) and vertical sample rate conversion (SRC) using the same filter.
- This block takes input from the line buffers and performs finite impulse response polyphase filtering on the data. While anti-flutter filtering and vertical axis SRC are done in the vertical axis, there may be different functions, such as horizontal SRC or scaling that are performed in the horizontal axis.
- the filter takes input from only vertically adjacent pixels at one time. It multiplies each input pixel by a specified coefficient, and sums the results to produce the output.
- the polyphase action means that the coefficients, which are samples of an approximately continuous impulse response, may be selected from a different fractional-pixel phase of the impulse response every pixel.
- when the filter performs horizontal scaling, appropriate coefficients are selected for a finite impulse response polyphase filter to perform the horizontal scaling.
- both horizontal and vertical filtering and scaling can be performed.
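- a 4-tap polyphase filter of the kind described selects a coefficient set (a fractional-pixel phase of the impulse response) per output pixel and forms a weighted sum of four vertically adjacent input pixels. The sketch below assumes a tiny two-phase coefficient bank for illustration; a real coefficient memory would hold many phases of a designed impulse response.

```python
# Assumed coefficient memory: one 4-tap coefficient set per fractional phase.
# Phase 0 passes the center sample through; phase 1 averages the middle pair.
COEFFS = {
    0: [0.0, 1.0, 0.0, 0.0],
    1: [0.0, 0.5, 0.5, 0.0],
}

def polyphase_tap4(column, out_pos, scale):
    """Compute one vertically filtered output pixel from a column of inputs.

    `scale` > 1 downscales, < 1 upscales; the fractional part of the source
    position selects the phase (here crudely quantized to two phases).
    """
    src = out_pos * scale                 # position in input-line coordinates
    top = int(src) - 1                    # index of the first of the four taps
    phase = 1 if (src - int(src)) >= 0.5 else 0
    # Clamp tap indices at the image edges.
    taps = [column[min(max(top + k, 0), len(column) - 1)] for k in range(4)]
    return sum(c * t for c, t in zip(COEFFS[phase], taps))

centered = polyphase_tap4([10, 20, 30, 40], 1, 1.0)  # integer phase
between  = polyphase_tap4([10, 20, 30, 40], 1, 1.5)  # half-pixel phase
```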
- the video display pipeline 82 may include a FIFO block 100 , an SRAM block 102 , and a video scaler 104 .
- the video display pipeline portion of the architecture is similar to that of the graphics display pipeline, and it shares some elements with it.
- the video pipeline supports up to one scaled video window per scan line, one passthrough video window, and one background color, all of which are logically behind the set of graphics windows. The order of these windows, from back to front, is preferably fixed as background color, then passthrough video, then scaled video.
- the video windows are preferably in YUV format, although they may be in either 4:2:2 or 4:2:0 variants or other variants of YUV, or alternatively in other formats such as RGB.
- the scaled video window may be scaled up in both directions by the display engine, with a factor that can range up to four in the preferred embodiment. Unlike graphics, the system generally does not have to correct for square pixel aspect ratio with video.
- the scaled video window may be alpha blended into passthrough video and a background color, preferably using a constant alpha value for each video signal.
- the FIFO block 100 temporarily stores captured video windows for transfer to the video scaler 104 .
- the video scaler preferably includes a filter that performs both upscaling and downscaling.
- the scaler function may be a set of two polyphase SRC functions, one for each dimension.
- the vertical SRC may be a four-tap filter with programmable coefficients in a fashion similar to the vertical filter in the graphics pipeline, and the horizontal filter may use an 8-tap SRC, also with programmable coefficients.
- a shorter horizontal filter is used, such as a 4-tap horizontal SRC for the video upscaler. Since the same filter is preferably used for downscaling, it may be desirable to use more taps than are strictly needed for upscaling to accommodate low pass filtering for higher quality downscaling.
- the video pipeline uses a separate window controller and DMA. In an alternate embodiment, these elements may be shared.
- the FIFOs are logically separate but may be implemented in a common SRAM.
- the video compositor block 108 blends the output of the graphics display pipeline, the video display pipeline, and passthrough video.
- the background color is preferably blended as the lowest layer on the display, followed by passthrough video, the video window and blended graphics.
- the video compositor composites windows directly to the screen line-by-line at the time the screen is displayed, thereby conserving memory and bandwidth.
- the video compositor may include, but preferably does not include, display frame buffers, double-buffered displays, off-screen bit maps, or blitters.
- the display engine 58 preferably includes graphics FIFO 132 , graphics converter 134 , RGB-to-YUV converter 136 , YUV-444-to-YUV422 converter 138 and graphics blender 140 .
- the graphics FIFO 132 receives raw graphics data from memory through a graphics DMA 124 and passes it to the graphics converter 134 , which preferably converts the raw graphics data into YUV 4:4:4 format or other suitable format.
- a window controller 122 controls the transfer of raw graphics data from memory to the graphics converter 134 .
- the graphics converter preferably accesses the RGB-to-YUV converter 136 during conversion of RGB formatted data and the graphics CLUT 146 during conversion of CLUT formatted data.
- the RGB-to-YUV converter is preferably a color space converter that converts raw graphics data in RGB space to graphics data in YUV space.
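- a color space converter of this kind typically applies the ITU-R BT.601 matrix; the sketch below shows the standard 8-bit studio-range form of that conversion, not necessarily the fixed-point arithmetic used on chip:

```python
def rgb_to_yuv_601(r, g, b):
    """ITU-R BT.601 RGB -> YCbCr, 8-bit studio range
    (Y in 16-235, Cb/Cr in 16-240, inputs 0-255)."""
    y  =  16 + ( 65.738 * r + 129.057 * g +  25.064 * b) / 256
    cb = 128 + (-37.945 * r -  74.494 * g + 112.439 * b) / 256
    cr = 128 + (112.439 * r -  94.154 * g -  18.285 * b) / 256
    return round(y), round(cb), round(cr)

white = rgb_to_yuv_601(255, 255, 255)  # (235, 128, 128)
black = rgb_to_yuv_601(0, 0, 0)        # (16, 128, 128)
```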
- the graphics CLUT 146 preferably includes a CLUT 150 , which stores pixel values for CLUT-formatted graphics data, and a CLUT controller 152 , which controls operation of the CLUT.
- the YUV444-to-YUV422 converter 138 converts graphics data from YUV 4:4:4 format to YUV 4:2:2 format.
- the term YUV 4:4:4 means, as is conventional, that for every four horizontally adjacent samples, there are four Y values, four U values, and four V values; the term YUV 4:2:2 means, as is conventional, that for every four samples, there are four Y values, two U values and two V values.
- the YUV444-to-YUV422 converter 138 is preferably a UV decimator that sub-samples U and V from four samples per every four samples of Y to two samples per every four samples of Y.
- Graphics data in YUV 4:4:4 format and YUV 4:2:2 format preferably also includes four alpha values for every four samples.
- the YUV444-to-YUV422 converter may also perform low-pass filtering of UV and alpha. For example, if the graphics data with YUV 4:4:4 format has higher than desired frequency content, a low pass filter in the YUV444-to-YUV422 converter may be turned on to filter out high frequency components in the U and V signals, and to perform matched filtering of the alpha values.
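As a sketch of the decimation described above, the following hypothetical Python routine converts one line of YUV 4:4:4 samples to YUV 4:2:2; a simple 2-tap averaging filter stands in for the low-pass filtering of U and V (the hardware's actual filter taps are not specified in the text):

```python
def yuv444_to_yuv422(y, u, v, lowpass=True):
    """Decimate U and V from one sample per pixel (4:4:4) to one
    sample per two pixels (4:2:2). Y is passed through unchanged.

    With lowpass enabled, each retained chroma sample is the average
    of a horizontally adjacent pair (a simple 2-tap low-pass filter,
    chosen here for illustration); otherwise every other chroma
    sample is dropped directly.
    """
    assert len(y) == len(u) == len(v) and len(y) % 2 == 0
    if lowpass:
        u2 = [(u[i] + u[i + 1]) // 2 for i in range(0, len(u), 2)]
        v2 = [(v[i] + v[i + 1]) // 2 for i in range(0, len(v), 2)]
    else:
        u2 = u[0::2]
        v2 = v[0::2]
    return y, u2, v2
```

Note that either path yields two U and two V samples per four Y samples, matching the YUV 4:2:2 definition above.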
- the graphics blender 140 blends the YUV 4:2:2 signals together, preferably one line at a time using alpha blending, to create a single line of graphics from all of the graphics windows on the current display line.
- the filter 170 preferably includes a single 4-tap vertical polyphase graphics filter 172 , and a vertical coefficient memory 174 .
- the graphics filter may perform both anti-flutter filtering and vertical scaling.
- the filter preferably receives graphics data from the display engine through a set of seven line buffers 59 , where four of the seven line buffers preferably provide data to the taps of the graphics filter at any given time.
- the system may receive video input that includes one decoded MPEG video in ITU-R 656 format and one analog video signal.
- the ITU-R 656 decoder 160 processes the decoded MPEG video to extract timing and data information.
- an on-chip video decoder (VDEC) 50 converts the analog video signal to a digitized video signal.
- an external VDEC such as the Brooktree BT829 decoder converts the analog video into digitized analog video and provides the digitized video to the system as bypass video 130 .
- Analog video or MPEG video may be provided to the video compositor as passthrough video. Alternatively, either type of video may be captured into memory and provided to the video compositor as a scaled video window.
- the digitized analog video signals preferably have a pixel sample rate of 13.5 MHz, contain a 16 bit data stream in YUV 4:2:2 format, and include timing signals such as top field and vertical sync signals.
- the VDEC 50 includes a time base corrector (TBC) 72 comprising a TBC controller 164 and a FIFO 166 .
- the digitized analog video is corrected in the time domain in the TBC 72 before being blended with other graphics and video sources.
- the video input, which runs nominally at 13.5 MHz, is synchronized at the output with the display clock, which also runs nominally at 13.5 MHz; the two frequencies, while both nominally 13.5 MHz, are not necessarily exactly the same.
- the video output is preferably offset from the video input by a half scan line per field.
- a capture FIFO 158 and a capture DMA 154 preferably capture the digitized analog video signals and MPEG video.
- the SDRAM controller 126 provides captured video frames to the external SDRAM.
- a video DMA 144 transfers the captured video frames to a video FIFO 148 from the external SDRAM.
- the digitized analog video signals and MPEG video are preferably scaled down to less than 100% prior to being captured and are scaled up to more than 100% after being captured.
- the video scaler 52 is shared by both upscale and downscale operations.
- the video scaler preferably includes a multiplexer 176 , a set of line buffers 178 , a horizontal and vertical coefficient memory 180 and a scaler engine 182 .
- the scaler engine 182 preferably includes a set of two polyphase filters, one for each of horizontal and vertical dimensions.
- the vertical filter preferably includes a four-tap filter with programmable filter coefficients.
- the horizontal filter preferably includes an eight-tap filter with programmable filter coefficients.
- three line buffers 178 supply video signals to the scaler engine 182 .
- the three line buffers 178 preferably are 720 ⁇ 16 two port SRAM.
- the three line buffers 178 may provide video signals to three of the four taps of the four-tap vertical filter while the video input provides the video signal directly to the fourth tap.
- a shift register having eight cells in series may be used to provide inputs to the eight taps of the horizontal polyphase filter, each cell providing an input to one of the eight taps.
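The tap arithmetic of such a filter can be sketched as follows for the vertical case; the function and its coefficient values are illustrative assumptions, not the contents of the hardware's coefficient memory:

```python
def vertical_polyphase(lines, coeffs):
    """Apply one phase of a 4-tap vertical polyphase filter across
    four scan lines (the filter taps).

    `lines` is a list of four equal-length scan lines supplied by the
    line buffers; `coeffs` is the coefficient set for the phase chosen
    for the current output line. The coefficients used in practice
    would come from the programmable coefficient memory.
    """
    assert len(lines) == 4 and len(coeffs) == 4
    width = len(lines[0])
    out = []
    for x in range(width):
        # weighted sum of the four vertically adjacent samples
        acc = sum(c * line[x] for c, line in zip(coeffs, lines))
        out.append(max(0, min(255, round(acc))))  # clamp to 8-bit range
    return out
```

A uniform coefficient set of 0.25 per tap, for example, reduces the filter to a four-line average.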
- the multiplexer 168 preferably provides a video signal to the video scaler prior to capture.
- the video FIFO 148 provides a video signal to the video scaler after capture. Since the video scaler 52 is shared between downscaling and upscaling filtering, downscaling and upscaling operations are not performed at the same time in this particular embodiment.
- the video compositor 60 blends signals from up to four different sources, which may include blended graphics from the filter 170 , video from a video FIFO 148 , passthrough video from a multiplexer 168 , and background color from a background color module 184 .
- various numbers of signals may be composited, including, for example, two or more video windows.
- the video compositor preferably provides final output signal to the data size converter 190 , which serializes the 16-bit word sample into an 8-bit word sample at twice the clock frequency, and provides the 8-bit word sample to the video encoder 62 .
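The serialization step of the data size converter can be modeled as splitting each 16-bit sample into two consecutive 8-bit samples emitted at twice the clock rate; the high-byte-first ordering in this sketch is an assumption:

```python
def serialize_16_to_8(samples):
    """Split each 16-bit output sample into two 8-bit samples, high
    byte first (byte order assumed for illustration). The resulting
    stream has twice as many words, corresponding to the doubled
    clock frequency at the video encoder input.
    """
    out = []
    for s in samples:
        out.append((s >> 8) & 0xFF)  # high byte
        out.append(s & 0xFF)         # low byte
    return out
```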
- the video encoder 62 encodes the provided YUV 4:2:2 video data and outputs it as an output of the graphics display system in any desired analog or digital format.
- the artist or application developer has a need to include rectangular objects on the screen, with the objects having a solid color and a uniform alpha blend factor (alpha value). These regions (or objects) may be rendered with other displayed objects on top of them or beneath them. In conventional graphics devices, such solid color objects are rendered using the number of distinct pixels required to fill the region. It may be advantageous in terms of memory size and memory bandwidth to render such objects on the display directly, without expending the memory size or bandwidth required in conventional approaches.
- video and graphics are displayed on regions referred to as windows.
- Each window is preferably a rectangular area of screen bounded by starting and ending display lines and starting and ending pixels on each display line.
- Raw graphics data to be processed and displayed on a screen preferably resides in the external memory.
- a display engine converts raw graphics data into a pixel map with a format that is suitable for display.
- the display engine implements graphics windows of many types directly in hardware.
- Each of the graphics windows on the screen has its own value of various parameters, such as location on the screen, starting address in memory, depth order on the screen, pixel color type, etc.
- the graphics windows may be displayed such that they may overlap or cover each other, with arbitrary spatial relationships.
- a data structure called a window descriptor contains parameters that describe and control each graphics window.
- the window descriptors are preferably data structures for representing graphics images arranged in logical surfaces, or windows, for display.
- Each data structure preferably includes a field indicating the relative depth of the logical surface on the display, a field indicating the alpha value for the graphics in the surface, a field indicating the location of the logical surface on the display, and a field indicating the location in memory where graphics image data for the logical surface is stored.
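A minimal model of such a data structure, with hypothetical field names chosen to mirror the fields just listed, might look like:

```python
from dataclasses import dataclass

@dataclass
class WindowDescriptor:
    """One logical surface (graphics window). Field names are
    illustrative; they follow the four fields called out in the text,
    plus location extents for concreteness.
    """
    layer: int       # relative depth of the surface on the display
    alpha: int       # alpha value for the graphics in the surface, 0-255
    x_start: int     # location of the surface on the display
    y_start: int
    y_end: int
    mem_start: int   # location in memory of the graphics image data
```

A window descriptor list for a screen would then simply be a Python list of such objects.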
- All of the elements that make up any given graphics display screen are preferably specified by combining all of the window descriptors of the graphics windows that make up the screen into a window descriptor list.
- the display engine constructs the display image from the current window descriptor list.
- the display engine composites all of the graphics windows in the current window descriptor list into a complete screen image in accordance with the parameters in the window descriptors and the raw graphics data associated with the graphics windows.
- a graphics window with a solid color and fixed translucency may be described entirely in a window descriptor having appropriate parameters. These parameters describe the color and the translucency (alpha) just as if it were a normal graphics window. The only difference is that there is no pixel map associated with this window descriptor.
- the display engine generates a pixel map accordingly and performs the blending in real time when the graphics window is to be displayed.
- a window consisting of a rectangular object having a constant color and a constant alpha value may be created on a screen by including a window descriptor in the window descriptor list.
- the window descriptor indicates the color and the alpha value of the window, and a null pixel format, i.e., no pixel values are to be read from memory.
- Other parameters indicate the window size and location on the screen, allowing the creation of solid color windows with any size and location.
- Since no pixel map is required, memory bandwidth requirements are reduced and a window of any size may be displayed.
- Another type of graphics window that the window descriptors preferably describe is an alpha-only type window.
- the alpha-only type windows preferably use a constant color and preferably have graphics data with 2, 4 or 8 bits per pixel.
- an alpha-4 format may be an alpha-only format used in one of the alpha-only type windows.
- the alpha-4 format specifies the alpha-only type window with alpha blend values having four bits per pixel.
- the alpha-only type window may be particularly useful for displaying anti-aliased text.
- a window controller preferably controls transfer of graphics display information in the window descriptors to the display engine.
- the window controller has internal memory to store eight window descriptors. In other embodiments, the window controller may have memory allocated to store more or fewer window descriptors.
- the window controller preferably reads the window descriptors from external memory via a direct memory access (DMA) module.
- the DMA module may be shared by both paths of the display pipeline as well as some of the control logic, such as the window controller and the CLUT.
- the DMA module preferably has three channels: window descriptor read, graphics data read and CLUT read. The graphics pipeline and the video pipeline preferably use separate DMA modules. Each channel has externally accessible registers to control the start address and the number of words to read.
- Once the DMA module has completed a transfer, as indicated by its start and length registers, it preferably activates a signal that indicates the transfer is complete. This allows the module that sets up operations for that channel to begin setting up another transfer.
- In the case of graphics data reads, the window controller preferably sets up a transfer of one line of graphics pixels and then waits for the DMA controller to indicate that the transfer of that line is complete before setting up the transfer of the next line, or of a line of another window.
- each window descriptor preferably includes four 32-bit words (labeled Word 0 through Word 3 ) containing graphics window display information.
- Word 0 preferably includes a window operation parameter, a window format parameter and a window memory start address.
- the window operation parameter preferably is a 2-bit field that indicates which operation is to be performed with the window descriptor.
- when the window operation parameter is 00b, the window descriptor performs a normal display operation, and when it is 01b, the window descriptor performs graphics color look-up table (“CLUT”) re-loading.
- the window operation parameter of 10b is preferably not used.
- the window operation parameter of 11b preferably indicates that the window descriptor is the last of a sequence of window descriptors in memory.
- the window format parameter preferably is a 4-bit field that indicates a data format of the graphics data to be displayed in the graphics window.
- the data formats corresponding to the window format parameter are described in Table 1 below.
- Table 1:

| win_format | Format | Data Format Description |
|---|---|---|
| 0000b | RGB16 | 5-bit red, 6-bit green, 5-bit blue |
| 0001b | RGB15+1 | RGB15 plus one bit alpha (keying) |
| 0010b | RGBA4444 | 4-bit red, green, blue, alpha |
| 0100b | CLUT2 | 2-bit CLUT with YUV and alpha in table |
| 0101b | CLUT4 | 4-bit CLUT with YUV and alpha in table |
| 0110b | CLUT8 | 8-bit CLUT with YUV and alpha in table |
| 0111b | ACLUT16 | 8-bit alpha, 8-bit CLUT index |
| 1000b | ALPHA0 | Single win_alpha and single RGB win_color |
| 1001b | ALPHA2 | 2-bit alpha with single RGB win_color |
| 1010b | ALPHA4 | 4-bit alpha with single RGB win_color |
| 1011b | ALPHA8 | 8-bit alpha with single RGB win_color |
| 1100b | YUV422 | U and V are sampled at half the rate of Y |
| 1111b | RESERV | Reserved |
- the window memory start address preferably is a 26-bit data field that indicates a starting memory address of the graphics data of the graphics window to be displayed on the screen.
- the window memory start address points to the first address in the corresponding external SDRAM which is accessed to display data on the graphics window defined by the window descriptor.
- the window memory start address indicates a starting memory address of data to be loaded into the graphics CLUT.
- Word 1 in the window descriptor preferably includes a window layer parameter, a window memory pitch value and a window color value.
- the window layer parameter is preferably a 4-bit data indicating the order of layers of graphics windows. Some of the graphics windows may be partially or completely stacked on top of each other, and the window layer parameter indicates the stacking order.
- the window layer parameter preferably indicates where in the stack the graphics window defined by the window descriptor should be placed.
- a graphics window with a window layer parameter of 0000b is defined as the bottom most layer, and a graphics window with a window layer parameter of 1111b is defined as the top most layer.
- the window memory pitch value is preferably a 12-bit data field indicating the pitch of window memory addressing. Pitch refers to the difference in memory address between two pixels that are vertically adjacent within a window.
- the window color value preferably is a 16-bit RGB color, which is applied as a single color to the entire graphics window when the window format parameter is 1000b, 1001b, 1010b, or 1011b. Every pixel in the window preferably has the color specified by the window color value, while the alpha value is determined per pixel and per window as specified in the window descriptor and the pixel format.
- the engine preferably uses the window color value to implement a solid surface.
- Word 2 in the window descriptor preferably includes an alpha type, a window alpha value, a window y-end value and a window y-start value.
- the word 2 preferably also includes two bits reserved for future definition, such as high definition television (HD) applications.
- the alpha type is preferably a 2-bit data field that indicates the method of selecting an alpha value for the graphics window.
- the alpha type of 00b indicates that the alpha value is to be selected from chroma keying. Chroma keying determines whether each pixel is opaque or transparent based on the color of the pixel.
- Opaque pixels are preferably considered to have an alpha value of 1.0, and transparent pixels have an alpha value of 0, both on a scale of 0 to 1.
- Chroma keying compares the color of each pixel to a reference color or to a range of possible colors; if the pixel matches the reference color, or if its color falls within the specified range of colors, then the pixel is determined to be transparent. Otherwise it is determined to be opaque.
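A simple sketch of this per-pixel decision, assuming a component-wise color range test (the hardware's exact matching rule is not detailed in the text; a single reference color is the degenerate case where the range bounds are equal):

```python
def chroma_key_alpha(pixel, key_min, key_max):
    """Return the alpha value selected by chroma keying: 0.0
    (transparent) if every component of the pixel's color falls within
    the keying range [key_min, key_max], else 1.0 (opaque).

    `pixel`, `key_min` and `key_max` are 3-tuples of color components.
    """
    transparent = all(lo <= c <= hi
                      for c, lo, hi in zip(pixel, key_min, key_max))
    return 0.0 if transparent else 1.0
```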
- the alpha type of 01b indicates that the alpha value should be derived from the graphics CLUT, using the alpha value in each entry of the CLUT.
- the alpha type of 10b indicates that the alpha value is to be derived from the luminance Y.
- the Y value that results from conversion of the pixel color to the YUV color space (if the pixel color is not already in the YUV color space) is used as the alpha value for the pixel.
- the alpha type of 11b indicates that only a single alpha value is to be applied to the entire graphics window. The single alpha value is preferably included as the window alpha value next.
- the window alpha value preferably is an 8-bit alpha value applied to the entire graphics window.
- the effective alpha value for each pixel in the window is the product of the window alpha and the alpha value determined for each pixel. For example, if the window alpha value is 0.5 on a scale of 0 to 1, coded as 0x80, then the effective alpha value of every pixel in the window is one-half of the value encoded in or for the pixel itself. If the window format parameter is 1000b, i.e., a single alpha value is to be applied to the graphics window, then the per-pixel alpha value is treated as if it is 1.0, and the effective alpha value is equal to the window alpha value.
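The effective-alpha arithmetic can be illustrated with 8-bit coded values, where 0xFF approximates 1.0; the integer rounding in this sketch is an assumption:

```python
def effective_alpha(window_alpha, pixel_alpha=None):
    """Effective per-pixel alpha = window alpha x per-pixel alpha.

    Both values are coded 0-255, with 0xFF approximating 1.0. When the
    window format applies a single alpha to the whole window (format
    1000b), the per-pixel alpha is treated as 1.0, so the effective
    alpha equals the window alpha value.
    """
    if pixel_alpha is None:      # single-alpha window format
        pixel_alpha = 0xFF
    return (window_alpha * pixel_alpha) // 0xFF
```

For example, a window alpha of 0x80 (about 0.5) halves every per-pixel alpha in the window.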
- the window y-end value preferably is a 10-bit data field that indicates the ending display line of the graphics window on the screen.
- the graphics window defined by the window descriptor ends at the display line indicated by the window y-end value.
- the window y-start value preferably is a 10-bit data field that indicates a starting display line of the graphics window on a screen.
- the graphics window defined by the window descriptor begins at the display line indicated in the window y-start value.
- Word 3 in the window descriptor preferably includes a window filter enable parameter, a blank start pixel value, a window x-size value and a window x-start value.
- the word 3 includes two bits reserved for future definition, such as HD applications. Five bits of the 32-bit word 3 are not used.
- the window filter enable parameter is a 1-bit field that indicates whether low pass filtering is to be enabled during YUV 4:4:4 to YUV 4:2:2 conversion.
- the blank start pixel value preferably is a 4-bit parameter indicating a number of blank pixels at the beginning of each display line.
- the blank start pixel value preferably signifies the number of pixels of the first word read from memory, at the beginning of the corresponding graphics window, to be discarded.
- This field indicates the number of pixels in the first word of data read from memory that are not displayed. For example, if memory words are 32 bits wide and the pixels are 4 bits each, there are 8 possible first pixels in the first word. Using this field, 0 to 7 pixels may be skipped, making the 1st to the 8th pixel in the word appear as the first pixel, respectively.
- the blank start pixel value allows graphics windows to have any horizontal starting position on the screen, and may be used during soft horizontal scrolling of a graphics window.
- the window x-size value preferably is a 10-bit data field that indicates the size of a graphics window in the x direction, i.e., horizontal direction.
- the window x-size value preferably indicates the number of pixels of a graphics window in a display line.
- the window x-start value preferably is a 10-bit data field that indicates a starting pixel of the graphics window on a display line.
- the graphics window defined by the window descriptor preferably begins at the pixel indicated by the window x-start value of each display line. With the window x-start value, any pixel of a given display line can be chosen to start painting the graphics window. Therefore, there is no need to fill the pixels on the screen preceding the graphics window display area with black.
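Putting the four words together, a hypothetical packing routine might look like the following; the field widths follow the text (2+4+26 bits in Word 0, 4+12+16 in Word 1, 2+8+10+10 in Word 2, 1+4+10+10 in Word 3), but the bit offsets within each word are assumptions for illustration:

```python
def pack_window_descriptor(op, fmt, mem_start,
                           layer, pitch, color,
                           alpha_type, win_alpha, y_end, y_start,
                           filt_en, blank_start, x_size, x_start):
    """Pack window descriptor fields into four 32-bit words.

    Reserved/unused bits in Words 2 and 3 are left as zero. Field
    placement within each word is assumed, since the text specifies
    widths but not bit positions.
    """
    w0 = (op & 0x3) << 30 | (fmt & 0xF) << 26 | (mem_start & 0x3FFFFFF)
    w1 = (layer & 0xF) << 28 | (pitch & 0xFFF) << 16 | (color & 0xFFFF)
    w2 = ((alpha_type & 0x3) << 28 | (win_alpha & 0xFF) << 20
          | (y_end & 0x3FF) << 10 | (y_start & 0x3FF))
    w3 = ((filt_en & 0x1) << 24 | (blank_start & 0xF) << 20
          | (x_size & 0x3FF) << 10 | (x_start & 0x3FF))
    return [w0, w1, w2, w3]
```

For instance, a solid-color window (format 1000b) needs only these four words, with the window memory start address unused.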
- a FIFO in the graphics display path accepts raw graphics data as the raw graphics data is read from memory, at the full memory data rate using a clock of the memory controller.
- the FIFO provides this data, initially stored in an external memory, to subsequent blocks in the graphics pipeline.
- the system preferably processes graphics images for display by organizing the graphics images into windows in which the graphics images appear on the screen, obtaining data that describes the windows, sorting the data according to the depth of the window on the display, transferring graphics images from memory, and blending the graphics images using alpha values associated with the graphics images.
- a packet of control information called a header packet is passed from the window controller to the display engine. All of the required control information from the window controller preferably is conveyed to the display engine such that all of the relevant variables from the window controller are properly controlled in a timely fashion and such that the control is not dependent on variations in delays or data rates between the window controller and the display engine.
- a header packet preferably indicates the start of graphics data for one graphics window.
- the graphics data for that graphics window continues until it is completed without requiring a transfer of another header packet.
- a new header packet is preferably placed in the FIFO when another window is to start.
- the header packets may be transferred according to the order of the corresponding window descriptors in the window descriptor lists.
- windows may be specified to overlap one another. At the same time, windows may start and end on any line, and there may be many windows visible on any one line. There are a large number of possible combinations of window starting and ending locations along vertical and horizontal axes and depth order locations.
- the system preferably indicates the depth order of all windows in the window descriptor list and implements the depth ordering correctly while accounting for all windows.
- Each window descriptor preferably includes a parameter indicating the depth location of the associated window.
- the range that is allowed for this parameter can be defined to be almost any useful value.
- the window descriptors are ordered in the window descriptor list in order of the first display scan line where the window appears. For example, if window A spans lines 10 to 20, window B spans lines 12 to 18, and window C spans lines 5 to 20, the order of these descriptors in the list would be {C, A, B}.
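The list ordering in the example above amounts to a sort on each window's starting display line; a sketch, using (name, y_start, y_end) tuples for illustration:

```python
def build_descriptor_list(windows):
    """Order window descriptors by the first display line on which
    each window appears (its y-start value). `windows` is a list of
    (name, y_start, y_end) tuples standing in for full descriptors.
    """
    return [name for name, y_start, _ in sorted(windows, key=lambda w: w[1])]
```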
- on-chip memory capable of storing a number of window descriptors.
- this memory can store up to 8 window descriptors on-chip, however the size of this memory may be made larger or smaller without loss of generality.
- Window descriptors are read from main memory into the on-chip descriptor memory in order from the start of the list, and stopping when the on-chip memory is full or when the most recently read descriptor describes a window that is not yet visible, i.e., its starting line is on a line that has a higher number than the line currently being constructed.
- a window descriptor may be cast out of the on-chip memory and the next descriptor in the list may be read from main memory.
- the order of the window descriptors in the on-chip memory bears no particular relation to the depth order of the windows on the screen.
- the hardware that controls the compositing of windows builds up the display in layers, starting from the back-most layer.
- the back most layer is layer 0 .
- the hardware performs a quick search of the back-most window descriptor that has not yet been composited, regardless of its location in the on-chip descriptor memory. In the preferred embodiment, this search is performed as follows:
- All 8 window descriptors are stored on chip in such a way that the depth order numbers of all of them are available simultaneously. While the depth numbers in the window descriptors are 4-bit numbers, representing 0 to 15, the on-chip memory has storage for 5 bits for the depth number. Initially the fifth bit for each descriptor is set to 0.
- the depth order values are compared in a hierarchy of pair-wise comparisons, and the lower of the two depth numbers in each comparison wins the comparison. That is, at the first stage of the test, descriptor pairs {0, 1}, {2, 3}, {4, 5}, and {6, 7} are compared, where {0-7} represent the eight descriptors stored in the on-chip memory. This results in four depth numbers with associated descriptor numbers. At the next stage, two pair-wise comparisons compare {(0, 1), (2, 3)} and {(4, 5), (6, 7)}.
- At the final stage, one pair-wise comparison finds the smallest depth number of all, and its associated descriptor number. This number points to the descriptor in the on-chip memory with the lowest depth number, and therefore the greatest depth, and this descriptor is used first to render the associated window on the screen.
- the fifth bit of the depth number in the on-chip memory is set to 1, thereby ensuring that the depth number is greater than 15; as a result, this descriptor will preferably never again be found to be the back-most window until all windows have been rendered on this scan line, preventing this window from being rendered twice.
- the fifth bits of all the on-chip depth numbers are again set to 0; descriptors that describe windows that are no longer visible on the screen are cast out of the on-chip memory; new descriptors are read from memory as required (that is, if all windows in the on-chip memory are visible, the next descriptor is read from memory, and this repeats until the most recently read descriptor is not yet visible on the screen), and the process of finding the back most descriptor and rendering windows onto the screen repeats.
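The per-line search-and-mark loop can be modeled compactly; in this sketch, Python's min() stands in for the hierarchy of pair-wise comparisons, and adding 16 models setting the fifth bit of a composited descriptor's depth number:

```python
def compositing_order(depths):
    """Return the order in which on-chip descriptors are composited
    on one scan line, back-most first.

    `depths` maps descriptor slot number -> 4-bit depth number (0-15).
    After a descriptor wins the back-most search, its fifth bit is set
    (modeled here as adding 16), so its depth value exceeds 15 and it
    can never win again on this line.
    """
    work = dict(depths)   # working copy; fifth bits cleared at line start
    order = []
    for _ in range(len(work)):
        # pairwise-minimum search over all stored descriptors
        slot = min(work, key=lambda s: work[s])
        order.append(slot)
        work[slot] += 16  # set the fifth bit: depth now > 15
    return order
```

Note that the order of descriptors in the on-chip memory plays no role; only the depth numbers decide the compositing order.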
- window descriptors are preferably sorted by the window controller and used to transfer graphics data to the display engine.
- Each of the window descriptors, including window descriptor 0 through window descriptor 7 (300a-h), preferably contains a window layer parameter.
- each window descriptor is preferably associated with a window line done flag indicating that the window descriptor has been processed on a current display line.
- the window controller preferably performs window sorting at each display line using the window layer parameters and the window line done flags.
- the window controller preferably places the graphics window that corresponds to the window descriptor with the smallest window layer parameter at the bottom, while placing the graphics window that corresponds to the window descriptor with the largest window layer parameter at the top.
- the window controller preferably transfers the graphics data for the bottom-most graphics window to be processed first.
- the window parameters of the bottom-most window are composed into a header packet and written to the graphics FIFO.
- the DMA engine preferably sends a request to the memory controller to read the corresponding graphics data for this window and send the graphics data to the graphics FIFO.
- the graphics FIFO is then read by the display engine to compose a display line, which is then written to graphics line buffers.
- the window line done flag is preferably set true whenever the window surface has been processed on the current display line.
- the window line done flag and the window layer parameter may be concatenated together for sorting.
- the window line done flag is added to the window layer parameter as the most significant bit during sorting such that ⁇ window line done flag[4], window layer parameter[3:0] ⁇ is a five bit binary number, a window layer value, with window line done flag as the most significant bit.
- the window controller preferably selects a window descriptor with the smallest window layer value to be processed. Since the window line done flag is preferably the most significant bit of the window layer value, any window descriptor with this flag set, i.e., any window that has been processed on the current display line, will have a higher window layer value than any of the other window descriptors that have not yet been processed on the current display line.
- the window line done flag associated with that particular window descriptor is preferably set high, signifying that the particular window descriptor has been processed for the current display line.
- a sorter 304 preferably sorts all eight window descriptors after any window descriptor is processed.
- the sorting may be implemented using binary tree sorting or any other suitable sorting algorithm.
- In binary tree sorting of eight window descriptors, the window layer values of four pairs of window descriptors are compared at a first level, using four comparators, to choose the window descriptor that corresponds to the lower window in each pair.
- two comparators are used to select the window descriptor that corresponds to the bottom most graphics window in each of two pairs.
- the bottom-most graphics windows from each of the two pairs are compared against each other preferably using only one comparator to select the bottom window.
- a multiplexer 302 preferably multiplexes parameters from the window descriptors.
- the output of the sorter, i.e., the window selected to be the bottom-most, is used to select the window parameters to be sent to a direct memory access (“DMA”) module 306, to be packaged in a header packet and sent to a graphics FIFO 308.
- the display engine preferably reads the header packet in the graphics FIFO and processes the raw graphics data based on information contained in the header packet.
- the header packet preferably includes a first header word and a second header word.
- Corresponding graphics data is preferably transferred as graphics data words.
- Each of the first header word, the second header word and the graphics data words preferably includes 32 bits of information plus a data type bit.
- the first header word preferably includes a 1-bit data type, a 4-bit graphics type, a 1-bit first window parameter, a 1-bit top/bottom parameter, a 2-bit alpha type, an 8-bit window alpha value and a 16-bit window color value.
- Table 2 shows contents of the first header word.
- the 1-bit data type preferably indicates whether a 33-bit word in the FIFO is a header word or a graphics data word.
- a data type of 1 indicates that the associated 33-bit word is a header word while the data type of 0 indicates that the associated 33-bit word is a graphics data word.
- the graphics type indicates the data format of the graphics data to be displayed in the graphics window, similar to the window format parameter in the word 0 of the window descriptor, which is described in Table 1 above. In the preferred embodiment, when the graphics type is 1111b, there is no window on the current display line, indicating that the current display line is empty.
- the first window parameter of the first header word preferably indicates whether the window associated with that first header word is a first window on a new display line.
- the top/bottom parameter preferably indicates whether the current display line indicated in the first header word is at the top or the bottom edges of the window.
- the alpha type preferably indicates a method of selecting an alpha value individually for each pixel in the window similar to the alpha type in the word 2 of the window descriptor.
- the window alpha value preferably is an alpha value to be applied to the window as a whole and is similar to the window alpha value in the word 2 of the window descriptor.
- the window color value preferably is the color of the window in 16-bit RGB format and is similar to the window color value in the word 1 of the window descriptor.
- the second header word preferably includes the 1-bit data type, a 4-bit blank pixel count, a 10-bit left edge value, a 1-bit filter enable parameter and a 10-bit window size value.
- Table 3 shows contents of the second header word in the preferred embodiment.
- the second header word preferably starts with the data type indicating whether the second header word is a header word or a graphics data word.
- the blank pixel count preferably indicates a number of blank pixels at a left edge of the window and is similar to the blank start pixel value in the word 3 of the window descriptor.
- the left edge preferably indicates a starting location of the window on a scan line, and is similar to the window x-start value in the word 3 of the window descriptor.
- the filter enable parameter preferably enables a filter during a conversion of graphics data from a YUV 4:4:4 format to a YUV 4:2:2 format and is similar to the window filter enable parameter in word 3 of the window descriptor.
- Some YUV 4:4:4 data may contain higher frequency content than others, which may be filtered by enabling a low pass filter during a conversion to the YUV 4:2:2 format.
- the window size value preferably indicates the actual horizontal size of the window and is similar to the window x-size value in word 3 of the window descriptor.
- when a display line is empty, an empty-line header is preferably placed into the FIFO so that the display engine may release the display line for display.
- Packetized data structures have been used primarily in the communications world, where large amounts of data need to be transferred between hardware devices over a physical data link (e.g., wires). The idea is not known to have been used in the graphics world, where localized and small data control structures need to be transferred between different design entities without requiring a large off-chip memory as a buffer.
- header packets are used, and a general-purpose FIFO is used for routing. Routing may be accomplished in a relatively simple manner in the preferred embodiment because the write port of the FIFO is the only interface.
- the graphics FIFO is a synchronous 32×33 FIFO built with a static dual-port RAM with one read port and one write port.
- the write port preferably is synchronous to an 81 MHz memory clock while the read port may be asynchronous (not synchronized) to the memory clock.
- the read port is preferably synchronous to a graphics processing clock, which runs preferably at 81 MHz, but not necessarily synchronized to the memory clock.
- Two graphics FIFO pointers are preferably generated, one for the read port and one for the write port.
- each graphics FIFO pointer is a 6-bit binary counter which ranges from 000000b to 111111b, i.e., from 0 to 63.
- the graphics FIFO is only 32 words deep, so only 5 bits are needed to address each 33-bit word in the graphics FIFO. An extra pointer bit is preferably used to distinguish between the FIFO full and FIFO empty states.
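The extra pointer bit is a standard way to disambiguate the full and empty conditions, which would otherwise both present equal read and write addresses. A minimal sketch of the comparison logic, with illustrative function names:

```python
DEPTH = 32      # FIFO is 32 words deep
PTR_BITS = 6    # 5 address bits plus 1 extra bit for full/empty

def fifo_empty(wr_ptr, rd_ptr):
    # Empty: pointers identical, including the extra bit.
    return wr_ptr == rd_ptr

def fifo_full(wr_ptr, rd_ptr):
    # Full: address bits match but the extra (MSB) bits differ.
    return (wr_ptr ^ rd_ptr) == DEPTH

def advance(ptr):
    # Pointers are 6-bit binary counters wrapping from 63 to 0.
    return (ptr + 1) % (1 << PTR_BITS)
```

After 32 writes from reset the write pointer reads 32 while the read pointer is still 0: the low 5 address bits match, the extra bit differs, and the FIFO correctly reports full rather than empty.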
- the graphics data words preferably include the 1-bit data type and 32-bit graphics data bits.
- the data type is 0 for the graphics data words.
- the number of graphics data words in one DMA burst preferably does not exceed 16.
- in an alternate embodiment, a graphics display FIFO is not used.
- the graphics converter processes data from memory at the rate that it is read from memory.
- the memory and conversion functions are in the same clock domain.
- Other suitable FIFO designs may be used.
- referring to FIG. 8 , a flow diagram illustrates a process for loading and processing window descriptors.
- the system is preferably reset in step 310 .
- the system in step 312 preferably checks for a vertical sync (“VSYNC”).
- the system in step 314 preferably proceeds to load window descriptors into the window controller from the external SDRAM or other suitable memory over the DMA channel for window descriptors.
- the window controller may store up to eight window descriptors in one embodiment of the present invention.
- the system in step 316 preferably sends a new line header indicating the start of a new display line.
- the system in step 320 preferably sorts the window descriptors in accordance with the process described in reference to FIG. 7 . Although sorting is indicated as a step in this flow diagram, sorting actually may be a continuous process of selecting the bottom-most window, i.e., the window to be processed.
- the system in step 322 preferably checks to determine if a starting display line of the window is greater than the line count of the current display line. If the starting display line of the window is greater than the line count, i.e., if the current display line is above the starting display line of the bottom most window, the current display line is a blank line.
- step 318 preferably increments the line count and sends another new line header in step 316 .
- the process of sending a new line header and sorting window descriptors continues as long as the starting display line of the bottom-most (in layer order) window is below the current display line.
- the display engine and the associated graphics filter preferably operate in one of two modes, a field mode and a frame mode.
- raw graphics data associated with graphics windows is preferably stored in frame format, including lines from both interlaced fields in the case of an interlaced display.
- in the field mode, the display engine preferably skips every other display line during processing.
- the system in step 318 preferably increments the line count by two each time to skip every other line.
- the display engine processes every display line sequentially. In the frame mode, therefore, the system in step 318 preferably increments the line count by one each time.
- the system in step 324 preferably determines from the header packet whether the window descriptor is for displaying a window or re-loading the CLUT. If the window header indicates that the window descriptor is for re-loading CLUT, the system in step 328 preferably sends the CLUT data to the CLUT and turns on the CLUT write strobe to load CLUT.
- the system in step 326 preferably sends a new window header to indicate that graphics data words for a new window on the display line are going to be transferred into the graphics FIFO. Then, the system in step 330 preferably requests the DMA module to send graphics data to the graphics FIFO over the DMA channel for graphics data. In the event the FIFO does not have sufficient space to store graphics data in a new data packet, the system preferably waits until such space is made available.
- the system in step 332 preferably determines whether the last line of the current window has been transferred. If the last line has been transferred, a window descriptor done flag associated with the current window is preferably set. The window descriptor done flag indicates that the graphics data associated with the current window descriptor has been completely transferred.
- the system sets a window descriptor done flag in step 334 . Then the system in step 336 preferably sets a new window descriptor update flag and increments a window descriptor update counter to indicate that a new window descriptor is to be copied from the external memory.
- the system in step 338 preferably sets the window line done flag for the current window descriptor to signify that processing of this window descriptor on the current display line has been completed.
- the system in step 340 preferably checks the window line done flags associated with all eight window descriptors to determine whether they are all set, which would indicate that all the windows of the current display line have been processed. If not all window line done flags are set, the system preferably proceeds to step 320 to sort the window descriptors and repeat processing of the new bottom-most window descriptor.
- step 342 preferably checks whether an all window descriptor done flag has been set to determine whether all window descriptors have been processed completely.
- the all window descriptor done flag is set when all window descriptors in the current frame or field have been processed completely. If the all window descriptor done flag is set, the system preferably returns to step 310 to reset and awaits another VSYNC in step 312 . If not all window descriptors have been processed, the system in step 344 preferably determines if the new window descriptor update flag has been set. In the preferred embodiment, this flag would have been set in step 336 if the current window descriptor has been completely processed.
- the system in step 352 preferably sets up the DMA to transfer a new window descriptor from the external memory. Then the system in step 350 preferably clears the new window descriptor update flag. After the system clears the new window descriptor update flag or when the new window descriptor update flag is not set in the first place, the system in step 348 preferably increments a line counter to indicate that the window descriptors for a next display line should be processed. The system in step 346 preferably clears all eight window line done flags to indicate that none of the window descriptors have been processed for the next display line. Then the system in step 316 preferably initiates processing of the new display line by sending a new line header to the FIFO.
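The per-line control flow of steps 316 through 346 can be condensed into a sketch like the following. The descriptor dictionaries and the `send` callback are illustrative stand-ins for the hardware flags and FIFO headers described above; the real controller is a hardware state machine.

```python
def process_display_line(descriptors, line, send, field_mode):
    # descriptors: list of dicts with "layer", "is_clut_load", "line_done"
    send("new_line_header")                       # step 316
    while not all(d["line_done"] for d in descriptors):
        # "Sorting" is a continuous selection of the bottom-most
        # (lowest-layer) window still pending on this line (step 320).
        d = min((d for d in descriptors if not d["line_done"]),
                key=lambda d: d["layer"])
        # step 324/328: a descriptor may load the CLUT instead of
        # describing a window; step 326/330 otherwise sends a window.
        send("clut_data" if d["is_clut_load"] else "new_window_header")
        d["line_done"] = True                     # step 338
    # step 318/348: field mode skips every other line, frame mode does not.
    return line + (2 if field_mode else 1)
```

Running this with one CLUT-load descriptor at layer 1 and one window descriptor at layer 2 emits the CLUT data before the window header, illustrating the layer-ordered (bottom-most first) processing.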
- the graphics converter in the display engine converts raw graphics data having various different formats into a common format for subsequent compositing with video and for display.
- the graphics converter preferably includes a state machine that changes state based on the content of the window data packet. Referring to FIG. 9 , the state machine in the graphics converter preferably controls unpacking and processing of the header packets.
- a first header word processing state 354 is preferably entered wherein a first window parameter of the first header word is checked (step 356 ) to determine if the window data packet is for a first graphics window of a new line. If the header packet is not for a first window of a new line, after the first header word is processed, the state preferably changes to a second header word processing state 362 .
- if the header packet is for a first window of a new line, the state machine preferably enters a clock switch state 358 .
- the clock for a graphics line buffer which is going to store the new line switches from a display clock to a memory clock, e.g., from a 13.5 MHz clock to an 81 MHz clock.
- a graphics type in the first header word is preferably checked (step 360 ) to determine if the header packet represents an empty line.
- a graphics type of 1111b preferably refers to an empty line.
- if the graphics type is 1111b, the state machine enters the first header word processing state 354 , in which the first header word of the next header packet is processed. If the graphics type is not 1111b, i.e., the display line is not empty, the second header word is processed. Then the state machine preferably enters a graphics content state 364 wherein words from the FIFO are checked (step 366 ) one at a time to verify that they are data words. The state machine preferably remains in the graphics content state as long as each word read is a data word.
- While in the graphics content state, if a word received is not a data word, i.e., it is a first or second header word, then the state machine preferably enters a pipeline complete state 368 and then proceeds to the first header processing state 354 , where reading and processing of the next window data packet is commenced.
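The transitions of FIG. 9 described above can be summarized in a small table-driven sketch; the state names and word fields are illustrative stand-ins for the hardware signals.

```python
# States of the converter's header-unpacking state machine (FIG. 9).
FIRST_HDR, CLOCK_SWITCH, SECOND_HDR, CONTENT, PIPELINE_DONE = range(5)

def next_state(state, word):
    if state == FIRST_HDR:
        # First window of a new line -> switch the line-buffer clock.
        return CLOCK_SWITCH if word["first_window"] else SECOND_HDR
    if state == CLOCK_SWITCH:
        # Graphics type 1111b marks an empty line; go read the next packet.
        return FIRST_HDR if word["graphics_type"] == 0b1111 else SECOND_HDR
    if state == SECOND_HDR:
        return CONTENT
    if state == CONTENT:
        # Stay while data words arrive; a header word ends the window.
        return CONTENT if word["is_data"] else PIPELINE_DONE
    if state == PIPELINE_DONE:
        return FIRST_HDR
```

Feeding the sketch a first-window header, a non-empty graphics type, two data words, and then a header word walks it through every state and back to the first header state, mirroring the flow in the text.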
- the display engine 58 is preferably coupled to memory over a memory interface 370 and a CLUT over a CLUT interface 372 .
- the display engine preferably includes the graphics FIFO 132 which receives the header packets and the graphics data from the memory controller over the memory interface.
- the graphics FIFO preferably provides received raw graphics data to the graphics converter 134 which converts the raw graphics data into the common compositing format.
- the RGB to YUV converter 136 and data from the CLUT over the CLUT interface 372 are used to convert RGB formatted data and CLUT formatted data, respectively.
- the graphics converter preferably processes all of the window layers of each scan line in half the time, or less, of an interlaced display line, due to the need to have lines from both fields available in the SRAM for use by the graphics filter when frame mode filtering is enabled.
- the graphics converter operates at 81 MHz in one embodiment of the present invention, and the graphics converter is able to process up to eight windows on each scan line and up to three full width windows.
- since the graphics converter processes 81 Mpixels per second, it can convert three windows, each covering the width of the display, in half of the active display time of an interlaced scan line.
- the graphics converter processes all the window layers of each scan line in half the time of an interlaced display line, due to the need to have lines from both fields available in the SRAM for use by the graphics filter. In practice, there may be some more time available since the active display time leaves out the blanking time, while the graphics converter can operate continuously.
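The half-line throughput claim can be checked with a little arithmetic, assuming an ITU-R BT.601 display with 720 active pixels per line sampled at 13.5 MHz (the patent does not state the active pixel count explicitly):

```python
# Worked check: three full-width windows at 81 Mpix/s fit in half the
# active time of one display line. 720 pixels/line at 13.5 MHz is an
# assumed BT.601-style display geometry.
ACTIVE_PIXELS = 720
PIXEL_CLOCK = 13.5e6   # display clock
CONVERT_RATE = 81e6    # graphics converter, one pixel per clock

active_line_time = ACTIVE_PIXELS / PIXEL_CLOCK        # ~53.3 microseconds
three_windows_time = 3 * ACTIVE_PIXELS / CONVERT_RATE # ~26.7 microseconds

# Exactly half, since 81 MHz is six times 13.5 MHz and 3 windows is
# half of that factor of six.
assert abs(three_windows_time - active_line_time / 2) < 1e-12
```

As the text notes, blanking intervals give the converter even more headroom in practice, since it can run continuously while the display is active only part of each line.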
- Graphics pixels are preferably read from the FIFO in raw graphics format, using one of the multiple formats allowed in the present invention and specified in the window descriptor.
- Each pixel may occupy as little as two bits or as much as 16 bits in the preferred embodiment.
- Each pixel is converted to a YUVa24 format (also referred to as aYUV 4:4:2:2), in which two adjacent pixels share a UV pair while having unique Y and alpha values, and each of the Y, U, V and alpha components occupies eight bits.
- the conversion process is generally dependent on the pixel format type and the alpha specification method, both of which are indicated by the window descriptor for the currently active window.
- the graphics converter uses the CLUT memory to convert CLUT format pixels into RGB or YUV pixels.
- the graphics converter preferably includes a color space converter.
- the color space converter preferably is accurate for all coefficients. If the converter is accurate to eight or nine bits, it can be used to accurately convert eight-bit-per-component graphics, such as CLUT entries or RGB24 images.
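As an illustration, an eight-bit-accurate RGB-to-YUV conversion using the conventional ITU-R BT.601 full-range matrix might look as follows; the patent does not specify which coefficient set the converter uses, so the matrix here is an assumption:

```python
def rgb_to_yuv(r, g, b):
    # ITU-R BT.601 full-range coefficients (assumed; the patent does not
    # name a matrix). Inputs and outputs are 8-bit values, 0..255.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(y), clamp(u), clamp(v)
```

White maps to (255, 128, 128) and black to (0, 128, 128), the expected neutral-chroma values, which is a useful quick check of any coefficient set.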
- the graphics converter preferably produces one converted pixel per clock cycle, even when there are multiple graphics pixels packed into one word of data from the FIFO.
- the graphics processing clock, which preferably runs at 81 MHz, is used during the graphics conversion.
- the graphics converter preferably reads data from the FIFO whenever both conditions are met, including that the converter is ready to receive more data, and the FIFO has data ready.
- the graphics converter preferably receives an input from a graphics blender, which is the next block in the pipeline, which indicates when the graphics blender is ready to receive more converted graphics data. The graphics converter may stall if the graphics blender is not ready, and as a result, the graphics converter may not be ready to receive graphics data from the FIFO.
- the graphics converter preferably converts the graphics data into a YUValpha (“YUVa”) format.
- This YUVa format includes YUV 4:2:2 values plus an 8-bit alpha value for every pixel, and as such it occupies 24 bits per pixel; this format is alternately referred to as aYUV 4:4:2:2.
- the YUV444-to-YUV422 converter 138 converts graphics data with the aYUV 4:4:4:4 format from the graphics converter into graphics data with the aYUV 4:4:2:2 format and provides the data to the graphics blender 140 .
- the YUV444-to-YUV422 converter is preferably capable of performing low pass filtering to filter out high frequency components when needed.
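A minimal sketch of the 4:4:4-to-4:2:2 conversion follows, using a simple two-tap averaging filter as the assumed low-pass filter; the patent says only that a low-pass filter may be enabled during the conversion, not which kernel it uses.

```python
def yuv444_to_yuv422(pixels):
    # pixels: list of (y, u, v) tuples. In 4:2:2 output, each pair of
    # adjacent pixels keeps its own Y values but shares one UV pair.
    # Averaging the two chroma samples acts as a crude low-pass filter
    # (assumed kernel; the actual filter is not specified).
    out = []
    for i in range(0, len(pixels), 2):
        y0, u0, v0 = pixels[i]
        y1, u1, v1 = pixels[i + 1] if i + 1 < len(pixels) else pixels[i]
        out.append((y0, y1, (u0 + u1) // 2, (v0 + v1) // 2))
    return out
```

With the filter disabled, an implementation would instead simply drop every other UV pair, which preserves sharp chroma edges but can alias high-frequency chroma content.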
- the graphics converter also sends and receives clock synchronization information to and from the graphics line buffers over a clock control interface 376 .
- the graphics blender 140 When provided with the converted graphics data, the graphics blender 140 preferably composites graphics windows into graphics line buffers over a graphics line buffer interface 374 .
- the graphics windows are alpha blended into blended graphics and preferably stored in graphics line buffers.
- a color look-up table (“CLUT”) is preferably used to supply color and alpha values to the raw graphics data formatted to address information contents of the CLUT.
- For graphics windows using a color look-up table (CLUT) format it may be necessary to load specific color look-up table entries from external memory to on-chip memory before the graphics window is displayed.
- the system preferably includes a display engine that processes graphics images formatted in a plurality of formats including a color look up table (CLUT) format.
- the system provides a data structure that describes the graphics in a window, provides a data structure that provides an indicator to load a CLUT, sorts the data structures into a list according to the location of the window on the display, and loads conversion data into a CLUT for converting the CLUT-formatted data into a different data format according to the sequence of data structures on the list.
- each window on the display screen is described with a window descriptor.
- the same window descriptor is used to control CLUT loading as the window descriptor used to display graphics on screen.
- the window descriptor preferably defines the memory starting address of the graphics contents, the x position on the display screen, the width of the window, the starting vertical display line and end vertical display line, window layer, etc.
- the same window structure parameters and corresponding fields may be used to define the CLUT loading.
- the graphics contents memory starting address may define CLUT memory starting address; the width of graphics window parameter may define the number of CLUT entries to be loaded; the starting vertical display line and ending vertical display line parameters may be used to define when to load the CLUT; and the window layer parameter may be used to define the priority of CLUT loading if several windows are displayed at the same time, i.e., on the same display line.
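The field reinterpretation described above can be sketched as a dispatch on a single descriptor structure; the dictionary keys are illustrative names for the window descriptor fields:

```python
def interpret_descriptor(d):
    # One descriptor format serves two purposes. For a CLUT load:
    #   mem_start -> CLUT data address in external memory
    #   width     -> number of CLUT entries to load
    #   y_start   -> display line on which to perform the load
    #   layer     -> load priority among descriptors on the same line
    if d["is_clut_load"]:
        return ("load_clut", d["mem_start"], d["width"], d["y_start"])
    # Otherwise the same fields describe a graphics window.
    return ("draw_window", d["mem_start"], d["width"], d["y_start"])
```

Reusing one structure this way is what lets the same window controller state machine, DMA logic, and layer/priority logic handle both windows and CLUT loads, as the text notes below.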
- only one CLUT is used.
- the contents of the CLUT are preferably updated to display graphics windows with CLUT formatted data that is not supported by the current content of the CLUT.
- the CLUT is closely associated with the graphics converter.
- the CLUT consists of one SRAM with 256 entries and 32 bits per entry. In other embodiments, the number of entries and bits per entry may vary. Each entry contains three color components, in either RGB or YUV format, and an alpha component. For every CLUT-format pixel converted, the pixel data may be used as the address to the CLUT and the resulting value may be used by the converter to produce the YUVa (or alternatively RGBa) pixel value.
- the CLUT may be re-loaded by retrieving new CLUT data via the direct memory access module when needed. It generally takes longer to re-load the CLUT than the time available in a horizontal blanking interval. Accordingly, in the preferred embodiment, a whole scan line time is allowed to re-load the CLUT. While the CLUT is being reloaded, graphics images in non-CLUT formats may be displayed.
- the CLUT reloading is preferably initiated by a window descriptor that contains information regarding CLUT reloading rather than graphics window display information.
- the graphics CLUT 146 preferably includes a graphics CLUT controller 400 and a static dual-port RAM (SRAM) 402 .
- the SRAM preferably has a size of 256×32, which corresponds to 256 entries in the graphics CLUT.
- Each entry in the graphics CLUT preferably has 32 bits composed of Y+U+V+alpha from the most significant bit to the least significant bit.
- the size of each field, including Y, U, V, and alpha, is preferably eight bits.
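Given that layout (Y+U+V+alpha from most to least significant bit, eight bits each), unpacking one 32-bit CLUT entry is straightforward:

```python
def unpack_clut_entry(entry):
    # 32-bit CLUT entry: Y in bits 31..24, U in 23..16,
    # V in 15..8, alpha in 7..0, per the layout in the text.
    y = (entry >> 24) & 0xFF
    u = (entry >> 16) & 0xFF
    v = (entry >> 8) & 0xFF
    alpha = entry & 0xFF
    return y, u, v, alpha
```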
- the graphics CLUT preferably has a write port that is synchronized to an 81 MHz memory clock and a read port that may be asynchronous to the memory clock.
- the read port is preferably synchronous to the graphics processing clock, which runs preferably at 81 MHz, but not necessarily synchronized to the memory clock.
- the static dual-port RAM (“SRAM”) is preferably addressed by a read address which is provided by graphics data in the CLUT images.
- the graphics data is preferably output as read data 414 when a memory address in the CLUT containing that graphics data is addressed by a read address 412 .
- the window controller preferably controls the write port with a CLUT memory request signal 404 and a CLUT memory write signal 408 .
- CLUT memory data 410 is also preferably provided to the graphics CLUT via the direct memory access module from the external memory.
- the graphics CLUT controller preferably receives the CLUT memory data and provides the received CLUT memory data to the SRAM for writing.
- an exemplary timing diagram shows different signals involved during a writing operation of the CLUT.
- the CLUT memory request signal 418 is asserted when the CLUT is to be re-loaded.
- a rising edge of the CLUT memory request signal 418 is used to reset a write pointer associated with the write port.
- the CLUT memory write signal 420 is asserted to indicate the beginning of a CLUT re-loading operation.
- the CLUT memory data 422 is provided synchronously to the 81 MHz memory clock 416 to be written to the SRAM.
- the write pointer associated with the write port is updated each time the CLUT is loaded with CLUT memory data.
- the process of reloading a CLUT is associated with the process of processing window descriptors illustrated in FIG. 8 since CLUT re-loading is initiated by a window descriptor.
- referring to steps 324 and 328 of FIG. 8 , if the window descriptor is determined to be for reloading the CLUT in step 324 , the system in step 328 sends the CLUT data to the CLUT.
- the window descriptor for the CLUT reloading may appear anywhere in the window descriptor list. Accordingly, the CLUT reloading may take place at any time whenever CLUT data is to be updated.
- with the CLUT loading mechanism in one embodiment of the present invention, more than one window with different CLUT tables may be displayed on the same display line.
- only the minimum required entries are preferably loaded into the CLUT, instead of loading all the entries every time.
- the loading of only the minimum required entries may save memory bandwidth and enables more functionality.
- the CLUT loading mechanism is preferably relatively flexible and easy to control, making it suitable for various applications.
- the CLUT loading mechanism of the present invention may also simplify hardware design, as the same state machine for the window controller may be used for CLUT loading.
- the CLUT preferably also shares the same DMA logic and layer/priority control logic as the window controller.
- the system preferably blends a plurality of graphics images using line buffers.
- the system initializes a line buffer by loading the line buffer with data that represents transparent black, obtains control of a line buffer for a compositing operation, composites graphics contents into the line buffer by blending the graphics contents with the existing contents of the line buffer, and repeats the step of compositing graphics contents into the line buffer until all of the graphics surfaces for the particular line have been composited.
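The initialize-then-composite loop described above can be sketched as follows. The YUVa tuple representation and the blend equation are illustrative; the text specifies only that buffers start as transparent black and that window contents are alpha-blended with the existing buffer contents.

```python
# Illustrative transparent-black pixel; the real buffers hold YUVa24 data.
TRANSPARENT_BLACK = (0, 0, 0, 0)  # (y, u, v, alpha)

def init_line_buffer(width):
    # Clearing to transparent black lets every window, including the
    # first, be blended by the same rule.
    return [TRANSPARENT_BLACK] * width

def composite(line_buffer, window_pixels, x_start):
    # Read-modify-write alpha blend of one window into the line buffer
    # (an assumed "source over destination" rule with 8-bit alpha).
    for i, (y, u, v, a) in enumerate(window_pixels):
        Y, U, V, A = line_buffer[x_start + i]
        blend = lambda s, d: (a * s + (255 - a) * d) // 255
        line_buffer[x_start + i] = (blend(y, Y), blend(u, U), blend(v, V),
                                    a + A * (255 - a) // 255)
    return line_buffer
```

Because the buffer is pre-cleared, compositing zero windows into a line still yields valid (fully transparent) pixel data, matching the behavior described for the hardware.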
- the graphics line buffer temporarily stores composited graphics images (blended graphics).
- a graphics filter preferably uses blended graphics in line buffers to perform vertical filtering and scaling operations to generate output graphics images.
- the display engine composites graphics images line by line using a clock rate that is faster than the pixel display rate, and graphics filters run at the pixel display rate.
- multiple lines of graphics images may be composited in parallel.
- the line buffers may not be needed. Where line buffers are used, the system may incorporate an innovative control scheme for providing the line buffers containing blended graphics to the graphics filter and releasing the line buffers that are used up by the graphics filter.
- the line buffers are preferably built with synchronous static dual-port random access memory (“SRAM”) and dynamically switch their clocks between a memory clock and a display clock.
- SRAM static dual-port random access memory
- Each line buffer is preferably loaded with graphics data using the memory clock and the contents of the line buffer is preferably provided to the graphics filter synchronously to the display clock.
- the memory clock is an 81 MHz clock used by the graphics converter to process graphics data while the display clock is a 13.5 MHz clock used to display graphics and video signals on a television screen. Other embodiments may use other clock speeds.
- the graphics line buffer preferably includes a graphics line buffer controller 500 and line buffers 504 .
- the graphics line buffer controller 500 preferably receives memory clock buffer control signals 508 as well as display clock buffer control signals 510 .
- the memory clock control signals and the display clock control signals are used to synchronize the graphics line buffers to the memory clock and the display clock, respectively.
- the graphics line buffer controller receives a clock selection vector 514 from the display engine to control which graphics line buffers are to operate in which clock domain.
- the graphics line buffer controller returns a clock enable vector to the display engine to indicate clock synchronization settings in accordance with the clock selection vector.
- the line buffers 504 include seven line buffers 506 a - g .
- the line buffers temporarily store lines of YUVa24 graphics pixels that are used by a subsequent graphics filter. This allows four line buffers to be used for filtering and scaling, two to be available for advancing by one or two lines at the end of every line, and one for the current compositing operation.
- Each of the ports to the SRAM including line buffers is 24 bits wide to accommodate graphics data in YUVa24 format in this embodiment of the present invention.
- the SRAM has one read port and one write port.
- One read port and one write port are used for the graphics blender interface, which performs a read-modify-write typically once per clock cycle.
- an SRAM with only one port is used.
- the data stored in the line buffers may be YUVa32 (4:4:4:4), RGBa32, or other formats.
- the line buffers are preferably controlled by the graphics line buffer controller over a line buffer control interface 502 . Over this interface, the graphics line buffer controller transfers graphics data to be loaded to the line buffers.
- the graphics filter reads contents of the line buffers over a graphics line buffer interface 516 and clears the line buffers by loading them with transparent black pixels prior to releasing them to be loaded with more graphics data for display.
- a flow diagram of a process of using line buffers to provide composited graphics data from a display engine to a graphics filter is illustrated.
- the system in step 522 receives a vertical sync (VSYNC) indicating a field start.
- all line buffers preferably operate in the memory clock domain. Accordingly, the line buffers are synchronized to the 81 MHz memory clock in one embodiment of the present invention. In other embodiments, the speed of the memory clock may be different from 81 MHz, or the line buffers may not operate in the clock domain of the main memory.
- the system in step 524 preferably resets all line buffers by loading them with transparent black pixels.
- the system in step 526 preferably stores composited graphics data in the line buffers. Since all buffers are cleared at every field start by the display engine to the equivalent of transparent black pixels, the graphics data may be blended the same way for any graphics window, including the first graphics window to be blended. Regardless of how many windows are composited into a line buffer, including zero windows, the result is preferably always the correct pixel data.
- the system in step 528 preferably detects a horizontal sync (HSYNC) which signifies a new display line.
- the graphics blender preferably receives a line buffer release signal from the graphics filter when one or more line buffers are no longer needed by the graphics filter. Since four line buffers are used with the four-tap graphics filter at any given time, one to three line buffers are preferably made available for use by the graphics blender to begin constructing new display lines in them. Once a line buffer release signal is recognized, an internal buffer usage register is updated and then clock switching is performed to enable the display engine to work on the newly released one to three line buffers. In other embodiments, the number of line buffers may be more or less than seven, and more or less than three line buffers may be released at a time.
- the system in step 534 preferably performs clock switching.
- Clock switching is preferably done in the memory clock domain by the display engine using a clock selection vector.
- Each bit of the clock selection vector preferably corresponds to one of the graphics line buffers. Therefore, in one embodiment of the present invention with seven graphics line buffers, there are seven bits in the clock selection vector. For example, a corresponding bit of logic 1 in the clock selection vector indicates that the line buffer operates in the memory clock domain while a corresponding bit of logic 0 indicates that the line buffer operates in the display clock domain.
- Clock switching logic preferably switches between the memory clock and the display clock in accordance with the clock selection vector.
- the clock selection vector is preferably also used to multiplex the memory clock buffer control signals and the display clock buffer control signals.
- clock switching preferably is done at the field start and the line start to accommodate the graphics filter to access graphics data in real-time. At the field and line starts, clock switching may be done without causing glitches on the display side. Clock switching typically requires a dead cycle time.
- a clock enable vector indicates that the graphics line buffers are ready to synchronize to the clocks again.
- the clock enable vector is preferably the same size as the clock selection vector. The clock enable vector is returned to the display engine to be compared with the clock selection vector.
- the clock selection vector is sent by the display engine to the graphics line buffer block.
- the clocks are preferably disabled to ensure a glitch-free clock switching.
- the graphics line buffers send the clock enable vector to the display engine with the clock synchronization settings requested in the clock selection vector.
- the display engine compares contents of the clock selection vector and the clock enable vector. When the contents match, the clock synchronization is preferably turned on again.
- the system in step 536 preferably provides the graphics data in the line buffers to the graphics filter for anti-flutter filtering, sample rate conversion (SRC) and display.
- the system looks for a VSYNC in step 538 . If the VSYNC is detected, the current field has been completed, and therefore, the system in step 530 preferably switches clocks for all line buffers to the memory clock and resets the line buffers in step 524 for display of another field. If the VSYNC is not detected in step 538 , the current display line is not the last display line of the current field. The system continues to step 528 to detect another HSYNC for processing and displaying of the next display line of the current field.
- Graphics memory buffers are conventionally implemented using low-cost DRAM, SDRAM, for example. Such memory devices are typically slow and may require each burst transfer to be within a page. Smooth (or soft) horizontal scrolling, however, preferably enables the starting address to be set to any arbitrary pixel. This may conflict with the transfer of data in bursts within the well-defined pages of DRAM. In addition, complex control logic may be required to monitor if page boundaries are to be crossed during the transfer of pixel maps for each step during soft horizontal scrolling.
- an implementation of a soft horizontal scrolling mechanism is achieved by incrementally modifying the content of a window descriptor for a particular graphics window.
- the window soft horizontal scrolling mechanism preferably enables positioning the contents of graphics windows on arbitrary positions on a display line.
- the soft horizontal scrolling of graphics windows is implemented based on an architecture in which each graphics window is independently stored in a normal graphics buffer memory device (SDRAM, EDO-DRAM, DRAM) as a separate object. Windows are composed on top of each other in real time as required. To scroll a window to the left or right, a special field is defined in the window descriptor that tells how many pixels are to be shifted to the left or right.
- the system according to the present invention provides a method of horizontally scrolling a display window to the left, which includes the steps of blanking out one or more pixels at a beginning of a portion of graphics data, the portion being aligned with a start address; and displaying the graphics data starting at the first non-blanked out pixel in the portion of the graphics data aligned with the start address.
- the system according to the present invention also provides a method of horizontally scrolling a display window to the right which includes the steps of moving a read pointer to a new start address that is immediately prior to a current start address, blanking out one or more pixels at a beginning of a portion of graphics data, the portion being aligned to the new start address, and displaying the graphics data starting at the first non-blanked out pixel in the portion of the graphics data aligned with the new start address.
- each graphics window is preferably addressed using an integer word address. For example, if the memory system uses 32 bit words, then the address of the start of a window is defined to be aligned to a multiple of 32 bits, even if the first pixel that is desired to be displayed is not so aligned.
- Each graphics window also preferably has associated with it a horizontal offset parameter, in units of pixels, that indicates a number of pixels to be ignored, starting at the indicated starting address, before the active display of the window starts.
- the horizontal offset parameter is the blank start pixel value in word 3 of the window descriptor. For example, if the memory system uses 32-bit words and the graphics format of a window uses 8 bits per pixel, each 32-bit word contains four pixels. In this case, the display of the window may ignore one, two or three pixels (8, 16, or 24 bits), causing an effective left shift of one, two, or three pixels.
- the memory system uses 32-bit words. In other embodiments, the memory system may use more or fewer bits per word, such as 16 bits per word or 64 bits per word. In addition, pixels in other embodiments may have various numbers of bits per pixel, such as 1, 2, 4, 8, 16, 24 and 32.
- a first pixel (e.g., the first 8 bits) 604 of a 32-bit word 600 which is aligned to the start address, is blanked out.
- Prior to blanking out, a read pointer 602 points to the first bit of the 32-bit word. After blanking out, the read pointer 602 points to the ninth bit of the 32-bit word.
- a shift of four pixels is implemented by changing the start address by one to the next 32-bit word. Shifts of any number of pixels are thereby implemented by a combination of adjusting the starting word address and adjusting the pixel shift amount.
- the same mechanism may be used for any number of bits per pixel (1, 2, 4, etc.) and any memory word size.
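The leftward-scroll addressing described above can be sketched as follows (a hypothetical Python helper, not code from the patent; the function name and the 32-bit word default are illustrative assumptions):

```python
def left_scroll_params(first_pixel, bits_per_pixel, word_bits=32):
    """Map an arbitrary starting pixel to a word-aligned start address plus
    a blank-out count, per the leftward soft-scrolling scheme: the start
    address is aligned to a memory word, and the horizontal offset parameter
    says how many leading pixels of that word to ignore."""
    pixels_per_word = word_bits // bits_per_pixel
    start_word = first_pixel // pixels_per_word     # word-aligned start address
    blank_pixels = first_pixel % pixels_per_word    # leading pixels to blank out
    return start_word, blank_pixels

# 8 bits per pixel: four pixels per 32-bit word. Starting the display at
# pixel 5 means word 1 with one leading pixel blanked (a one-pixel left shift).
print(left_scroll_params(5, 8))  # (1, 1)
```

A shift of a full word's worth of pixels simply advances the start word with no blanking, matching the combination of word-address and pixel-shift adjustments described above.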
- the shifting cannot be achieved simply by blanking some of the bits at the start address since any blanking at the start will simply have an effect of shifting pixels to the left. Further, the shifting to the right cannot be achieved by blanking some of the bits at the end of the last data word of a display line since display of a window starts at the start address regardless of the position of the last pixel to be displayed.
- a read pointer pointing at the start address is preferably moved to an address that is just before the start address, thereby making that address the new start address. Then, a portion of the data word aligned with the new start address is blanked out. This provides the effect of shifting the graphics display to the right.
- a memory system may use 32-bit words and the graphics format of a window may use 2 bits per pixel, e.g., a CLUT 2 format. If the graphics display is to be shifted by a pixel to the right, the read pointer is moved to an address that is just before the start address, and that address becomes a new start address. Then, the first 30 bits of the 32-bit word that is aligned with the new start address are blanked out. In this case, blanking out of a portion of the 32-bit word that is aligned with the new start address has the effect of shifting the graphics display to the right.
- a 32-bit word 610 that is aligned with the starting address is shifted to the right by one pixel.
- the 32-bit word 610 has a CLUT 2 format, and therefore contains 16 pixels.
- a read pointer 612 points at the beginning of the 32-bit word 610 .
- an address that is just before the start address is made a new start address.
- a 32-bit data word 618 is aligned with the new start address.
- the first 30 bits (15 pixels) 616 of the 32-bit data word 618 aligned with the new start address are blanked out.
- the read pointer 612 points at a new location, which is the 31st bit of the new start address.
- the 31st bit and the 32nd bit of the new start address may constitute a pixel 618 . Insertion of the pixel 618 in front of 16 pixels of the 32-bit data word 610 effectively shifts those 16 pixels to the right by one pixel.
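The rightward case can be sketched the same way (again a hypothetical helper; the names and 32-bit default are assumptions, not the patent's):

```python
def right_scroll_params(shift_pixels, bits_per_pixel, start_word, word_bits=32):
    """Shift a window right by shift_pixels: move the read pointer back to
    the word just before the current start address, then blank out all but
    the last shift_pixels pixels of that word, so only those pixels are
    inserted in front of the previously displayed data."""
    pixels_per_word = word_bits // bits_per_pixel
    assert 0 < shift_pixels < pixels_per_word
    new_start_word = start_word - 1                  # the new start address
    blank_pixels = pixels_per_word - shift_pixels    # leading pixels to blank
    return new_start_word, blank_pixels

# CLUT 2 format: 2 bits per pixel, 16 pixels per 32-bit word. A one-pixel
# right shift blanks the first 15 pixels (30 bits) of the preceding word.
print(right_scroll_params(1, 2, start_word=10))  # (9, 15)
```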
- a graphical element or glyph generally represents an image of text or graphics.
- A graphical element may refer to text glyphs or graphics.
- graphical elements are rendered as arrays of pixels (picture elements) with two states for every pixel, i.e. the foreground and background colors.
- the background color is transparent, allowing video or other graphics to show through.
- the interlaced nature of TV displays causes horizontal edges of graphical elements, or any portion of graphical elements with a significant vertical gradient, to show a “fluttering” appearance with conventional methods.
- Some conventional methods blend the edges of graphical elements with background colors in a frame buffer, by first reading the color in the frame buffer at every pixel where the graphical element will be written, combining that value with the foreground color of the graphical element, and writing the result back to the frame buffer memory.
- This method requires a frame buffer; it requires the frame buffer to use a color format that supports such blending operations, such as RGB24 or RGB16; and it does not generally support combining graphical elements over full motion video, as such functionality may require repeating the read, combine and write-back operation for all pixels of all graphical elements on every frame or field of the video in a timely manner.
- the system preferably displays a graphical element by filtering the graphical element with a low pass filter to generate a multi-level value per pixel at an intended final display resolution and uses the multi-level values as alpha blend values for the graphical element in the subsequent compositing stage.
- a method of displaying graphical elements on televisions and other displays is used.
- a deep color frame buffer with, for example, 16, 24, or 32 bits per pixel is not required to implement this method since this method is effective with as few as two bits per pixel.
- this method may result in a significant reduction in both the memory space and the memory bandwidth required to display text and graphics.
- the method preferably provides high quality when compared with conventional methods of anti-aliased text, and produces higher display quality than is available with conventional methods that do not support anti-aliased text.
- a flow diagram illustrates a process of providing very high quality display of graphical elements in one embodiment of the present invention.
- the bi-level graphical elements are filtered by the system in step 652 .
- the graphical elements are preferably initially rendered by the system in step 650 at a significantly higher resolution than the intended final display resolution, for example, four times the final resolution in both horizontal and vertical axes.
- the filter may be any suitable low pass filter, such as a “box” filter.
- the result of the filtering operation is a multi-level value per pixel at the intended display resolution.
- the number of levels may be reduced to fit the number of bits used in the succeeding steps.
- the system in step 654 determines whether the number of levels are to be reduced by reducing the number of bits used. If the system determines that the number of levels are to be reduced, the system in step 656 preferably reduces the number of bits.
- the result of box-filtering 4×4 super-sampled graphical elements normally results in 17 possible levels; these may be converted through truncation or other means to 16 levels to match a 4-bit representation, or eight levels to match a 3-bit representation, or four levels to match a 2-bit representation.
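The filtering step can be sketched as a minimal Python box filter, assuming a 4×4 super-sampled bi-level glyph (the patent allows any suitable low-pass filter, and the clamping shown here is just one of the "truncation or other means"):

```python
def box_filter_4x4(glyph, out_bits=None):
    """Box-filter a bi-level glyph rendered at four times the display
    resolution in each axis. glyph is a list of rows of 0/1 values with
    dimensions divisible by 4. Each output pixel is the count of set
    sub-pixels in its 4x4 cell, i.e. one of 17 levels (0..16)."""
    out = []
    for y in range(0, len(glyph), 4):
        row = []
        for x in range(0, len(glyph[0]), 4):
            level = sum(glyph[y + j][x + i] for j in range(4) for i in range(4))
            if out_bits is not None:
                # clamp 17 levels into 2**out_bits levels, e.g. 16 for 4 bits
                level = min(level, 2 ** out_bits - 1)
            row.append(level)
        out.append(row)
    return out

# A fully covered 4x4 cell filters to the maximum level.
solid = [[1] * 4 for _ in range(4)]
print(box_filter_4x4(solid))     # [[16]]
print(box_filter_4x4(solid, 4))  # [[15]]
```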
- the filter may provide a required vertical axis low pass filter function to provide anti-flutter filter effect for interlaced display.
- the system preferably uses the resulting multi-level values, either with or without reduction in the number of bits, as alpha blend values, which are preferably pixel alpha component values, for the graphical elements in a subsequent compositing stage.
- the multi-level graphical element pixels are preferably written into a graphics display buffer where the values are used as alpha blend values when the display buffer is composited with other graphics and video images.
- the display buffer is defined to have a constant foreground color consistent with the desired foreground color of the text or graphics, and the value of every pixel in the display buffer is defined to be the alpha blend value for that pixel.
- an Alpha-4 format specifies four bits per pixel of alpha blend value in a graphics window, where the 4 bits define alpha blend values of 0/16, 1/16, 2/16, . . . , 13/16, 14/16, and 16/16. The value 15/16 is skipped in this example in order to obtain the endpoint values of 0 and 16/16 (1) without requiring the use of an additional bit.
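The Alpha-4 mapping described here can be written out directly (the function name is hypothetical; the skipped 15/16 value is from the text):

```python
def alpha4_to_blend(code):
    """Map a 4-bit Alpha-4 code to its blend fraction. Codes 0..14 mean
    0/16..14/16; code 15 means fully opaque (16/16). The value 15/16 is
    skipped so both endpoints 0 and 1 are reachable without an extra bit."""
    assert 0 <= code <= 15
    return (16 if code == 15 else code) / 16

print(alpha4_to_blend(0))   # 0.0
print(alpha4_to_blend(15))  # 1.0
```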
- the display window has a constant foreground color which is specified in the window descriptor.
- the alpha blend value per pixel is specified for every pixel in the graphical element by choosing a CLUT index for every pixel, where the CLUT entry associated with every index contains the desired alpha blend value as part of the CLUT contents.
- a graphical element with a constant foreground color and 4 bits of alpha per pixel can be encoded in a CLUT 4 format such that every pixel of the display buffer is defined to be a 4 bit CLUT index, and each of the associated 16 CLUT entries has the appropriate alpha blend value (0/16, 1/16, 2/16, . . . , 14/16, 16/16) as well as the (same) constant foreground color in the color portion of the CLUT entries.
- the alpha per pixel values are used to form the alpha portion of color+alpha pixels in the display buffer, such as alphaRGB(4,4,4,4) with 4 bits for each of alpha, Red, Green, and Blue, or alphaRGB32 with 8 bits for each component.
- This format does not require the use of a CLUT.
- the graphical element may or may not have a constant foreground color.
- the various foreground colors are processed using a low-pass filter as described earlier, and the outline of the entire graphical element (including all colors other than the background) is separately filtered also using a low pass filter as described.
- the filtered foreground color is used as either the direct color value in, e.g., an alphaRGB format (or other color space, such as alphaYUV) or as the color choice in a CLUT format, and the result of filtering the outline is used as the alpha per pixel value in either a direct color format such as alphaRGB or as the choice of alpha value per CLUT entry in a CLUT format.
- the graphical elements are displayed on the TV screen by compositing the display buffer containing the graphical elements with optionally other graphics and video contents while blending the subject display buffer with all layers behind it using the alpha per pixel values created in the preceding steps. Additionally, the translucency or opacity of the entire graphical element may be varied by specifying the alpha value of the display buffer via such means as the window alpha value that may be specified in a window descriptor.
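A minimal per-pixel sketch of this compositing step, assuming straightforward linear blending of 8-bit RGB components (the hardware compositor described elsewhere in this document is considerably more elaborate):

```python
def composite_pixel(fg_color, pixel_alpha, window_alpha, bg_color):
    """Blend one pixel of a constant-foreground graphical element over the
    composited layers behind it. pixel_alpha is the filtered per-pixel value
    (0..1); window_alpha scales the opacity of the entire element, like the
    window alpha value specified in a window descriptor."""
    a = pixel_alpha * window_alpha
    return tuple(round(a * f + (1 - a) * b) for f, b in zip(fg_color, bg_color))

# 50% per-pixel alpha, fully opaque window: midway between the two colors.
print(composite_pixel((255, 0, 0), 0.5, 1.0, (0, 0, 255)))  # (128, 0, 128)
```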
- a composite video signal (analog video) is received into the system, it is preferably digitized and separated into YUV (luma and chroma) components for processing.
- Samples taken for YUV are preferably synchronized to a display clock for compositing with graphics data at the video compositor.
- Mixing or overlaying of graphics with decoded analog video may require synchronizing the two image sources exactly.
- Undesirable artifacts such as jitter may be visible on the display unless a synchronization mechanism is implemented to correctly synchronize the samples from the analog video to the display clock.
- analog video often does not adhere strictly to the television standards such as NTSC and PAL.
- analog video which originates in VCRs may have synchronization signals that are not aligned with chroma reference signals and also may have inconsistent line periods.
- the synchronization mechanism preferably should correctly synchronize samples from non-standard analog videos as well.
- the system therefore, preferably includes a video synchronizing mechanism that includes a first sample rate converter for converting a sampling rate of a stream of video samples to a first converted rate, a filter for processing at least some of the video samples with the first converted rate, and a second sample rate converter for converting the first converted rate to a second converted rate.
- the video decoder 50 preferably samples and synchronizes the analog video input.
- the video receiver preferably receives an analog video signal 706 into an analog-to-digital converter (ADC) 700 where the analog video is digitized.
- the digitized analog video 708 is preferably sub-sampled by a chroma-locked sample rate converter (SRC) 70 .
- a sampled video signal 710 is provided to an adaptive 2H comb filter/chroma demodulator/luma processor 702 to be separated into YUV (luma and chroma) components. In the 2H comb filter/chroma demodulator/luma processor 702 , the chroma components are demodulated.
- the luma component is preferably processed by noise reduction, coring and detail enhancement operations.
- the adaptive 2H comb filter provides the sampled video 712 , which has been separated into luma and chroma components and processed, to a line-locked SRC 704 .
- the luma and chroma components of the sampled video are preferably sub-sampled once again by the line-locked SRC and the sub-sampled video 714 is provided to a time base corrector (TBC) 72 .
- the time base corrector preferably provides an output video signal 716 that is synchronized to a display clock of the graphics display system. In one embodiment of the present invention, the display clock runs at a nominal 13.5 MHz.
- the synchronization mechanism preferably includes the chroma-locked SRC 70 , the line-locked SRC 704 and the TBC 72 .
- the chroma-locked SRC outputs samples that are locked to chroma subcarrier and its reference bursts while the line-locked SRC outputs samples that are locked to horizontal syncs.
- samples of analog video are over-sampled by the ADC 700 and then down-sampled by the chroma-locked SRC to four times the chroma sub-carrier frequency (Fsc).
- the down-sampled samples are down-sampled once again by the line-locked SRC to line-locked samples with an effective sample rate of nominally 13.5 MHz.
- the time base corrector is used to align these samples to the display clock, which runs nominally at 13.5 MHz.
- Analog composite video has a chroma signal frequency interleaved in frequency with the luma signal.
- this chroma signal is modulated onto the Fsc of approximately 3.579545 MHz, or exactly 227.5 times the horizontal line rate.
- the luma signal covers a frequency span of zero to approximately 4.2 MHz.
- One method for separating the luma from the chroma is to sample the video at a rate that is a multiple of the chroma sub-carrier frequency, and use a comb filter on the sampled data. This method generally imposes a limitation that the sampling frequency is a multiple of the chroma sub-carrier frequency (Fsc).
- a chroma-locked sampling frequency generally imposes significant costs and complications on the implementation, as it may require the creation of a sample clock of the correct frequency, which itself may require a stable, low noise controllable oscillator (e.g. a VCXO) in a control loop that locks the VCXO to the chroma burst frequency.
- Different sample frequencies are typically required for different video standards with different chroma subcarrier frequencies.
- Sampling at four times the subcarrier frequency, i.e. 14.318 MHz for NTSC standard and 17.72 MHz for PAL standard generally requires more anti-alias filtering before digitization than is required when sampling at higher frequencies such as 27 MHz.
- such a chroma-locked clock frequency is often unrelated to the other frequencies in a large scale digital device, requiring multiple clock domains and asynchronous internal interfaces.
- the sampling frequency preferably is 27 MHz and preferably is not locked to the input video signal in phase or frequency.
- the sampled video data then goes through the chroma-locked SRC that down-samples the data to an effective sampling rate of 4Fsc. This and all subsequent operations are preferably performed in digital processing in a single integrated circuit.
- the effective sample rate of 4Fsc does not require a clock frequency that is actually at 4Fsc, rather the clock frequency can be almost any higher frequency, such as 27 MHz, and valid samples occur on some clock cycles while the overall rate of valid samples is equal to 4Fsc.
- the down-sampling (decimation) rate of the SRC is preferably controlled by a chroma phase and frequency tracking module.
- the chroma phase and frequency tracking module looks at the output of the SRC during the color burst time interval and continuously adjusts the decimation rate in order to align the color burst phase and frequency.
- the chroma phase and frequency tracking module is implemented as a logical equivalent of a phase locked loop (PLL), where the chroma burst phase and frequency are compared in a phase detector to the effective sample rate, which is intended to be 4Fsc, and the phase and frequency error terms are used to control the SRC decimation rate.
- the decimation function is applied to the incoming sampled video, and therefore the decimation function controls the chroma burst phase and frequency that is applied to the phase detector.
- This system is a closed feedback loop (control loop) that functions in much the same way as a conventional PLL, and its operating parameters are readily designed in the same way as those of PLLs.
- the chroma-locked SRC 70 preferably includes a sample rate converter (SRC) 730 , a chroma tracker 732 and a low pass filter (LPF) 734 .
- the SRC 730 is preferably a polyphase filter having time-varying coefficients.
- the SRC is preferably implemented with 35 phases and the conversion ratio of 35/66.
- the SRC 730 preferably interpolates by exactly 35 and decimates by (66+epsilon), i.e. the decimation rate is preferably adjustable within a range determined by the minimum and maximum values of epsilon, generally a small range.
- Epsilon is a first adjustment value, which is used to adjust the decimation rate of a first sample rate converter, i.e., the chroma-locked sample rate converter.
- Epsilon is preferably generated by the control loop comprising the chroma tracker 732 and the LPF 734 , and it can be negative, positive or zero.
- the chroma tracker tracks phase and frequency of the chroma bursts and compares them against an expected pattern.
- the conversion rate of the chroma-locked SRC is adjusted so that, in effect, the SRC samples the chroma burst at exactly four times per chroma sub-carrier cycle.
- the SRC takes the samples at phases 0 degrees, 90 degrees, 180 degrees and 270 degrees of the chroma sub-carrier cycle. This means that a sample is taken at every cycle of the color sub-carrier at a zero crossing, a positive peak, a zero crossing and a negative peak, (0, +1, 0, −1). If the pattern obtained from the samples is different from (0, +1, 0, −1), this difference is detected and the conversion ratio needs to be adjusted inside the control loop.
- When the output samples of the chroma-locked SRC are lower in frequency or behind in phase, e.g., the pattern looks like (−1, 0, +1, 0), the chroma tracker 732 will make epsilon negative. When epsilon is negative, the sample rate conversion ratio is higher than the nominal 35/66, and this has the effect of increasing the frequency or advancing the phase of samples at the output of the chroma-locked SRC. When the output samples of the chroma-locked SRC are higher in frequency or leading in phase, e.g., the pattern looks like (+1, 0, −1, 0), the chroma tracker 732 will make epsilon positive.
- When epsilon is positive, the sample rate conversion ratio is lower than the nominal 35/66, and this has the effect of decreasing the frequency or retarding the phase of samples out of the chroma-locked SRC.
- the chroma tracker provides error signal 736 to the LPF 734 that filters the error signal to filter out high frequency components and provides the filtered error signal to the SRC to complete the control loop.
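The control loop can be sketched as follows (an illustrative first-order loop in Python; the phase-detector sign convention and the single loop gain standing in for the LPF are assumptions, and a real design would be parameterized like a conventional PLL as noted above):

```python
def burst_phase_error(samples):
    """Phase detector: with exactly four samples per subcarrier cycle the
    color burst should read (0, +1, 0, -1) repeating, so the samples that
    should fall on zero crossings directly measure the phase error."""
    err = 0.0
    for i, s in enumerate(samples):
        if i % 4 == 0:       # expected rising zero crossing
            err += s
        elif i % 4 == 2:     # expected falling zero crossing
            err -= s
    return err / (len(samples) / 2)

class ChromaTracker:
    """Drives epsilon toward the value that locks the SRC output to 4Fsc.
    A lagging pattern like (-1, 0, +1, 0) yields a negative epsilon (ratio
    above the nominal 35/66); a leading pattern like (+1, 0, -1, 0) yields
    a positive epsilon (ratio below 35/66), as described in the text."""
    def __init__(self, gain=0.01):
        self.gain = gain        # stands in for the low pass filter / loop gain
        self.epsilon = 0.0      # the first adjustment value
    def update(self, burst_samples):
        self.epsilon += self.gain * burst_phase_error(burst_samples)
        return 35.0 / (66.0 + self.epsilon)   # adjusted conversion ratio
```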
- the sampling clock may run at the system clock frequency or at the clock frequency of the destination of the decoded digital video. If the sampling clock is running at the system clock, the cost of the integrated circuit may be lower than one that has a system clock and a sub-carrier locked video decoder clock. A one clock integrated circuit may also cause less noise or interference to the analog-to-digital converter on the IC.
- the system is preferably all digital, and does not require an external crystal or a voltage controlled oscillator.
- an alternate embodiment of the chroma-locked SRC 70 preferably varies the sampling rate while the conversion rate is held constant.
- a voltage controlled oscillator (e.g., VCXO) 760 varies the sampling rate by providing a sampling frequency signal 718 to the ADC 700 .
- the conversion rate in this embodiment is fixed at 35/66 in the SRC 750 , which is the ratio between four times the chroma sub-carrier frequency and 27 MHz.
- the chroma burst signal at the output of the chroma-locked SRC is compared with the expected chroma burst signal in a chroma tracker 752 .
- the error signals 756 from the comparison between the converted chroma burst and the expected chroma burst are passed through a low pass filter 754 and then filtered error signals 758 are provided to the VCXO 760 to control the oscillation frequency of the VCXO.
- the oscillation frequency of the VCXO changes in response to the voltage level of the provided error signals.
- Use of input voltage to control the oscillation frequency of a VCXO is well known in the art.
- the system as described here is a form of a phase locked loop (PLL), the design and use of which is well known in the art.
- the samples with the effective sample rate of 4 Fsc are preferably decimated to samples with a sample rate of nominally 13.5 MHz through the use of a second sample rate converter. Since this sample rate is less than the electrical clock frequency of the digital integrated circuit in the preferred embodiment, only some clock cycles carry valid data.
- the sample rate is preferably converted to 13.5 MHz, and is locked to the horizontal line rate through the use of horizontal sync signals.
- the second sample rate converter is a line-locked sample rate converter (SRC).
- the line-locked sample rate converter converts the current line of video to a constant (Pout) number of pixels.
- This constant number of pixels Pout is normally 858 for ITU-R BT.601 applications and 780 for NTSC square pixel applications.
- the current line of video may have a variable number of pixels (Pin).
- the number of input samples Pin of the current line of video is accurately measured.
- This line measurement is used to calculate the sample rate conversion ratio needed to convert the line to exactly Pout samples.
- An adjustment value to the sample rate conversion ratio is passed to a sample rate converter module in the line-locked SRC to implement the calculated sample rate conversion ratio for the current line.
- the sample conversion ratio is calculated only once for each line.
- the line-locked SRC also scales YUV components to the proper amplitudes required by ITU-R BT.601.
- the number of samples detected in a horizontal line may be more or less if the input video is a non-standard video. For example, if the incoming video is from a VCR, and the sampling rate is four times the color sub-carrier frequency (4Fsc), then the number of samples taken between two horizontal syncs may be more or less than 910, where 910 is the number of samples per line that is obtained when sampling NTSC standard video at a sampling frequency of 4Fsc. For example, the horizontal line time from a VCR may vary if the video tape has been stretched.
- the horizontal line time may be accurately measured by detecting two successive horizontal syncs. Each horizontal sync is preferably detected at the leading edge of the horizontal sync. In other embodiments, the horizontal syncs may be detected by other means. For example, the shape of the entire horizontal sync may be looked at for detection. In the preferred embodiment, the sample rate for each line of video has been converted to four times the color sub-carrier frequency (4Fsc) by the chroma-locked sample rate converter. The measurement of the horizontal line time is preferably done at two levels of accuracy, an integer pixel accuracy and a sub-sample accuracy.
- the integer pixel accuracy is preferably done by counting the integer number of pixels that occur between two successive sync edges.
- the sync edge is presumed to be detected when the data crosses some threshold value.
- the threshold value is chosen to represent an appropriate slicing level for horizontal sync in the 10-bit number system of the ADC; a typical value for this threshold is 128.
- the negative peak (or a sync tip) of the digitized video signal normally occurs during the sync pulses.
- the threshold level would normally be set such that it occurs at approximately the mid-point of the sync pulses.
- the threshold level may be automatically adapted by the video decoder, or it may be set explicitly via a register or other means.
- the horizontal sync tracker preferably detects the horizontal sync edge to a sub-sample accuracy of (1/16)th of a pixel in order to more accurately calculate the sample rate conversion.
- the incoming samples generally do not include a sample taken exactly at the threshold value for detecting horizontal sync edges.
- the horizontal sync tracker preferably detects two successive samples, one of which has a value lower than the threshold value and the other of which has a value higher than the threshold value.
- the sub-pixel calculation is preferably started.
- the sync edge of a horizontal sync is generally not a vertical line, but has a slope.
- the video signal goes through a low pass filter.
- the low pass filter generally decreases sharpness of the transition, i.e., the low pass filter may make the transition from a low level to a high level last longer.
- the horizontal sync tracker preferably uses a sub-sample interpolation technique to obtain an accurate measurement of sync edge location by drawing a straight line between the two successive samples of the horizontal sync signal just above and just below the presumed threshold value to determine where the threshold value has been crossed.
- the three values are the threshold level (T), the value of the sample that crossed the threshold level (V2) and the value of the previous sample that did not cross the threshold level (V1).
- the sub-sample value is the ratio (T − V1)/(V2 − V1). In the present embodiment a division is not performed.
- the difference (V2 − V1) is divided by 16 to make a variable called DELTA.
- V1 is then incremented by DELTA until it exceeds the threshold T.
- the number of times that DELTA is added to V1 in order to make it exceed the threshold (T) is the sub-pixel accuracy in terms of 1/16th of a pixel.
- the threshold value T is presumed to be 146 scale levels, and if the values V1 and V2 of the two successive samples are 140 and 156, respectively, the DELTA is calculated to be 1, and the crossing of the threshold value is determined through interpolation to be six DELTAs away from the first of the two successive samples.
- if the sample with value 140 is the nth sample and the sample with value 156 is the (n+1)th sample, the (n + 6/16)th sample would have had the threshold value.
- a fractional sample, i.e., a 6/16 sample, is added to the number of samples counted between two successive horizontal syncs.
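The division-free DELTA scheme above can be sketched directly (a hypothetical Python helper; the max(1, …) guard against a zero step on very shallow edges is an addition, not from the text):

```python
def subpixel_sync_offset(v1, v2, threshold):
    """Locate a horizontal sync edge to 1/16-pixel accuracy between two
    successive samples straddling the threshold: DELTA is (V2 - V1)/16,
    and DELTA is added to V1 until the threshold is reached; the count is
    the edge offset in sixteenths of a pixel past the V1 sample."""
    delta = max(1, (v2 - v1) // 16)  # guard against a zero step (an addition)
    count = 0
    while v1 < threshold:
        v1 += delta
        count += 1
    return count

# The text's example: T = 146, V1 = 140, V2 = 156 gives six DELTAs of 1.
print(subpixel_sync_offset(140, 156, 146))  # 6
```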
- the sample rate converter module In order to sample rate convert the current number of input pixels Pin to the desired output pixels Pout, the sample rate converter module has a sample rate conversion ratio of Pin/Pout.
- the sample rate converter module in the preferred embodiment of the line-locked sample rate converter is a polyphase filter with time-varying coefficients. There is a fixed number of phases (I) in the polyphase filter. In the preferred embodiment, the number of phases (I) is 33.
- the control for the polyphase filter is the decimation rate (d_act) and a reset phase signal.
- the line measurement Pin is sent to a module that converts it to a decimation rate d_act such that I/d_act (33/d_act) is equal to Pout/Pin.
- the results are the same, but there are savings to hardware.
- the current line length Pin will have a relatively small variance with respect to the nominal line length. Pin is nominally 910. It typically varies by less than 62 samples. For NTSC, this variation is less than 5 microseconds.
- the difference (Pin − Pin_nominal) may be represented by fewer bits than are required to represent Pin, so a smaller multiplier can be used.
- d_act_nominal is 35 and Pin_nominal is 910.
- the value (I/Pout)*(Pin − Pin_nominal) may now be called delta_dec (delta decimation rate) or a second adjustment value.
- the conversion rate applied preferably is 33/(35+delta_dec) where the samples are interpolated by 33 and decimated by (35+delta_dec).
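The per-line adjustment can be sketched as follows (a hypothetical helper; the default constants are the NTSC/ITU-R BT.601 values given above, and Pin may carry a fractional part from the sub-pixel sync measurement):

```python
def delta_dec(pin, pout=858, i_phases=33, pin_nominal=910):
    """Per-line decimation-rate adjustment for the line-locked SRC.
    The full decimation rate is d_act = i_phases * pin / pout, nominally
    33 * 910 / 858 = 35, so only the small difference from nominal needs
    to be multiplied: delta_dec = (I/Pout) * (Pin - Pin_nominal)."""
    return (i_phases / pout) * (pin - pin_nominal)

# A nominal 910-sample NTSC line needs no adjustment; a longer line needs a
# positive delta_dec to keep the output at 858 samples per line.
print(delta_dec(910))      # 0.0
print(delta_dec(920) > 0)  # True
```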
- a horizontal sync tracker preferably detects horizontal syncs, accurately counts the number of samples between two successive horizontal syncs and generates delta_dec.
- if the number of samples between two successive horizontal syncs is greater than 910, the horizontal sync tracker generates a positive delta_dec to keep the output sample rate at 858 samples per horizontal line. On the other hand, if the number of samples between two successive horizontal syncs is less than 910, the horizontal sync tracker generates a negative delta_dec to keep the output sample rate at 858 samples per horizontal line.
- for PAL standard video, the horizontal sync tracker generates the delta_dec to keep the output sample rate at 864 samples per horizontal line.
- the position of each horizontal sync pulse is determined to sub-pixel accuracy by interpolating between two successive samples, one of which is immediately below the threshold value and the other immediately above it.
- the number of samples between the two successive horizontal sync pulses is preferably calculated to sub-sample accuracy by determining the positions of two successive horizontal sync pulses, both to sub-pixel accuracy.
- the horizontal sync tracker preferably uses the difference between 910 and the number of samples between two successive horizontal syncs to reduce the amount of hardware needed.
- the decimation rate adjustment value, delta_dec which is calculated for each line, preferably goes through a low pass filter before going to the sample rate converter module.
- One of the benefits of this method is filtering of variations in the line lengths of adjacent lines where the variations may be caused by noise that affects the accuracy of the measurement of the sync pulse positions.
- the input sample clock is not free running, but is instead line-locked to the input analog video, preferably 27 MHz.
- the chroma-locked sample rate converter converts the 27 MHz sampled data to a sample rate of four times the color sub-carrier frequency.
- the analog video signal is demodulated to luma and chroma component video signals, preferably using a comb filter.
- the luma and chroma component video signals are then sent to the line-locked sample rate converter where they are preferably converted to a sample rate of 13.5 MHz.
- the 13.5 MHz sample rate at the output may be exactly one-half of the 27 MHz sample rate at the input.
- the conversion ratio of the line-locked sample rate converter is preferably exactly one-half of the inverse of the conversion ratio performed by the chroma-locked sample rate converter.
- the line-locked SRC 704 preferably includes an SRC 770 which preferably is a polyphase filter with time varying coefficients.
- the number of phases is preferably fixed at 33 while the nominal decimation rate is 35.
- the conversion ratio used is preferably 33/(35+delta_dec) where delta_dec may be positive or negative.
- the delta_dec is a second adjustment value, which is used to adjust the decimation rate of the second sample rate converter.
- the actual decimation rate and phase are automatically adjusted for each horizontal line so that the number of samples per horizontal line is 858 (720 active Y samples and 360 active U and V samples) and the phase of the active video samples is aligned properly with the horizontal sync signals.
- the decimation (down-sampling) rate of the SRC is preferably controlled by a horizontal sync tracker 772 .
- the horizontal sync tracker adjusts the decimation rate once per horizontal line in order to result in a correct number and phase of samples in the interval between horizontal syncs.
- the horizontal sync tracker preferably provides the adjusted decimation rate to the SRC 770 to adjust the conversion ratio.
- the decimation rate is preferably calculated to achieve a sub-sample accuracy of 1/16.
- the line-locked SRC 704 also includes a YUV scaler 780 to scale YUV components to the proper amplitudes required by ITU-R BT.601.
- the time base corrector preferably synchronizes the samples having the line-locked sample rate of nominally 13.5 MHz to the display clock that runs nominally at 13.5 MHz. Since the samples at the output of the TBC are synchronized to the display clock, passthrough video may be provided to the video compositor without being captured first.
- the composite video may be sampled in any conventional way with a clock rate that is generally used in the art.
- the composite video is sampled initially at 27 MHz, down sampled to the sample rate of 14.318 MHz by the chroma-locked SRC, and then down sampled to the sample rate of nominally 13.5 MHz by the line-locked SRC.
- for timing, the video decoder uses the 27 MHz clock that was used for input sampling.
- the 27 MHz clock, being free-running, is locked neither to the line rate nor to the chroma frequency of the incoming video.
- the decoded video samples are stored in a FIFO the size of one display line of active video at 13.5 MHz, i.e., 720 samples with 16 bits per sample or 1440 bytes.
- the maximum delay amount of this FIFO is one display line time with a normal, nominal delay of one-half a display line time.
- video samples are outputted from the FIFO at the display clock rate that is nominally 13.5 MHz. Except for vertical syncs of the input video, the display clock rate is unrelated to the timing of the input video. In alternate embodiments, larger or smaller FIFOs may be used.
- although the effective sample rate and the display clock rate are both nominally 13.5 MHz, the rate of the sampled video entering the FIFO and the display rate are generally different. This discrepancy is due to differences between the actual frequencies of the effective input sample rate and the display clock.
- the effective input sample rate is nominally 13.5 MHz but it is locked to operate at 858 times the line rate of the video input, while the display clock operates nominally at 13.5 MHz independently of the line rate of the video input.
- video is displayed with an initial delay of one-half a horizontal line time at the start of every field. This allows the input and output rates to differ up to the point where the input and output horizontal phases may change by up to one-half a horizontal line time without causing any glitches at the display.
- the FIFO is preferably filled up to approximately one-half full during the first active video line of every field prior to taking any output video.
- the start of each display field follows the start of every input video field by a fixed delay that is approximately equal to one-half the amount of time for filling the entire FIFO.
- the initial delay at the start of every field is one-half a horizontal line time in this embodiment, but the initial delay may be different in other embodiments.
- the time base corrector (TBC) 72 includes a TBC controller 164 and a FIFO 166 .
- the FIFO 166 receives an input video 714 at nominally 13.5 MHz locked to the horizontal line rate of the input video and outputs a delayed input video as an output video 716 that is locked to the display clock that runs nominally at 13.5 MHz.
- the TBC controller 164 preferably generates a vertical sync (VSYNC) for display that is delayed by one-half a horizontal line from an input VSYNC.
- the TBC controller 164 preferably also generates timing signals such as NTSC or PAL standard timing signals.
- the timing signals are preferably derived from the VSYNC generated by the TBC controller and preferably include horizontal sync.
- the timing signals are not affected by the input video, and the FIFO is read out synchronously to the timing signals. Data is read out of the FIFO according to the timing at the display side while the data is written into the FIFO according to the input timing.
- a line reset resets the FIFO write pointer to signal a new line.
- a read pointer controlled by the display side is updated by the display timing.
- the process resets in step 782 at system start up.
- the system preferably checks for vertical sync (VSYNC) of the input video in step 784 .
- the system in step 786 preferably starts counting the number of incoming video samples.
- the system preferably loads the FIFO in step 788 continuously with the incoming video samples. While the FIFO is being loaded, the system in step 790 checks if enough samples have been received to fill the FIFO up to a half full state.
- when enough samples have been received to fill the FIFO to the half full state, the system in step 792 preferably generates timing signals, including horizontal sync, to synchronize the output of the TBC to the display clock.
- the system in step 794 preferably outputs the content of the FIFO continuously in sync with the display clock.
- the system in step 796 preferably checks for another input VSYNC. When another input vertical sync is detected, the process starts counting the number of input video samples again and starts outputting output video samples when enough input video samples have been received to make the FIFO half full.
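The TBC flow of steps 782 through 796 can be modeled roughly as below. The class, its one-line FIFO size, and the single-sample read/write interface are simplifications for illustration; the essential behavior from the text is that output starts only once the FIFO is half full after each input VSYNC, giving a nominal half-line delay.

```python
from collections import deque

class TimeBaseCorrector:
    """Simplified model of the TBC: after each input VSYNC, incoming
    samples fill a one-line FIFO; output (in the display clock
    domain) begins once the FIFO is half full."""
    def __init__(self, line_samples=720):
        self.fifo = deque(maxlen=line_samples)
        self.half = line_samples // 2
        self.outputting = False

    def vsync(self):
        # new input field: restart the fill-to-half-full phase
        self.fifo.clear()
        self.outputting = False

    def write(self, sample):               # input-clock domain
        self.fifo.append(sample)
        if not self.outputting and len(self.fifo) >= self.half:
            self.outputting = True         # step 792: start display timing

    def read(self):                        # display-clock domain
        if self.outputting and self.fifo:
            return self.fifo.popleft()
        return None                        # still filling
```

In the real device the two sides run on independent clocks; here the half-full threshold simply gates when reads may begin.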
- the FIFO size may be smaller or larger.
- the minimum acceptable size is determined by the maximum expected difference between the video source sample rate and the display sample rate. Larger FIFOs allow for greater variations in sample rate timing, albeit at greater expense.
- the logic that generates the sync signal that initiates display video fields should incur a delay from the input video timing of one-half the delay of the entire FIFO as described above. However, it is not required that the delay be one-half the delay of the entire FIFO.
- a video scaler performs both scaling-up and scaling-down of either digital video or digitized analog video.
- the video scaler is preferably configured such that it can be used for either scaling down the size of video images prior to writing them to memory or for scaling up the size of video images after reading them from memory.
- the size of the video images is preferably downscaled prior to being written to memory so that the memory usage and the memory bandwidth demands are minimized. For similar reasons, the size of the video images is preferably upscaled after reading them from memory.
- the video scaler is preferably in the signal path between a video input and a write port of a memory controller.
- the video scaler is preferably in the signal path between a read port of the memory controller and a video compositor. Therefore, the video scaler may be seen to exist in two distinct logical places in the design, while in fact occupying only one physical implementation.
- This function is preferably achieved by arranging a multiplexing function at the input of the scaling engine, with one input to the multiplexer being connected to the video input port and the other connected to the memory read port.
- the memory write port is arranged with a multiplexer at its input, with one input to the multiplexer connected to the output of the scaling engine and the other connected to the video input port.
- the display output port is arranged with a multiplexer at its input, with one input connected to the output of the scaling engine and the other connected to the output of the memory read port.
- the video scaling engine uses a clock that is selected between the video input clock and the display output clock (display clock).
- the clock selection uses a glitch-free clock selection logic, i.e. a circuit that prevents the creation of extremely narrow clock pulses when the clock selection is changed.
- the read and write interfaces to memory both use asynchronous interfaces using FIFOs, so the memory clock domain may be distinct from both the video input clock domain and the display output clock domain.
- a flow diagram illustrates a process of alternatively upscaling or downscaling the video input 800 .
- the system in step 802 preferably selects between a downscaling operation and an upscaling operation. If the downscaling operation is selected, the system in step 804 preferably downscales the input video prior to capturing the input video in memory in step 806 . If the upscaling operation is selected in step 802 , the system in step 806 preferably captures the input video in memory without scaling it.
- if the downscaling operation was selected, the system in step 808 outputs the downscaled video as downscaled output 810.
- if the upscaling operation was selected, the system in step 808 sends the non-scaled video along the upscale path to be upscaled in step 812.
- the system in step 812 upscales the non-scaled video and outputs it as upscaled video output 814 .
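The alternative paths of steps 802 through 814 can be sketched as below. The function, its `scaler` and `memory` arguments, and the toy scalers are hypothetical stand-ins for the scaler engine and the capture/video FIFOs; the point illustrated from the text is that downscaling happens before capture and upscaling after readback, so one physical scaler serves both logical positions.

```python
def scale_pipeline(frame, mode, scaler, memory):
    """Route one frame through the downscale or upscale path."""
    if mode == "down":
        scaled = scaler(frame)        # step 804: downscale first...
        memory.append(scaled)         # step 806: ...then capture
        return memory.pop()           # step 808: output downscaled video
    if mode == "up":
        memory.append(frame)          # step 806: capture unscaled
        return scaler(memory.pop())   # step 812: upscale after readback
    raise ValueError("mode must be 'down' or 'up'")

halve = lambda line: line[::2]        # toy downscaler: drop alternate pixels
downscaled = scale_pipeline([1, 2, 3, 4], "down", halve, [])
```

Capturing the smaller (downscaled) image, and deferring enlargement until after readback, is what minimizes memory usage and bandwidth in both modes.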
- the video pipeline preferably supports up to one scaled video window and one passthrough video window, plus one background color, all of which are logically behind the set of graphics windows.
- the order of these windows, from back to front, is fixed as background, then passthrough, then scaled video.
- the video windows are preferably always in YUV format, although they can be in either 4:2:2 or 4:2:0 variants of YUV. Alternatively they can be in RGB or other formats.
- the digital video or the digitized analog video is provided to a video compositor using one of three signal paths, depending on processing requirements.
- the digital video and the digitized analog video are provided to the video compositor as passthrough video over a passthrough path, as upscaled video over an upscale path and a downscaled video over a downscale path.
- Either of the digital video or the analog video may be provided to the video compositor as the passthrough video while the other of the digital video or the analog video is provided as an upscaled video or a downscaled video.
- the digital video may be provided to the video compositor over the passthrough path while, at the same time, the digitized analog video is downscaled and provided to the video compositor over the downscale path as a video window.
- the scaler engine may upscale video in either the vertical or horizontal axis while downscaling video in the other axis.
- an upscale operation and a downscale operation on the same axis are not performed at the same time since only one filter is used to perform both upscaling and downscaling for each axis.
- a single video scaler 52 preferably performs both the downscaling and upscaling operations.
- signals of the downscale path only are illustrated.
- the video scaler 52 includes a scaler engine 182 , a set of line buffers 178 , a vertical coefficient memory 180 A and a horizontal coefficient memory 180 B.
- the scaler engine 182 is implemented as a set of two polyphase filters, one for each of horizontal and vertical dimensions.
- the vertical polyphase filter is a four-tap filter with programmable coefficients from the vertical coefficient memory 180 A. In other embodiments, the number of taps in the vertical polyphase filter may vary. In one embodiment of the present invention, the horizontal polyphase filter is an eight-tap filter with programmable coefficients from the horizontal coefficient memory 180 B. In other embodiments, the number of taps in the horizontal polyphase filter may vary.
- the vertical and the horizontal coefficient memories may be implemented in SRAM or any other suitable memory.
- appropriate filter coefficients are used, respectively, from the vertical and horizontal coefficient memories. Selection of filter coefficients for scaling-up and scaling-down operations is well known in the art.
- the set of line buffers 178 are used to provide input of video data to the horizontal and vertical polyphase filters.
- three line buffers are used, but the number of the line buffers may vary in other embodiments.
- each of the three line buffers is used to provide an input to one of the taps of the vertical polyphase filter with four taps.
- the input video is provided to the fourth tap of the vertical polyphase filter.
- a shift register having eight cells in series is used to provide inputs to the eight taps of the horizontal polyphase filter, each cell providing an input to one of the eight taps.
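As a concrete illustration of how the three line buffers plus the live input line feed the four vertical taps, here is a sketch of one output line. The function name is hypothetical and the coefficient values are placeholders, not the programmed coefficients held in the vertical coefficient memory.

```python
def vertical_polyphase_tap4(line_buffers, current_line, coeffs):
    """One output line of a 4-tap vertical polyphase filter: three
    previously stored lines come from the line buffers, and the
    fourth tap is fed directly by the incoming line.  `coeffs` holds
    one phase's four coefficients, assumed normalized to sum to 1."""
    taps = list(line_buffers) + [current_line]   # 3 buffered + 1 live line
    width = len(current_line)
    return [sum(c * taps[t][x] for t, c in enumerate(coeffs))
            for x in range(width)]

# Example: a simple averaging phase (coefficients 1/4 each)
lines = [[0, 0], [4, 8], [8, 16]]
out = vertical_polyphase_tap4(lines, [4, 8], [0.25, 0.25, 0.25, 0.25])
```

In hardware a different coefficient set is fetched from the coefficient memory for each output phase; the horizontal eight-tap filter works analogously with its shift register supplying the taps.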
- a digital video signal 820 and a digitized analog signal video 822 are provided to a first multiplexer 168 as first and second inputs.
- the first multiplexer 168 has two outputs.
- a first output of the first multiplexer is provided to the video compositor as a pass through video 186 .
- a second output of the first multiplexer is provided to a first input of a second multiplexer 176 in the downscale path.
- the second multiplexer 176 provides either the digital video or the digitized analog video at the second multiplexer's first input to the video scaler 52 .
- the video scaler provides a downscaled video signal to a second input of a third multiplexer 162 .
- the third multiplexer provides the downscaled video to a capture FIFO 158 which stores the captured downscaled video.
- the memory controller 126 takes the captured downscaled video and stores it as a captured downscaled video image into a video FIFO 148 .
- An output of the video FIFO is coupled to a first input of a fourth multiplexer 188 .
- the fourth multiplexer provides the output of the video FIFO, which is the captured downscaled video image, as an output 824 to the graphics compositor, and this completes the downscale path.
- the digital video or the digitized analog video is downscaled first, and then captured.
- FIG. 26 is similar to FIG. 25 , but in FIG. 26 , signals of the upscale path are illustrated.
- the third multiplexer 162 provides either the digital video 820 or the digitized analog video 822 to the capture FIFO 158, which captures and stores the input as a captured video image.
- This captured video image is provided to the memory controller 126, which takes it and provides it to the video FIFO 148, which stores the captured video image.
- An output of the video FIFO 148 is provided to a second input of the second multiplexer 176 .
- the second multiplexer provides the captured video image to the video scaler 52 .
- the video scaler scales up the captured video image and provides it to a second input of the fourth multiplexer 188 as an upscaled captured video image.
- the fourth multiplexer provides the upscaled captured video image as the output 824 to the video compositor.
- the digital video or the digitized analog video is captured first, and then upscaled.
- FIG. 27 is similar to FIG. 25 and FIG. 26 , but in FIG. 27 , signals of both the upscale path and the downscale path are illustrated.
- the graphics display system of the present invention is capable of processing an analog video signal, a digital video signal and graphics data simultaneously.
- the analog and digital video signals are processed in the video display pipeline while the graphics data is processed in the graphics display pipeline.
- the video compositor receives video and graphics data from the video display pipeline and the graphics display pipeline, respectively, and outputs to the video encoder (“VEC”).
- the system may employ a method of compositing a plurality of graphics images and video, which includes blending the plurality of graphics images into a blended graphics image, combining a plurality of alpha values into a plurality of composite alpha values, and blending the blended graphics image and the video using the plurality of composite alpha values.
- in step 904, the video compositor blends the passthrough video and the background color with the scaled video window, using the alpha value which is associated with the scaled video window.
- the result of this blending operation is then blended with the output of the graphics display pipeline.
- the graphics output has been pre-blended in the graphics blender in step 904 and filtered in step 906 , and blended graphics contain the correct alpha value for multiplication by the video output.
- the output of the video blend function is multiplied by the video alpha which is obtained from the graphics pipeline and the resulting video and graphics pixel data stream are added together to produce the final blended result.
- each layer is blended with the composition of all of the layers behind it, beginning with L 2 being blended on top of L 1 .
- the alpha values {A(i)} are in general different for every layer and for every pixel of every layer. However, in some important applications it is not practical to apply this formula directly, since some layers may need to be processed in spatial dimensions (e.g. two-dimensional filtering or scaling) before they can be blended with the layer or layers behind them. While it is generally possible to blend the layers first and then perform the spatial processing, doing so would also process the layers behind the subject layer, which should not be processed. Processing of the layers that are not to be processed may be undesirable.
- Processing the subject layer first would generally require a substantial amount of local storage of the pixels in the subject layer, which may be prohibitively expensive. This problem is significantly exacerbated when there are multiple layers to be processed in front of one or more layers that are not to be processed. In order to implement the formula above directly, each of the layers would have to be processed first, i.e. using their own local storage and individual processing, before they could be blended with the layer behind.
- all of the layers that are to be processed are layered together first, even if there is one or more layers behind them over which they should be blended, and the combined upper layers are then blended with the other layers that are not to be processed.
- layers {1, 2 and 3} may be layers that are not to be processed, while layers {4, 5, 6, 7 and 8} may be layers that are to undergo processing, while all 8 layers are to be blended together, using {A(i)} values that are independent for every layer and pixel.
- the layers that are to be filtered (the upper layers) may be the graphics windows.
- the lower layers may include the video window and passthrough video.
- all of the layers that are to be filtered are blended together from back to front using a partial blending operation.
- two or more of the upper layers may be blended together in parallel.
- the back-most of the upper layers is not in general the back-most layer of the entire operation.
- an intermediate alpha value is maintained for later use for blending with the layers that are not to be filtered (referred to as the “lower” layers).
- R(i) = A(i)*P(i) + (1 - A(i))*P(i-1)
- AR(i) = AR(i-1)*(1 - A(i))
- R(i) represents the color value of the resulting blended pixel
- P(i) represents the color value of the current pixel
- A(i) represents the alpha value of the current pixel
- P(i-1) represents the value at the location of the current pixel of the composition of all of the upper layers behind the current pixel; initially this represents black, before any layers are blended
- AR(i) is the alpha value resulting from each instance of this operation
- AR(i-1) represents the intermediate alpha value at the location of the current pixel determined from all of the upper layers behind the current pixel; initially this represents transparency, before any layers are blended.
- AR represents the alpha value that will subsequently be multiplied by the lower layers as indicated below, and so an AR value of 1 (assuming alpha ranges from 0 to 1) indicates that the current pixel is transparent and the lower layers will be fully visible when multiplied by 1.
- the pixels of the current layer are blended using the current alpha value, and also an intermediate alpha value is calculated as the product (1 - A(i))*(AR(i-1)).
- the composite alpha value for each pixel of blended graphics may be calculated directly as the product of all (1-alpha value of the corresponding pixel of the graphics image on each layer)'s without generating an intermediate alpha at each stage.
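A small sketch showing that accumulating the intermediate alpha stage by stage (AR(i) = AR(i-1)*(1 - A(i)), starting from transparency, AR = 1) yields the same composite alpha as the direct product just described; the function names are illustrative.

```python
from functools import reduce

def composite_alpha_iterative(alphas):
    """Intermediate alpha carried stage by stage:
    AR(i) = AR(i-1)*(1 - A(i)), starting from transparency (AR = 1)."""
    ar = 1.0
    for a in alphas:
        ar *= (1.0 - a)
    return ar

def composite_alpha_direct(alphas):
    """Direct product of (1 - alpha) over all layers, with no
    per-stage intermediate values."""
    return reduce(lambda acc, a: acc * (1.0 - a), alphas, 1.0)

# Any opaque layer (alpha = 1) drives the composite alpha to 0,
# fully hiding the lower layers behind the blended graphics.
alphas = [0.5, 0.25, 0.0, 1.0]
```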
- the upper layers may be processed as desired and then the result of this processing, a composite intermediate image, is blended with the lower layer or layers.
- the resulting alpha values preferably are also processed in essentially the same way as the image components.
- the lower layers can be blended in the conventional fashion, so at some point there can be a single image representing the lower layers. Therefore two images, one representing the upper layers and one representing the lower layers, can be blended together. In this operation, the AR(n) value at each pixel that results from the blending of the upper layers and any subsequent processing is multiplied with the composite lower layer.
- a series of images makes up the upper layers. These are created by reading pixels from memory, as in a conventional graphics display device. Each pixel is converted into a common format if it is not already in that format; in this example the YUV format is used. Each pixel also has an alpha value associated with it.
- the alpha values can come from a variety of sources, including: (1) being part of the pixel value read from memory; (2) an element in a color look-up table (CLUT) in cases where the pixel format uses a CLUT; (3) being calculated from the pixel color value, e.g. alpha as a function of Y; (4) being calculated using a keying function, i.e. some pixel values are transparent (i.e. alpha value 0); (5) an alpha value associated with a region of the image described externally, e.g. a rectangular region, described by the four corners of the rectangle, may have a single alpha value associated with it; or (6) some combination of these.
- the upper layers are preferably composited in memory storage buffers called line buffers.
- Each line buffer preferably is sized to contain pixels of one scan line.
- Each line buffer has an element for each pixel on a line, and each pixel in the line buffer has elements for the color components, in this case Y, U and V, and one for the intermediate alpha value AR.
- Each pixel of the current layer on the current line is combined with the value pre-existing in the line buffer using the formulas already described, i.e.,
- R(i) = A(i)*P(i) + (1 - A(i))*P(i-1), and AR(i) = AR(i-1)*(1 - A(i)).
- the color value of the current pixel P(i) is multiplied by its alpha value A(i), and the pixel in the line buffer representing the same location on the line, P(i-1), is read from the line buffer, multiplied by (1 - A(i)), and added to the previous result, producing the resulting pixel value R(i).
- the alpha value at the same location in the line buffer, AR(i-1), is read from the buffer and multiplied by (1 - A(i)), producing AR(i).
- the results R(i) and AR(i) are then written back to the line buffer in the same location.
- 128 is subtracted from the U and V values before multiplying by alpha, and then 128 is added to the result.
- alternatively, U and V values are directly multiplied by alpha, and it is ensured that at the end of the entire compositing process all of the coefficients multiplied by U and V sum to 1, so that the 128 offset value is not significantly distorted.
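The per-pixel compositing into the line buffer can be sketched as below for a single color component (for U and V, the 128 offset would first be subtracted and re-added afterward, per the text). The function name and the (color, AR) tuple layout are illustrative choices, not the hardware's storage format.

```python
def blend_into_line_buffer(line_buf, layer, alpha):
    """Composite one layer's scan line into the line buffer, back to
    front, per the formulas:
        R(i)  = A(i)*P(i) + (1 - A(i))*P(i-1)
        AR(i) = AR(i-1)*(1 - A(i))
    line_buf holds (color, AR) per pixel; `layer` and `alpha` hold
    the current layer's per-pixel color and alpha."""
    for x, (p_prev, ar_prev) in enumerate(line_buf):
        a = alpha[x]
        r = a * layer[x] + (1.0 - a) * p_prev
        line_buf[x] = (r, ar_prev * (1.0 - a))
    return line_buf

# Start from black, fully transparent (AR = 1), then blend one
# half-transparent layer of value 200 across a 4-pixel line.
buf = [(0.0, 1.0)] * 4
blend_into_line_buffer(buf, [200.0] * 4, [0.5] * 4)
```

Repeating the call for each successive layer, back-most first, reproduces the back-to-front compositing described above, with AR left ready for multiplication by the lower layers.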
- Each of the layers in the group of upper layers is preferably composited into a line buffer starting with the back-most of the upper layers and progressing towards the front until the front-most of the upper layers has been composited into the line buffer.
- a single hardware block i.e., the display engine, may be used to implement the formula above for all of the upper layers.
- the graphics compositor engine preferably operates at a clock frequency that is substantially higher than the pixel display rate. In one embodiment of the present invention, the graphics compositor engine operates at 81 MHz while the pixel display rate is 13.5 MHz.
- This process repeats for all of the lines in the entire image, starting at the top scan line and progressing to the bottom.
- the scan line becomes available for use in processing such as filtering or scaling.
- processing may be performed while subsequent scan lines are being composited into other line buffers.
- Various processing operations may be selected such as anti-flutter filtering and vertical scaling.
- more than one graphics layer may be composited simultaneously, and in some such embodiments it is not necessary to use line buffers as part of the compositing process. If all upper layers are composited simultaneously, the combination of all upper layers can be available immediately without the use of intermediate storage.
- step 922 the system preferably checks for a vertical sync (VSYNC). If a VSYNC has been received, the system in step 924 preferably loads a line from the bottom most graphics window into a graphics line buffer. Then the system in step 926 preferably blends a line from the next graphics window into the line buffer. Then the system in step 928 preferably determines if the last graphics window visible on a current display line has been blended. If the last graphics window has not been blended, the system continues on with the blending system in step 926 .
- the system preferably checks in step 930 to determine if the last graphics line of a current display field has been blended. If the last graphics line has been blended, the system awaits another VSYNC in step 922 . If the last graphics line has not been blended, the system goes to the next display line in step 932 and repeats the blending process.
- a background color preferably is also blended in one embodiment of the present invention.
- the video compositor preferably displays each pixel as it is composited, without saving pixels to a frame buffer or other memory.
- the system in step 958 preferably displays the passthrough video 954 outside the active window area first.
- An active window area of the NTSC standard television is inside an NTSC frame.
- An active window area of the PAL standard television is inside a PAL frame.
- the system in step 960 preferably blends the background color first.
- the system in step 962 preferably blends the portion of the passthrough video that falls within the active window area.
- the system in step 964 preferably blends the video window.
- the system in step 968 blends the graphics window on top of the composited video window and outputs composited video 970 for display.
- Interlaced displays, such as televisions, are prone to apparent vertical motion at the horizontal edges of displayed objects.
- This apparent vertical motion is variously referred to as flutter, flicker, or judder.
- One embodiment of the present invention includes a method of reducing interlace flutter via automatic blending.
- This method has been designed for use in a graphics display device that composites visible objects directly onto the screen; for example, the device may use windows, window descriptors and window descriptor lists, or similar mechanisms.
- the top and bottom edges (first and last scan lines) of each object (or window) are displayed such that the alpha blend value (alpha blend factor) of these edges is adjusted to be one-half of what it would be if these same lines were not the top and bottom lines of the window.
- a window may constitute a rectangular shape, and the window may be opaque, i.e. its alpha blend factor is 1 on a scale of 0 to 1. All lines of this window except the first and last are opaque when the window is rendered. The top and bottom lines are adjusted so that, in this case, the alpha blend value becomes 0.5, thereby causing these lines to be mixed 50% with the images behind them. This function occurs automatically in the preferred implementation. Since in the preferred implementation windows are rectangular objects that are rendered directly onto the screen, the locations of the top and bottom lines of every window are already known.
- the function of dividing the alpha blend values for the top and bottom lines by two is implemented only for the top fields of the interlaced display. In another embodiment, the function of dividing the alpha blend values for the top and bottom lines by two is implemented only for the bottom fields of the interlaced display.
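A sketch of the alpha-halving rule; the function name is hypothetical, and whether the halving applies to top fields, bottom fields, or both is embodiment-dependent, as noted above.

```python
def window_line_alpha(window_alpha, line, first_line, last_line):
    """Anti-flutter blending rule: the alpha blend factor of a
    window's top and bottom scan lines is half its normal value, so
    those edge lines mix 50% (for an opaque window) with whatever
    lies behind them.  Alpha is on a 0..1 scale."""
    if line == first_line or line == last_line:
        return window_alpha / 2.0
    return window_alpha

# Opaque window (alpha 1.0) spanning lines 10..20:
edge = window_line_alpha(1.0, 10, 10, 20)   # halved at the top edge
mid = window_line_alpha(1.0, 15, 10, 20)    # unchanged in the interior
```

Because window edge positions are already known in a descriptor-based compositor, this requires no extra memory bandwidth and no vertical filtering.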
- the window is solid opaque white, and the image behind it is solid opaque black.
- At the top and bottom edges of the window there would be a sharp contrast between black and white, and when displayed on an interlaced TV, significant flutter would be visible.
- the top and bottom lines are blended 50% with the background, resulting in a color that is halfway between black and white, or gray.
- the apparent visual location of the top and bottom edges of the object is constant, and flutter is not apparent. The same effect applies equally well for other image examples.
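The edge blending described above can be illustrated with a small sketch (the helper names and values are hypothetical, not from the patent):

```python
def edge_alpha(line_index, window_height, alpha):
    """Halve the alpha blend factor on the first and last lines of a
    window (illustrative helper for the automatic anti-flutter blend)."""
    if line_index == 0 or line_index == window_height - 1:
        return alpha / 2.0
    return alpha

def blend(fg, bg, alpha):
    # standard alpha blend: result = alpha*fg + (1 - alpha)*bg
    return alpha * fg + (1.0 - alpha) * bg

# Opaque white window (255) over a black background (0), 5 lines tall.
window_height = 5
lines = [blend(255, 0, edge_alpha(i, window_height, 1.0))
         for i in range(window_height)]
```

With the edge alpha halved, the first and last lines of the opaque white window render as mid-gray while the interior lines stay white, so each interlaced field shows the same apparent edge position.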
- the method of reducing interlace flutter of this embodiment does not require any increase in memory bandwidth, as the alternate field (the one not currently being displayed) is not read from memory, and there is no need for vertical filtering, which would have required logic and on-chip memory.
- graphic objects can be composited into the frame buffer with an alpha blend value that is adjusted to one-half of its normal value at the top and bottom edges of each object.
- Such blending can be performed in software or in a blitter that has a blending capability.
- the vertical filtering and anti-flutter filtering are performed on blended graphics by one graphics filter.
- One function of the graphics filter is low pass filtering in the vertical dimension.
- the low pass filtering may be performed in order to minimize the “flutter” effect inherent in interlaced displays such as televisions.
- the vertical downscaling or upscaling operation may be performed in order to change the pixel aspect ratio from the square pixels that are normal for computer, Internet and World Wide Web content into any of the various oblong aspect ratios that are standard for televisions as specified in ITU-R 601B.
- the system preferably includes seven line buffers. This allows four line buffers to be used for filtering and scaling, two to be available for advancing by one or two lines at the end of every line, and one to hold the current compositing operation.
- the alpha values in the line buffers are filtered or scaled in the same way as the YUV values, ensuring that the resulting alpha values correctly represent the desired alpha values at the proper location. Either or both of these operations, or neither, or other processing, may be performed on the contents of the line buffers.
- the result is the completed set of upper layers with the associated alpha value (the product of (1 − A(i))).
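The accumulation of upper layers can be sketched as follows, tracking the blended color and the running product of (1 − A(i)) over the layers (an illustrative software model, not the hardware's arithmetic):

```python
def composite_layers(layers):
    """Composite a list of (color, alpha) layers, topmost layer first.
    Returns the blended color of the stack and the remaining
    transparency, i.e. the product of (1 - A(i)) over all layers."""
    color = 0.0
    transparency = 1.0  # running product of (1 - A(i))
    for c, a in layers:
        color += transparency * a * c   # layer contributes through what is above it
        transparency *= (1.0 - a)
    return color, transparency
```

For example, two 50%-alpha layers leave a remaining transparency of 0.25 for whatever lies beneath them.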
- Each of the operations described above is preferably implemented digitally using conventional ASIC technology.
- the logical operations are segmented into pipeline stages, which may require temporary storage of logic values from one clock cycle to the next.
- the choice of how many pipeline stages are used in each of the operations described above is dependent on the specific ASIC technology used, the clock speed chosen, the design tools used, and the preference of the designer, and may vary without loss of generality.
- the line buffers are implemented as dual port memories allowing one read and one write cycle to occur simultaneously, facilitating the read and write operations described above while maintaining a clock frequency of 81 MHz.
- the compositing function is divided into multiple pipeline stages, and therefore the address being read from the memory is different from the address being written to the same memory during the same clock cycle.
- Each of the arithmetic operations described above in the preferred embodiment uses 8-bit accuracy for each operand; this is generally sufficient for providing an accurate final result. Products are rounded to 8 bits before the result is used in subsequent additions.
- a block diagram illustrates an interaction between the line buffers 504 and a graphics filter 172 .
- the line buffers comprise a set of line buffers 1-7 506 a-g.
- the line buffers are controlled by a graphics line buffer controller over a line buffer control interface 502 .
- the graphics filter is a four-tap polyphase filter, so that four lines of graphics data 516 a - d are provided to the graphics filter at a time.
- the graphics filter 172 sends a line buffer release signal 516 e to the line buffers to notify that one to three line buffers are available for compositing additional graphics display lines.
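One phase of a four-tap vertical polyphase filter can be sketched as below (the coefficients are illustrative; an actual implementation selects a coefficient set per output phase):

```python
def polyphase_filter_line(lines, coeffs):
    """Apply one phase of a 4-tap vertical polyphase filter: each output
    pixel is a weighted sum of the vertically aligned pixels from four
    consecutive source lines."""
    assert len(lines) == 4 and len(coeffs) == 4
    width = len(lines[0])
    return [sum(coeffs[t] * lines[t][x] for t in range(4))
            for x in range(width)]

# Example: a gentle low-pass phase, as might be used for anti-flutter
# filtering (coefficients are hypothetical and normalized to sum to 1).
coeffs = [0.125, 0.375, 0.375, 0.125]
```

A uniform input passes through unchanged because the coefficients sum to one, which is the usual normalization for such filters.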
- line buffers are not used, but rather all of the upper layers are composited concurrently.
- there is one graphics blender for each of the upper layers active at any one pixel and the clock rate of the graphics blender may be approximately equal to the pixel display rate.
- the clock rate of the graphics blenders may be somewhat slower or faster, if FIFO buffers are used at the output of the graphics blenders.
- Line buffers may still be needed in order to implement vertical filtering or vertical scaling, as those operations typically require more than one line of the group of upper layers to be available simultaneously, although fewer line buffers are generally required here than in the preferred embodiment.
- the unified memory architecture preferably includes a memory that is shared by a plurality of devices, and a memory request arbiter coupled to the memory, wherein the memory request arbiter performs real time scheduling of memory requests from different devices having different priorities.
- the unified memory system assures real time scheduling of tasks, some of which do not inherently have pre-determined periodic behavior, and provides access to memory by requesters that are sensitive to latency and do not have determinable periodic behavior.
- two memory controllers are used in a dual memory controller system.
- the memory controllers may be 16-bit memory controllers or 32-bit memory controllers.
- Each memory controller can support different configurations of SDRAM device types and banks, or other forms of memory besides SDRAM.
- a first memory space addressed by a first memory controller is preferably adjacent and contiguous to a second memory space addressed by a second memory controller so that software applications view the first and second memory spaces as one continuous memory space.
- the first and the second memory controllers may be accessed concurrently by different clients.
- the software applications may be optimized to improve performance.
- a graphics memory may be allocated through the first memory controller while a CPU memory is allocated through the second memory controller. While a display engine is accessing the first memory controller, a CPU may access the second memory controller at the same time. Therefore, a memory access latency of the CPU is not adversely affected in this instance by memory being accessed by the display engine and vice versa.
- the CPU may also access the first memory controller at approximately the same time that the display engine is accessing the first memory controller, and the display controller can access memory from the second memory controller, thereby allowing sharing of memory across different functions, and avoiding many copy operations that may otherwise be required in conventional designs.
- In a dual memory controller system, memory requests generated by a display engine 1118 , a CPU 1120 , a graphics accelerator 1124 and an input/output module 1126 are provided to a memory select block 1100 .
- the memory select block 1100 preferably routes the memory requests to a first arbiter 1102 or to a second arbiter 1106 based on the address of the requested memory.
- the first arbiter 1102 sends memory requests to a first memory controller 1104 while the second arbiter 1106 sends memory requests to a second memory controller 1108 .
- the design of arbiters for handling requests from tasks with different priorities is well known in the art.
- the first memory controller preferably sends address and control signals to a first external SDRAM and receives a first data from the first external SDRAM.
- the second memory controller preferably sends address and control signals to a second external SDRAM and receives a second data from the second external SDRAM.
- the first and second memory controllers preferably provide first and second data received, respectively, from the first and second external SDRAMs to a device that requested the received data.
- the first and second data from the first and second memory controllers are preferably multiplexed, respectively, by a first multiplexer 1110 at an input of the display engine, by a second multiplexer 1112 at an input of the CPU, by a third multiplexer 1114 at an input of the graphics accelerator and by a fourth multiplexer 1116 at an input of the I/O module.
- the multiplexers provide either the first or the second data, as selected by memory select signals provided by the memory select block, to a corresponding device that has requested memory.
- An arbiter preferably uses an improved form of real time scheduling to meet real-time latency requirements while improving performance for latency-sensitive tasks.
- First and second arbiters may be used with the flexible real time scheduling.
- the real time scheduling is preferably implemented on both the first arbiter and the second arbiter independently.
- a real-time scheduling and arbitration scheme for unified memory is implemented, such that all tasks that use the unified memory meet their real-time requirements.
- the methodology used preferably implements real-time scheduling using Rate Monotonic Scheduling (“RMS”). It is a mathematical approach that allows the construction of provably correct schedules of arbitrary numbers of real-time tasks with arbitrary periods for each of the tasks. This methodology provides a straightforward means for proof by simulation of the worst case scenario, and this simulation is simple enough that it can be done by hand.
- RMS Rate Monotonic Scheduling
- Latency tolerance is defined as the maximum amount of time that can pass from the moment the task requests service until that task's request has been completely satisfied.
- Proof of correctness, i.e., the guarantee that all tasks meet their deadlines, is constructed by analyzing the behavior of the system when all tasks request service at exactly the same time; this time is called the “critical instant”. This is the worst case scenario; it may not occur in even a very large set of simulations of normal operation, or perhaps may never occur in normal operation, but it is presumed to be possible.
- the deadline is the maximum amount of time that may pass between the moment a task makes a request for service and the time that the service is completed, without impairing the function of the task.
- the request may occur as soon as there is enough data in the FIFO that, if service is granted immediately, the FIFO does not underflow (or overflow, in the case of a read operation supporting a data sink). If service is not completed before the FIFO overflows (or underflows in the case of a data sink), the task is impaired.
- those tasks that do not have specified real-time constraints are preferably grouped together and served with a single master task called the “sporadic server”, which itself has the lowest priority in the system.
- Arbitration within the set of tasks served by the sporadic server is not addressed by the RMS methodology, since it is not a real-time matter.
- all non-real-time tasks are served whenever there is resource available, however the latency of serving any one of them is not guaranteed.
- the period of each of the tasks is preferably determined. For those with specific bandwidth requirements (in bytes per second of memory access), the period is preferably calculated from the bandwidth and the burst size. If the deadline is different from the period for any given task, that is listed as well.
- the resource requirement when a task is serviced is listed along with the task. In this case, the resource requirement is the number of memory clock cycles required to service the memory access request.
- the tasks are sorted in order of increasing period, and the result is the set of priorities, from highest to lowest. If there are multiple tasks with the same period, they can be given different, adjacent priorities in any random relative order within the group; or they can be grouped together and served with a single priority, with round-robin arbitration between those tasks at the same priority.
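The priority assignment step described above can be sketched simply (task names and periods here are hypothetical):

```python
def rms_priorities(tasks):
    """Assign Rate Monotonic priorities: the shorter the period, the
    higher the priority. `tasks` is a list of (name, period) tuples;
    returns task names ordered from highest to lowest priority."""
    return [name for name, period in sorted(tasks, key=lambda t: t[1])]
```

Tasks sharing a period would, per the text, either take adjacent priorities in any order or share one priority with round-robin arbitration; this sketch shows only the sorting step.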
- a block out timer associated with a task that does not normally have a period, is used in order to force a bounded minimum interval, similar to a period, on that task.
- a block out timer associated with the CPU has been implemented in this embodiment. If left uncontrolled, the CPU can occupy all available memory cycles, for example by causing a never-ending stream of cache misses and memory requests. At the same time, CPU performance is determined largely by the average latency of memory access, and so CPU performance would be less than optimal if all CPU memory accesses were consigned to a sporadic server, i.e., at the lowest priority.
- the CPU task has been converted into two logical tasks.
- a first CPU task has a very high priority for low latency, and it also has a block out timer associated with it such that once a request by the CPU is made, it cannot submit a request again until the block out timer has timed out.
- the CPU task has the top priority.
- the CPU task may have a very high priority but not the top priority.
- the timer period has been made programmable for system tuning, in order to accommodate different system configurations with different memory widths or other options.
- the block out timer is started when the CPU makes a high priority request. In another embodiment, the block out timer is started when the high priority request by the CPU is serviced. In other embodiments, the block out timer may be started at any time in the interval between the time the high priority request is made and the time the high priority request is serviced.
- a second CPU task is preferably serviced by a sporadic server in a round-robin manner. Therefore if the CPU makes a long string of memory requests, the first one is served as a high priority task, and subsequent requests are served by the low priority sporadic server whenever none of the real-time tasks have requests pending, until the CPU block out timer times out.
- the graphics accelerator and the display engine are also capable of requesting more memory cycles than are available, and so they too use similar block out timers.
- the CPU read and write functions are grouped together and treated as two tasks.
- a first task has a theoretical latency bound of 0 and a period that is programmable via a block out timer, as described above.
- a second task is considered to have no period and no deadline, and it is grouped into the set of tasks served by the sporadic server via a round robin at the lowest priority.
- the CPU uses a programmable block out timer between high priority requests in this embodiment.
- a graphics display task is considered to have a constant bandwidth of 27 MB/s, i.e., 16 bits per pixel at 13.5 MHz.
- the graphics bandwidth in one embodiment of the present invention can vary widely from much less than 27 MB/s to a much greater figure, but 27 MB/s is a reasonable figure for assuring support of a range of applications.
- the graphics display task utilizes a block out timer that enforces a period of 2.37 μs between high priority requests, while additional requests are serviced on a best-effort basis by the sporadic server in a low priority round robin manner.
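The block out timer behavior can be modeled with a short sketch (a simplified software model; as noted elsewhere in the text, the hardware may start the timer on the request or on its service, depending on the embodiment):

```python
class BlockOutTimer:
    """Once a high priority request is accepted, further high priority
    requests are blocked until `period` has elapsed; blocked requests
    fall back to the low priority sporadic server."""
    def __init__(self, period):
        self.period = period
        self.expires = None  # time at which blocking ends

    def request_high_priority(self, now):
        """Return True if the request may be submitted at high priority."""
        if self.expires is None or now >= self.expires:
            self.expires = now + self.period
            return True
        return False
```

With a 3 μs period, a request at t = 0 is accepted at high priority, a request at t = 1 is blocked out, and a request at t = 3 is accepted again.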
- a CPU service request 1138 is preferably coupled to an input of a block out timer 1130 and a sporadic server 1136 .
- An output of the block out timer 1130 is preferably coupled to an arbiter 1132 as a high priority service request.
- Tasks 1 - 5 1134 a - e may also be coupled to the arbiter as inputs.
- An output of the arbiter is a request for service of a task that has the highest priority among all tasks that have a pending memory request.
- only the CPU service request 1138 is coupled to a block out timer.
- service requests from other tasks may be coupled to their respective block out timers.
- the block out timers are used to enforce a minimum interval between two successive accesses by any high priority task that is non-periodic but may require expedited servicing. Two or more such high priority tasks may be coupled to their respective block out timers in one embodiment of the present invention.
- Devices that are coupled to their respective block out timers as high priority tasks may include a graphics accelerator, a display engine, and other devices.
- low priority tasks 1140 a - d may be coupled to the sporadic server 1136 .
- these low priority tasks are handled in a round robin manner.
- the sporadic server sends a memory request 1142 to the arbiter for the next low priority task to be serviced.
- In FIG. 34 , a timing diagram illustrates CPU service requests and services in the case of a continuous CPU request 1146 .
- the CPU request is generally not continuous, but FIG. 34 has been provided for illustrative purposes.
- a block out timer 1148 is started upon a high priority service request 1149 by the CPU.
- the CPU starts making the continuous service request 1146 , and a high priority service request 1149 is first made provided that the block out timer 1148 is not running at time t0.
- the block out timer 1148 is started.
- the memory controller finishes servicing a memory request from another task.
- the CPU is first serviced at time t1.
- the duration of the block out timer is programmable.
- the duration of the block out timer may be programmed to be 3 μs.
- any additional high priority CPU request 1149 is blocked out until the block out timer times out at time t2.
- the CPU low priority request 1150 is handled by a sporadic server in a round robin manner between time t0 and time t2.
- the low priority request 1150 is active as long as the CPU service request is active. Since the CPU service request 1146 is continuous, another high priority service request 1149 is made by the CPU and the block out timer is started again as soon as the block out timer times out at time t2.
- the high priority service request made by the CPU at time t2 is serviced at time t3 when the memory controller finishes servicing another task. Until the block out timer times out at time t4, the CPU low priority request 1150 is handled by the sporadic server while the CPU high priority request 1149 is blocked out.
- Another high priority service request is made and the block out timer 1148 is started again when the block out timer 1148 times out at time t4.
- the high priority service request 1149 made by the CPU at time t4 is serviced.
- the block out timer does not time out until time t7.
- the block out timer is not in the path of the CPU low priority service request and, therefore, does not block out the CPU low priority service request.
- a low priority service request made by the CPU is handled by the sporadic server, and serviced at time t6.
- the schedule that results from the task set and priorities above is verified by simulating the system performance starting from the “critical instant”, when all tasks request service at the same time and a previously started low priority task is already underway.
- the system is proven to meet all the real-time deadlines if all of the tasks with real-time deadlines meet their deadlines.
- all tasks make new requests at every repetition of their periods, whether or not previous requests have been satisfied.
- a timing diagram illustrates an example of a critical instant analysis.
- a task 1 1156 , a task 2 1158 , a task 3 1160 and a task 4 1162 request service at the same time.
- a low priority task 1154 is being serviced. Therefore, the highest priority task, the task 1 , cannot be serviced until servicing of the low priority task has been completed.
- When the low priority task is completed at time t1, the task 1 is serviced. Upon completion of the task 1 at time t2, the task 2 is serviced. Upon completion of the task 2 at time t3, the task 3 is serviced. Upon completion of the task 3 at time t4, the task 4 is serviced. The task 4 completes at time t5, which is before the start of a next set of tasks: the task 1 at t6, the task 2 at t7, the task 3 at t8, and the task 4 at t9.
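The critical instant check described above can be sketched as a small non-preemptive simulation (the task parameters here are hypothetical, not the patent's task set):

```python
def critical_instant(tasks, blocking):
    """Simulate a fixed-priority schedule from the critical instant:
    all tasks request at time 0 while a low priority request of length
    `blocking` is already underway. `tasks` is a list of
    (name, service_cycles, deadline) in priority order; returns a dict
    of completion times, asserting that every deadline is met."""
    t = blocking  # finish the in-flight low priority request first
    done = {}
    for name, cycles, deadline in tasks:
        t += cycles
        done[name] = t
        assert t <= deadline, f"{name} misses its deadline"
    return done
```

If the highest-priority chain of completions fits inside every deadline starting from the critical instant, the schedule is correct for all less pessimistic arrival patterns.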
- a flow diagram illustrates a process of servicing memory requests with different priorities, from the highest to the lowest.
- the system in step 1170 makes a CPU read request with the highest priority. Since a block out timer is used with the CPU read request in this example, the block out timer is started upon making the highest priority CPU read request. Then the system in step 1172 makes a graphics read request. A block out timer is also used with the graphics read request, and the block out timer is started upon making the graphics read request.
- a video window read request in step 1174 and a video capture write request in step 1176 have equal priorities. Therefore, the video window read request and the video capture write request are placed in a round robin arbitration for two tasks (clients).
- the system in step 1178 and step 1180 services a refresh request and an audio read request, respectively.
- the system places the CPU read request and the graphics read request in a round robin arbitration for five tasks (clients), respectively, in step 1182 and step 1186 .
- the system in steps 1184 , 1188 and 1190 places other lowest priority tasks such as a graphics accelerator read/write request, a DMA read/write request and a CPU write request, respectively, in this round robin arbitration with five clients.
- the system according to the present invention may employ a graphics accelerator that includes memory for graphics data, the graphics data including pixels, and a coprocessor for performing vector type operations on a plurality of components of one pixel of the graphics data.
- the preferred embodiment of the graphics display system uses a graphics accelerator that is optimized for performing real-time 3D and 2D effects on graphics and video surfaces.
- the graphics accelerator preferably incorporates specialized graphics vector arithmetic functions for maximum performance with video and real-time graphics.
- the graphics accelerator performs a range of essential graphics and video operations with performance comparable to hardwired approaches, yet it is programmable so that it can meet new and evolving application requirements with firmware downloads in the field.
- the graphics accelerator is preferably capable of 3D effects such as real-time video warping and flipping, texture mapping, and Gouraud and Phong polygon shading, as well as 2D and image effects such as blending, scaling, blitting and filling.
- the graphics accelerator and its caches are preferably completely contained in an integrated circuit chip.
- the graphics accelerator of the present invention is preferably based on a conventional RISC-type microprocessor architecture.
- the graphics accelerator preferably also includes additional features and some special instructions in the instruction set.
- the graphics accelerator is based on a MIPS R3000 class processor. In other embodiments, the graphics accelerator may be based on almost any other type of processors.
- a graphics accelerator 64 receives commands from a CPU 22 and receives graphics data from main memory 28 through a memory controller 54 .
- the graphics accelerator preferably includes a coprocessor (vector coprocessor) 1300 that performs vector type operations on pixels.
- In vector type operations, the R, G, and B components, or the Y, U and V components, of a pixel are processed in parallel as the three elements of a “vector”.
- the graphics accelerator may not include the vector coprocessor, and the vector coprocessor may be coupled to the graphics accelerator instead.
- the vector coprocessor 1300 obtains pixels (3-tuple vectors) via a specialized LOAD instruction.
- the LOAD instruction preferably extracts bits from a 32-bit word in memory that contains the required bits.
- the LOAD instruction also preferably packages and converts the bits into the input vector format of the coprocessor.
- the vector coprocessor 1300 writes pixels (3-tuple vectors) to memory via a specialized STORE instruction.
- the STORE instruction preferably extracts the required bits from the accumulator (output) register of the coprocessor, converts them if required, and packs them into a 32-bit word in memory in a format suitable for other uses within the IC, as explained below.
- Formats of the 32-bit word in memory preferably include an RGB16 format and a YUV format.
- In the RGB16 format, R has 5 bits, G has 6 bits, and B has 5 bits.
- the two RGB16 half-words are selected, respectively, via the VectorLoadRGB16Left instruction and the VectorLoadRGB16Right instruction.
- the 5 or 6 bit elements are expanded through zero expansion into 8 bit components when loaded into the coprocessor input register 1308 .
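The RGB16 load described above can be sketched as follows, using the zero expansion the text describes (low-order bits filled with zeros when widening 5/6/5-bit fields to 8 bits):

```python
def load_rgb16(halfword):
    """Unpack a 16-bit RGB (5/6/5) half-word into 8-bit components via
    zero expansion, as the VectorLoadRGB16 instructions are described
    to do (an illustrative model of the bit manipulation)."""
    r = (halfword >> 11) & 0x1F   # top 5 bits
    g = (halfword >> 5) & 0x3F    # middle 6 bits
    b = halfword & 0x1F           # bottom 5 bits
    return (r << 3, g << 2, b << 3)  # widen to 8 bits, zeros in low bits
```

Note that zero expansion maps full-scale 5-bit white (31) to 0xF8 rather than 0xFF; the subsequent arithmetic operates on these 8-bit components.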
- the YUV format preferably includes YUV 4:2:2 format, which has four bytes representing two pixels packed into every 32-bit word in memory.
- the U and V elements preferably are shared between the two pixels.
- a typical packing format used to load two pixels having YUV 4:2:2 format into a 32-bit memory is YUYV, where each of first and second Y's, U and V has eight bits.
- the left pixel is preferably comprised of the first Y plus the U and V
- the right pixel is preferably comprised of the second Y plus the U and V.
- Special LOAD instructions, LoadYUVLeft and LoadYUVRight, are preferably used to extract the YUV values for the left pixel and the right pixel, respectively, and put them in the coprocessor input register 1308 .
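The left/right extraction can be sketched as below. The byte positions assume the YUYV packing described above with the first Y in the most significant byte; that ordering is an assumption for illustration.

```python
def load_yuv_left(word):
    """Extract the left pixel (first Y plus the shared U and V) from a
    32-bit YUYV word, most significant byte first (assumed layout)."""
    y0 = (word >> 24) & 0xFF
    u = (word >> 16) & 0xFF
    v = word & 0xFF
    return (y0, u, v)

def load_yuv_right(word):
    """Extract the right pixel (second Y plus the shared U and V)."""
    y1 = (word >> 8) & 0xFF
    u = (word >> 16) & 0xFF
    v = word & 0xFF
    return (y1, u, v)
```

Both pixels share the same U and V bytes, reflecting the 4:2:2 chroma sharing described above.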
- With the StoreVectorAccumulatorRGB16 instruction, the three components (R, G, and B) in the accumulator typically have 8, 10 or more significant bits each; these are rounded or dithered to create R, G, and B values with 5, 6, and 5 bits respectively, and packed into a 16-bit value. This 16-bit value is stored in memory, selecting the appropriate 16-bit half-word via the store address.
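The rounding-and-packing step can be sketched as below. The accumulator width and the round-to-nearest rule are assumptions for illustration; the hardware may round or dither as the text states.

```python
def store_rgb16(r, g, b, frac_bits=2):
    """Round accumulator components (assumed 8 + frac_bits significant
    bits each) down to 5/6/5 bits and pack them into a 16-bit value,
    roughly as StoreVectorAccumulatorRGB16 is described to do."""
    def round_to(value, total_bits, keep_bits):
        shift = total_bits - keep_bits
        rounded = (value + (1 << (shift - 1))) >> shift  # round to nearest
        return min(rounded, (1 << keep_bits) - 1)        # clamp on overflow
    r5 = round_to(r, 8 + frac_bits, 5)
    g6 = round_to(g, 8 + frac_bits, 6)
    b5 = round_to(b, 8 + frac_bits, 5)
    return (r5 << 11) | (g6 << 5) | b5
```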
- the R, G, and B components in the accumulator are rounded or dithered to create 8 bit values for each of the R, G, and B components, and these are packed into a 24 bit value.
- the 24 bit RGB value is written into memory at the memory address indicated via the store address.
- the Y, U and V components in the accumulator are dithered or rounded to create 8 bit values for each of the components.
- the StoreVectorAccumulatorYUVLeft instruction writes the Y, U and V values to the locations in the addressed memory word corresponding to the left YUV pixel, i.e. the word is arranged as YUYV, and the first Y value and the U and V values are over-written.
- the StoreVectorAccumulatorYUVRight instruction writes the Y value to the memory location corresponding to the Y component of the right YUV pixel, i.e. the second Y value in the preceding example.
- the U and V values may be combined with the U and V values already in memory creating a weighted sum of the existing and stored values and storing the result.
- the coprocessor instruction set preferably also includes a GreaterThanOREqualTo (GE) instruction.
- the GE instruction performs a greater-than-or-equal-to comparison between each element of a pair of 3-element vectors. Each element in each of the 3-element vectors has a size of one byte.
- the results of all three comparisons, one bit per each result, are placed in a result register 1310 , which may subsequently be used for a single conditional branch operation. This saves a lot of instructions (clock cycles) when performing comparisons between all the elements of two pixels.
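The GE comparison can be sketched as follows (the assignment of comparison results to bit positions in the result register is an assumption):

```python
def vector_ge(a, b):
    """Element-wise >= comparison of two 3-element byte vectors,
    packing the three one-bit results into a single value
    (bit i set if a[i] >= b[i]), as the GE instruction is described."""
    result = 0
    for i in range(3):
        if a[i] >= b[i]:
            result |= 1 << i
    return result
```

A single conditional branch can then test the packed result, instead of three separate compare-and-branch sequences per pixel pair.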
- the graphics accelerator preferably includes a data SRAM 1302 , also called a scratch pad memory, and not a conventional data cache. In other embodiments, the graphics accelerator may not include the data SRAM, and the data SRAM may be coupled to the graphics accelerator instead.
- the data SRAM 1302 is similar to a cache that is managed in software.
- the graphics accelerator preferably also includes a DMA engine 1304 with queued commands. In other embodiments, the graphics accelerator may not include the DMA engine, and the DMA engine may be coupled to the graphics accelerator instead.
- the DMA engine 1304 is associated with the data SRAM 1302 and preferably moves data between the data SRAM 1302 and main memory 28 at the same time the graphics accelerator 64 is using the data SRAM 1302 for its load and store operations.
- the main memory 28 is the unified memory that is shared by the graphics display system, the CPU 22 , and other peripherals.
- the DMA engine 1304 preferably transfers data between the memory 28 and the data SRAM 1302 to carry out load and store instructions. In other embodiments, the DMA engine 1304 may transfer data between the memory 28 and other components of the graphics accelerator without using the data SRAM 1302 . Using the data SRAM, however, generally results in faster loading and storing operations.
- the DMA engine 1304 preferably has a queue 1306 to hold multiple DMA commands, which are executed sequentially in the order they are received.
- the queue 1306 is four instructions deep. This may be valuable because the software (firmware) may be structured so that the loop above the inner loop may instruct the DMA engine 1304 to perform a series of transfers, e.g., to get two sets of operands and write one set of results back, and then the inner loop may execute for a while. When the inner loop is done, the graphics accelerator 64 may check the command queue 1306 in the DMA engine 1304 to see if all of the DMA commands have been completed.
- the queue includes a mechanism that allows the graphics accelerator to determine when all the DMA commands have been completed.
- If so, the graphics accelerator 64 preferably proceeds immediately to do more work, such as commanding additional DMA operations and processing the new operands. If not, the graphics accelerator 64 preferably waits for the completion of the DMA commands or performs some other task for a while.
- the graphics accelerator 64 is working on operands and producing outputs for one set of pixels, while the DMA engine 1304 is bringing in operands for the next (future) set of pixel operations, and also the DMA engine 1304 is writing back to memory the results from the previous set of pixel operations.
- the graphics accelerator 64 does not ever have to wait for DMA transfers (if the code is designed well), unlike a conventional data cache, which gets new operands only when there is a cache miss and writes back results only when the cache line is needed for new operands or when an explicit cache line flush operation is performed. Therefore, the graphics accelerator 64 of the present invention preferably reduces or eliminates periods of waiting for data, unlike conventional graphics accelerators, which may spend a large fraction of their time waiting for data transfer operations between the cache and main memory.
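The queued-DMA overlap described above can be modeled with a small sketch (class and method names are hypothetical; the real queue holds DMA command descriptors, not strings):

```python
from collections import deque

class DmaQueue:
    """Model of a four-deep DMA command queue: commands execute in the
    order received, and the accelerator can poll whether all queued
    transfers have completed before consuming the new operands."""
    DEPTH = 4

    def __init__(self):
        self.pending = deque()

    def submit(self, command):
        if len(self.pending) >= self.DEPTH:
            raise RuntimeError("queue full; accelerator must wait")
        self.pending.append(command)

    def tick(self):
        """Model one transfer completing."""
        if self.pending:
            self.pending.popleft()

    def all_done(self):
        return not self.pending
```

In the overlapped steady state, the accelerator computes on one set of pixels while queued commands fetch the next set and write back the previous results.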
Description
- This application is a continuation of U.S. patent application Ser. No. 10/842,743, filed on May 10, 2004, which is a continuation of U.S. patent application Ser. No. 09/437,327, filed Nov. 9, 1999, now issued as U.S. Pat. No. 6,738,072, May 18, 2004, which claims the benefit of the filing date of U.S. provisional patent application No. 60/107,875, filed Nov. 9, 1998 and entitled “Graphics Chip Architecture,” the contents of which are hereby incorporated by reference. This application is related to U.S. patent application Ser. No. 09/437,208, filed Nov. 9, 1999, now issued as U.S. Pat. No. 6,570,579, May 27, 2003, and entitled “Graphics Display System,” the contents of which are hereby incorporated by reference.
- The present invention relates generally to integrated circuits, and more particularly to an integrated circuit graphics display system.
- Graphics display systems are typically used in television control electronics, such as set top boxes, integrated digital TVs, and home network computers. Graphics display systems typically include a display engine that may perform display functions. The display engine is the part of the graphics display system that receives display pixel data from any combination of locally attached video and graphics input ports, processes the data in some way, and produces final display pixels as output.
- This application includes references to both graphics and video, which reflects in certain ways the structure of the hardware itself. This split does not, however, imply the existence of any fundamental difference between graphics and video, and in fact much of the functionality is common to both. Graphics as used herein may include graphics, text and video.
- The present invention provides a graphics display system that includes a graphics filter. The graphics filter includes means for scaling graphics and means for performing anti-flutter filtering. The means for performing anti-flutter filtering is preferably the same as the means for scaling graphics.
-
FIG. 1 is a block diagram of an integrated circuit graphics display system according to a presently preferred embodiment of the invention; -
FIG. 2 is a block diagram of certain functional blocks of the system; -
FIG. 3 is a block diagram of an alternate embodiment of the system of FIG. 2 that incorporates an on-chip I/O bus; -
FIG. 4 is a functional block diagram of exemplary video and graphics display pipelines; -
FIG. 5 is a more detailed block diagram of the graphics and video pipelines of the system; -
FIG. 6 is a map of an exemplary window descriptor for describing graphics windows and solid surfaces; -
FIG. 7 is a flow diagram of an exemplary process for sorting window descriptors in a window controller; -
FIG. 8 is a flow diagram of a graphics window control data passing mechanism and a color look-up table loading mechanism; -
FIG. 9 is a state diagram of a state machine in a graphics converter that may be used during processing of header packets; -
FIG. 10 is a block diagram of an embodiment of a display engine; -
FIG. 11 is a block diagram of an embodiment of a color look-up table (CLUT); -
FIG. 12 is a timing diagram of signals that may be used to load a CLUT; -
FIG. 13 is a block diagram illustrating exemplary graphics line buffers; -
FIG. 14 is a flow diagram of a system for controlling the graphics line buffers of FIG. 13; -
FIG. 15 is a representation of left scrolling using a window soft horizontal scrolling mechanism; -
FIG. 16 is a representation of right scrolling using a window soft horizontal scrolling mechanism; -
FIG. 17 is a flow diagram illustrating a system that uses graphics elements or glyphs for anti-aliased text and graphics applications; -
FIG. 18 is a block diagram of certain functional blocks of a video decoder for performing video synchronization; -
FIG. 19 is a block diagram of an embodiment of a chroma-locked sample rate converter (SRC); -
FIG. 20 is a block diagram of an alternate embodiment of the chroma-locked SRC of FIG. 19; -
FIG. 21 is a block diagram of an exemplary line-locked SRC; -
FIG. 22 is a block diagram of an exemplary time base corrector (TBC); -
FIG. 23 is a flow diagram of a process that employs a TBC to synchronize an input video to a display clock; -
FIG. 24 is a flow diagram of a process for video scaling in which downscaling is performed prior to capture of video in memory and upscaling is performed after reading video data out of memory; -
FIG. 25 is a detailed block diagram of components used during video scaling with signal paths involved in downscaling; -
FIG. 26 is a detailed block diagram of components used during video scaling with signal paths involved in upscaling; -
FIG. 27 is a detailed block diagram of components that may be used during video scaling with signal paths indicated for both upscaling and downscaling; -
FIG. 28 is a flow diagram of an exemplary process for blending graphics and video surfaces; -
FIG. 29 is a flow diagram of an exemplary process for blending graphics windows into a combined blended graphics output; -
FIG. 30 is a flow diagram of an exemplary process for blending graphics, video and background color; -
FIG. 31 is a block diagram of a polyphase filter that performs both anti-flutter filtering and vertical scaling of graphics windows; -
FIG. 32 is a functional block diagram of an exemplary memory service request and handling system with dual memory controllers; -
FIG. 33 is a functional block diagram of an implementation of a real time scheduling system; -
FIG. 34 is a timing diagram of an exemplary CPU servicing mechanism that has been implemented using real time scheduling; -
FIG. 35 is a timing diagram that illustrates certain principles of critical instant analysis for an implementation of real time scheduling; -
FIG. 36 is a flow diagram illustrating servicing of requests according to the priority of the task; and -
FIG. 37 is a block diagram of a graphics accelerator, which may be coupled to a CPU and a memory controller. - Referring to
FIG. 1, the graphics display system according to the present invention is preferably contained in an integrated circuit 10. The integrated circuit may include inputs 12 for receiving video signals 14, a bus 20 for connecting to a CPU 22, a bus 24 for transferring data to and from memory 28, and an output 30 for providing a video output signal 32. The system may further include an input 26 for receiving audio input 34 and an output 27 for providing audio output 36. - The graphics display system accepts video input signals that may include analog video signals, digital video signals, or both. The analog signals may be, for example, NTSC, PAL and SECAM signals or any other conventional type of analog signal. The digital signals may be in the form of decoded MPEG signals or another format of digital video. In an alternate embodiment, the system includes an on-chip decoder for decoding the MPEG or other digital video signals input to the system. Graphics data for display is produced by any suitable graphics library software, such as Direct Draw marketed by Microsoft Corporation, and is read from the
CPU 22 into the memory 28. The video output signals 32 may be analog signals, such as composite NTSC, PAL, Y/C (S-video), SECAM or other signals that include video and graphics information. In an alternate embodiment, the system provides serial digital video output to an on-chip or off-chip serializer that may encrypt the output. - The graphics
display system memory 28 is preferably a unified synchronous dynamic random access memory (SDRAM) that is shared by the system, the CPU 22 and other peripheral components. In the preferred embodiment, the CPU uses the unified memory for its code and data while the graphics display system performs all graphics, video and audio functions assigned to it by software. The amount of memory and CPU performance are preferably tunable by the system designer for the desired mix of performance and memory cost. In the preferred embodiment, a set-top box is implemented with SDRAM that supports both the CPU and graphics. - Referring to
FIG. 2, the graphics display system preferably includes a video decoder 50, video scaler 52, memory controller 54, window controller 56, display engine 58, video compositor 60, and video encoder 62. The system may optionally include a graphics accelerator 64 and an audio engine 66. The system may display graphics, passthrough video, scaled video or a combination of the different types of video and graphics. Passthrough video includes digital or analog video that is not captured in memory. The passthrough video may be selected from the analog video or the digital video by a multiplexer. Bypass video, which may come into the chip on a separate input, includes analog video that is digitized off-chip into conventional YUV (luma chroma) format by any suitable decoder, such as the BT829 decoder, available from Brooktree Corporation, San Diego, Calif. The YUV format may also be referred to as YCrCb format, where Cr and Cb are equivalent to V and U, respectively. - The video decoder (VDEC) 50 preferably digitizes and processes analog input video to produce internal YUV component signals with separated luma and chroma components. In an alternate embodiment, the digitized signals may be processed in another format, such as RGB. The
VDEC 50 preferably includes a sample rate converter 70 and a time base corrector 72 that together allow the system to receive non-standard video signals, such as signals from a VCR. The time base corrector 72 enables the video encoder to work in passthrough mode, and corrects digitized analog video in the time domain to reduce or prevent jitter. - The
video scaler 52 may perform both downscaling and upscaling of digital video and analog video as needed. In the preferred embodiment, scale factors may be adjusted continuously from a scale factor of much less than one to a scale factor of four. With both analog and digital video input, either one may be scaled while the other is displayed full size at the same time as passthrough video. Any portion of the input may be the source for video scaling. To conserve memory and bandwidth, the video scaler preferably downscales before capturing video frames to memory, and upscales after reading from memory, but preferably does not perform both upscaling and downscaling at the same time. - The
memory controller 54 preferably reads and writes video and graphics data to and from memory by using burst accesses with burst lengths that may be assigned to each task. The memory is any suitable memory such as SDRAM. In the preferred embodiment, the memory controller includes two substantially similar SDRAM controllers, one primarily for the CPU and the other primarily for the graphics display system, while either controller may be used for any and all of these functions. - The graphics display system preferably processes graphics data using logical windows, also referred to as viewports, surfaces, sprites, or canvasses, that may overlap or cover one another with arbitrary spatial relationships. Each window is preferably independent of the others. The windows may consist of any combination of image content, including anti-aliased text and graphics, patterns, GIF images, JPEG images, live video from MPEG or analog video, three dimensional graphics, cursors or pointers, control panels, menus, tickers, or any other content, all or some of which may be animated.
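The logical windows described above can be modeled minimally as records ordered by depth; the field names and the ordering convention below are illustrative assumptions, not details from the specification:

```python
from dataclasses import dataclass

@dataclass
class LogicalWindow:
    # Minimal model of a logical window (viewport/surface/sprite/canvas).
    # Field names are illustrative, not taken from the patent.
    x: int          # position on the display
    y: int
    width: int
    height: int
    depth: int      # relative depth; larger values are further back here
    content: str    # e.g. "text", "JPEG", "live video", "menu"

def back_to_front(windows):
    # Order windows for compositing, back-most first, so nearer windows
    # are blended over windows behind them.
    return sorted(windows, key=lambda w: w.depth, reverse=True)
```

Each window is independent of the others, so overlap is resolved purely by this depth ordering at blend time.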
- Graphics windows are preferably characterized by window descriptors. Window descriptors are data structures that describe one or more parameters of the graphics window. Window descriptors may include, for example, image pixel format, pixel color type, alpha blend factor, location on the screen, address in memory, depth order on the screen, or other parameters. The system preferably supports a wide variety of pixel formats, including
RGB 16, RGB 15, YUV 4:2:2 (ITU-R 601), CLUT2, CLUT4, CLUT8 or others. In addition to each window having its own alpha blend factor, each pixel in the preferred embodiment has its own alpha value. In the preferred embodiment, window descriptors are not used for video windows. Instead, parameters for video windows, such as memory start address and window size, are stored in registers associated with the video compositor. - In operation, the
window controller 56 preferably manages both the video and graphics display pipelines. The window controller preferably accesses graphics window descriptors in memory through a direct memory access (DMA) engine 76. The window controller may sort the window descriptors according to the relative depth of their corresponding windows on the display. For graphics windows, the window controller preferably sends header information to the display engine at the beginning of each window on each scan line, and sends window header packets to the display engine as needed to display a window. For video, the window controller preferably coordinates capture of non-passthrough video into memory, and transfer of video between memory and the video compositor. - The
display engine 58 preferably takes graphics information from memory and processes it for display. The display engine preferably converts the various formats of graphics data in the graphics windows into YUV component format, and blends the graphics windows to create blended graphics output having a composite alpha value that is based on alpha values for individual graphics windows, alpha values per pixel, or both. In the preferred embodiment, the display engine transfers the processed graphics information to memory buffers that are configured as line buffers. In an alternate embodiment, the buffer may include a frame buffer. In another alternate embodiment, the output of the display engine is transferred directly to a display or output block without being transferred to memory buffers. - The
video compositor 60 receives one or more types of data, such as blended graphics data, video window data, passthrough video data and background color data, and produces a blended video output. The video encoder 62 encodes the blended video output from the video compositor into any suitable display format such as composite NTSC, PAL, Y/C (S-video), SECAM or other signals that may include video information, graphics information, or a combination of video and graphics information. In an alternate embodiment, the video encoder converts the blended video output of the video compositor into serial digital video output using an on-chip or off-chip serializer that may encrypt the output. - The
graphics accelerator 64 preferably performs graphics operations that may require intensive CPU processing, such as operations on three dimensional graphics images. The graphics accelerator may be programmable. The audio engine 66 preferably supports applications that create and play audio locally within a set-top box and allow mixing of the locally created audio with audio from a digital audio source, such as MPEG or Dolby, and with digitized analog audio. The audio engine also preferably supports applications that capture digitized baseband audio via an audio capture port and store sounds in memory for later use, or that store audio to memory for temporary buffering in order to delay the audio for precise lip-syncing when frame-based video time correction is enabled. - Referring to
FIG. 3, in an alternate embodiment of the present invention, the graphics display system further includes an I/O bus 74 connected between the CPU 22, memory 28 and one or more of a wide variety of peripheral devices, such as flash memory, ROM, MPEG decoders, cable modems or other devices. The on-chip I/O bus 74 of the present invention preferably eliminates the need for a separate interface connection, sometimes referred to in the art as a north bridge. The I/O bus preferably provides high speed access and data transfers between the CPU, the memory and the peripheral devices, and may be used to support the full complement of devices that may be used in a full featured set-top box or digital TV. In the preferred embodiment, the I/O bus is compatible with the 68000 bus definition, including both active DSACK and passive DSACK (e.g., ROM/flash devices), and it supports external bus masters and retry operations as both master and slave. The bus preferably supports any mix of 32-bit, 16-bit and 8-bit devices, and operates at a clock rate of 33 MHz. The clock rate is preferably asynchronous with (not synchronized with) the CPU clock to enable independent optimization of those subsystems. - Referring to
FIG. 4, the graphics display system generally includes a graphics display pipeline 80 and a video display pipeline 82. The graphics display pipeline preferably contains functional blocks, including window control block 84, DMA (direct memory access) block 86, FIFO (first-in-first-out memory) block 88, graphics converter block 90, color look up table (CLUT) block 92, graphics blending block 94, static random access memory (SRAM) block 96, and filtering block 98. The system preferably spatially processes the graphics data independently of the video data prior to blending. - In operation, the
window control block 84 obtains and stores graphics window descriptors from memory and uses the window descriptors to control the operation of the other blocks in the graphics display pipeline. The windows may be processed in any order. In the preferred embodiment, on each scan line, the system processes windows one at a time from back to front and from the left edge to the right edge of the window before proceeding to the next window. In an alternate embodiment, two or more graphics windows may be processed in parallel. In the parallel implementation, it is possible for all of the windows to be processed at once, with the entire scan line being processed left to right. Any number of other combinations may also be implemented, such as processing a set of windows at a lower level in parallel, left to right, followed by the processing of another set of windows in parallel at a higher level. - The
DMA block 86 retrieves data from memory 110 as needed to construct the various graphics windows according to addressing information provided by the window control block. Once the display of a window begins, the DMA block preferably retains any parameters that may be needed to continue to read required data from memory. Such parameters may include, for example, the current read address, the address of the start of the next lines, the number of bytes to read per line, and the pitch. Since the pipeline preferably includes a vertical filter block for anti-flutter and scaling purposes, the DMA block preferably accesses a set of adjacent display lines in the same frame, in both fields. If the output of the system is NTSC or other form of interlaced video, the DMA preferably accesses both fields of the interlaced final display under certain conditions, such as when the vertical filter and scaling are enabled. In such a case, all lines, not just those from the current display field, are preferably read from memory and processed during every display field. In this embodiment, the effective rate of reading and processing graphics is equivalent to that of a non-interlaced display with a frame rate equal to the field rate of the interlaced display. - The
FIFO block 88 temporarily stores data read from the memory 110 by the DMA block 86, and provides the data on demand to the graphics converter block 90. The FIFO may also serve to bridge a boundary between different clock domains in the event that the memory and DMA operate under a clock frequency or phase that differs from the graphics converter block 90 and the graphics blending block 94. In an alternate embodiment, the FIFO block is not needed. The FIFO block may be unnecessary, for example, if the graphics converter block processes data from memory at the rate that it is read from the memory and the memory and conversion functions are in the same clock domain. - In the preferred embodiment, the
graphics converter block 90 takes raw graphics data from the FIFO block and converts it to YUValpha (YUVa) format. Raw graphics data may include graphics data from memory that has not yet been processed by the display engine. One type of YUVa format that the system may use includes YUV 4:2:2 (i.e. two U and V samples for every four Y samples) plus an 8-bit alpha value for every pixel, which occupies overall 24 bits per pixel. Another suitable type of YUVa format includes YUV 4:4:4 plus the 8-bit alpha value per pixel, which occupies 32 bits per pixel. In an alternate embodiment, the graphics converter may convert the raw graphics data into a different format, such as RGBalpha. - The alpha value included in the YUVa output may depend on a number of factors, including alpha from chroma keying in which a transparent pixel has an alpha equal to zero, alpha per CLUT entry, alpha from Y (luma), or alpha per window where one alpha value characterizes all of the contents of a given window.
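The per-pixel storage costs quoted above follow from simple arithmetic; this sketch (function name ours) reproduces them, assuming 8 bits per Y, U and V sample:

```python
def yuva_bits_per_pixel(y_per_4, u_per_4, v_per_4, alpha_bits=8):
    # Average bits per pixel for a YUVa variant, given how many Y, U and V
    # samples occur per group of four pixels (8 bits per sample), plus a
    # per-pixel alpha value.
    return (8 * (y_per_4 + u_per_4 + v_per_4)) // 4 + alpha_bits

# YUV 4:2:2 plus 8-bit alpha: 8*(4+2+2)/4 + 8 = 24 bits per pixel
# YUV 4:4:4 plus 8-bit alpha: 8*(4+4+4)/4 + 8 = 32 bits per pixel
```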
- The
graphics converter block 90 preferably accesses the CLUT 92 during conversion of CLUT formatted raw graphics data. In one embodiment of the present invention, there is only one CLUT. In an alternate embodiment, multiple CLUTs are used to process different graphics windows having graphics data with different CLUT formats. The CLUT may be rewritten by retrieving new CLUT data via the DMA block when required. In practice, it typically takes longer to rewrite the CLUT than the time available in a horizontal blanking interval, so the system preferably allows one horizontal line period to change the CLUT. Non-CLUT images may be displayed while the CLUT is being changed. The color space of the entries in the CLUT is preferably in YUV but may also be implemented in RGB. - The
graphics blending block 94 receives output from the graphics converter block 90 and preferably blends one window at a time along the entire width of one scan line, with the back-most graphics window being processed first. The blending block uses the output from the converter block to modify the contents of the SRAM 96. The result of each pixel blend operation is a pixel in the SRAM that consists of the weighted sum of the various graphics layers up to and including the present one, and the appropriate alpha blend value for the video layers, taking into account the graphics layers up to and including the present one. - The
SRAM 96 is preferably configured as a set of graphics line buffers, where each line buffer corresponds to a single display line. The blending of graphics windows is preferably performed one graphics window at a time on the display line that is currently being composited into a line buffer. Once the display line in a line buffer has been completely composited so that all the graphics windows on that display line have been blended, the line buffer is made available to the filtering block 98. - The
filtering block 98 preferably performs both anti-flutter filtering (AFF) and vertical sample rate conversion (SRC) using the same filter. This block takes input from the line buffers and performs finite impulse response polyphase filtering on the data. While anti-flutter filtering and vertical SRC are performed along the vertical axis, different functions, such as horizontal SRC or scaling, may be performed along the horizontal axis. In the preferred embodiment, the filter takes input from only vertically adjacent pixels at one time. It multiplies each input pixel by a specified coefficient and sums the results to produce the output. The polyphase action means that the coefficients, which are samples of an approximately continuous impulse response, may be selected from a different fractional-pixel phase of the impulse response every pixel. In an alternate embodiment, where the filter performs horizontal scaling, appropriate coefficients are selected for a finite impulse response polyphase filter to perform the horizontal scaling. In an alternate embodiment, both horizontal and vertical filtering and scaling can be performed. - The
video display pipeline 82 may include aFIFO block 100, anSRAM block 102, and avideo scaler 104. The video display pipeline portion of the architecture is similar to that of the graphics display pipeline, and it shares some elements with it. In the preferred embodiment, the video pipeline supports up to one scaled video window per scan line, one passthrough video window, and one background color, all of which are logically behind the set of graphics windows. The order of these windows, from back to front, is preferably fixed as background color, then passthrough video, then scaled video. - The video windows are preferably in YUV format, although they may be in either 4:2:2 or 4:2:0 variants or other variants of YUV, or alternatively in other formats such as RGB. The scaled video window may be scaled up in both directions by the display engine, with a factor that can range up to four in the preferred embodiment. Unlike graphics, the system generally does not have to correct for square pixel aspect ratio with video. The scaled video window may be alpha blended into passthrough video and a background color, preferably using a constant alpha value for each video signal.
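The fixed back-to-front ordering of the video layers described above can be sketched as follows; the function names and the single-component, constant-alpha blend are illustrative assumptions, not the hardware datapath:

```python
def blend(under, over, alpha):
    # Constant-alpha blend of one layer over another; single component
    # for brevity, alpha in 0..255.
    return (alpha * over + (255 - alpha) * under) // 255

def composite_video_layers(background, passthrough, scaled, scaled_alpha):
    # Fixed order from back to front: background color, then passthrough
    # video, then the scaled video window (graphics are blended on top
    # elsewhere in the pipeline).
    out = background
    if passthrough is not None:
        out = passthrough          # passthrough video covers the background
    if scaled is not None:
        out = blend(out, scaled, scaled_alpha)
    return out
```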
- The
FIFO block 100 temporarily stores captured video windows for transfer to thevideo scaler 104. The video scaler preferably includes a filter that performs both upscaling and downscaling. The scaler function may be a set of two polyphase SRC functions, one for each dimension. The vertical SRC may be a four-tap filter with programmable coefficients in a fashion similar to the vertical filter in the graphics pipeline, and the horizontal filter may use an 8-tap SRC, also with programmable coefficients. In an alternate embodiment, a shorter horizontal filter is used, such as a 4-tap horizontal SRC for the video upscaler. Since the same filter is preferably used for downscaling, it may be desirable to use more taps than are strictly needed for upscaling to accommodate low pass filtering for higher quality downscaling. - In the preferred embodiment, the video pipeline uses a separate window controller and DMA. In an alternate embodiment, these elements may be shared. The FIFOs are logically separate but may be implemented in a common SRAM.
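A polyphase SRC of the kind described can be sketched in software as follows; the names, the phase-table layout and the coefficient handling are illustrative assumptions rather than the hardware design:

```python
def polyphase_tap_output(taps, coeffs):
    # One output sample of an N-tap polyphase filter: each of the N adjacent
    # input samples is multiplied by a coefficient chosen for the current
    # fractional phase, and the products are summed.
    return sum(c * t for c, t in zip(coeffs, taps))

def vertical_resample(column, phase_table, step):
    # Resample one column of pixels with a 4-tap polyphase filter.
    # 'step' is the input increment per output sample (>1 downscales,
    # <1 upscales); phase_table maps a fractional phase index to a set of
    # four programmable coefficients.
    out = []
    pos = 0.0
    while int(pos) + 3 < len(column):
        phase = round((pos - int(pos)) * (len(phase_table) - 1))
        taps = column[int(pos):int(pos) + 4]
        out.append(polyphase_tap_output(taps, phase_table[phase]))
        pos += step
    return out
```

With coefficients chosen as a low-pass kernel, the same structure serves for downscaling, which is why extra taps beyond the minimum needed for upscaling can be worthwhile.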
- The
video compositor block 108 blends the output of the graphics display pipeline, the video display pipeline, and passthrough video. The background color is preferably blended as the lowest layer on the display, followed by passthrough video, the video window and blended graphics. In the preferred embodiment, the video compositor composites windows directly to the screen line-by-line at the time the screen is displayed, thereby conserving memory and bandwidth. The video compositor may include, but preferably does not include, display frame buffers, double-buffered displays, off-screen bit maps, or blitters. - Referring to
FIG. 5, the display engine 58 preferably includes graphics FIFO 132, graphics converter 134, RGB-to-YUV converter 136, YUV444-to-YUV422 converter 138 and graphics blender 140. The graphics FIFO 132 receives raw graphics data from memory through a graphics DMA 124 and passes it to the graphics converter 134, which preferably converts the raw graphics data into YUV 4:4:4 format or other suitable format. A window controller 122 controls the transfer of raw graphics data from memory to the graphics converter 132. The graphics converter preferably accesses the RGB-to-YUV converter 136 during conversion of RGB formatted data and the graphics CLUT 146 during conversion of CLUT formatted data. The RGB-to-YUV converter is preferably a color space converter that converts raw graphics data in RGB space to graphics data in YUV space. The graphics CLUT 146 preferably includes a CLUT 150, which stores pixel values for CLUT-formatted graphics data, and a CLUT controller 152, which controls operation of the CLUT. - The YUV444-to-
YUV422 converter 138 converts graphics data from YUV 4:4:4 format to YUV 4:2:2 format. The term YUV 4:4:4 means, as is conventional, that for every four horizontally adjacent samples, there are four Y values, four U values, and four V values; the term YUV 4:2:2 means, as is conventional, that for every four samples, there are four Y values, two U values and two V values. The YUV444-to-YUV422 converter 138 is preferably a UV decimator that sub-samples U and V from four samples per every four samples of Y to two samples per every four samples of Y. - Graphics data in YUV 4:4:4 format and YUV 4:2:2 format preferably also includes four alpha values for every four samples. Graphics data in YUV 4:4:4 format with four alpha values for every four samples may be referred to as being in aYUV 4:4:4:4 format; graphics data in YUV 4:2:2 format with four alpha values for every four samples may be referred to as being in aYUV 4:4:2:2 format.
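The UV sub-sampling described above can be sketched as follows, assuming a simple two-sample average for the decimation (the actual decimation kernel is a design choice the text does not fix):

```python
def yuv444_to_yuv422(pixels):
    # Sub-sample U and V from one sample per pixel to one sample per pixel
    # pair, keeping every Y sample. Input pixels are (Y, U, V) tuples; each
    # output entry covers two pixels as (Y0, Y1, U, V). The averaging here
    # is an illustrative assumption, not the hardware's exact filter.
    out = []
    for i in range(0, len(pixels) - 1, 2):
        (y0, u0, v0), (y1, u1, v1) = pixels[i], pixels[i + 1]
        out.append((y0, y1, (u0 + u1) // 2, (v0 + v1) // 2))
    return out
```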
- The YUV444-to-YUV422 converter may also perform low-pass filtering of UV and alpha. For example, if the graphics data with YUV 4:4:4 format has higher than desired frequency content, a low pass filter in the YUV444-to-YUV422 converter may be turned on to filter out high frequency components in the U and V signals, and to perform matched filtering of the alpha values.
- The
graphics blender 140 blends the YUV 4:2:2 signals together, preferably one line at a time using alpha blending, to create a single line of graphics from all of the graphics windows on the current display line. The filter 170 preferably includes a single 4-tap vertical polyphase graphics filter 172 and a vertical coefficient memory 174. The graphics filter may perform both anti-flutter filtering and vertical scaling. The filter preferably receives graphics data from the display engine through a set of seven line buffers 59, where four of the seven line buffers preferably provide data to the taps of the graphics filter at any given time. - In the preferred embodiment, the system may receive video input that includes one decoded MPEG video in ITU-
R 656 format and one analog video signal. The ITU-R 656 decoder 160 processes the decoded MPEG video to extract timing and data information. In one embodiment, an on-chip video decoder (VDEC) 50 converts the analog video signal to a digitized video signal. In an alternate embodiment, an external VDEC such as the Brooktree BT829 decoder converts the analog video into digitized analog video and provides the digitized video to the system as bypass video 130. - Analog video or MPEG video may be provided to the video compositor as passthrough video. Alternatively, either type of video may be captured into memory and provided to the video compositor as a scaled video window. The digitized analog video signals preferably have a pixel sample rate of 13.5 MHz, contain a 16 bit data stream in YUV 4:2:2 format, and include timing signals such as top field and vertical sync signals. - The
VDEC 50 includes a time base corrector (TBC) 72 comprising a TBC controller 164 and a FIFO 166. To provide passthrough video that is synchronized to a display clock, preferably without using a frame buffer, the digitized analog video is corrected in the time domain in the TBC 72 before being blended with other graphics and video sources. During time base correction, the video input, which runs nominally at 13.5 MHz, is synchronized with the display clock, which also runs nominally at 13.5 MHz at the output; these two frequencies, while both nominally 13.5 MHz, are not necessarily exactly the same. In the TBC, the video output is preferably offset from the video input by a half scan line per field. - A
capture FIFO 158 and a capture DMA 154 preferably capture the digitized analog video signals and MPEG video. The SDRAM controller 126 provides captured video frames to the external SDRAM. A video DMA 144 transfers the captured video frames to a video FIFO 148 from the external SDRAM. - The digitized analog video signals and MPEG video are preferably scaled down to less than 100% prior to being captured and are scaled up to more than 100% after being captured. The
video scaler 52 is shared by both upscale and downscale operations. The video scaler preferably includes a multiplexer 176, a set of line buffers 178, a horizontal and vertical coefficient memory 180 and a scaler engine 182. The scaler engine 182 preferably includes a set of two polyphase filters, one for each of the horizontal and vertical dimensions. - The vertical filter preferably includes a four-tap filter with programmable filter coefficients. The horizontal filter preferably includes an eight-tap filter with programmable filter coefficients. In the preferred embodiment, three
line buffers 178 supply video signals to the scaler engine 182. The three line buffers 178 preferably are 720×16 two-port SRAMs. For vertical filtering, the three line buffers 178 may provide video signals to three of the four taps of the four-tap vertical filter while the video input provides the video signal directly to the fourth tap. For horizontal filtering, a shift register having eight cells in series may be used to provide inputs to the eight taps of the horizontal polyphase filter, each cell providing an input to one of the eight taps. - For downscaling, the
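The four-tap vertical filtering described above may be sketched in C as follows. The coefficient values, the phase count, and the 6-bit fixed-point scaling are illustrative assumptions, not part of the described embodiment; only the tap arrangement (three taps fed by line buffers, the fourth fed directly by the video input) follows the text:

```c
#include <stdint.h>

#define VTAPS  4
#define PHASES 8   /* hypothetical number of polyphase phases */

/* Hypothetical coefficient memory: one 4-tap set per phase, stored as
 * signed fixed-point with 6 fractional bits (each set sums to 64). */
static const int16_t vcoef[PHASES][VTAPS] = {
    {  0, 64,  0,  0 },   /* phase 0: pass-through */
    { -4, 60, 10, -2 },   /* phase 1: example interpolating phase */
    /* remaining phases are zero in this sketch */
};

/* Filter one output pixel: three taps come from the line buffers and
 * the fourth directly from the video input, as described in the text. */
static uint8_t vfilter(const uint8_t line_buf[3], uint8_t video_in, int phase)
{
    int32_t acc = 0;
    for (int t = 0; t < 3; t++)
        acc += vcoef[phase][t] * line_buf[t];   /* line-buffer taps */
    acc += vcoef[phase][3] * video_in;          /* direct-input tap */
    acc >>= 6;                                  /* remove fixed-point scaling */
    if (acc < 0)   acc = 0;                     /* clamp to 8-bit pixel range */
    if (acc > 255) acc = 255;
    return (uint8_t)acc;
}
```

Because each coefficient set sums to 64, a flat input region passes through the filter unchanged (unity DC gain), which is the usual sanity check for such a filter.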
multiplexer 168 preferably provides a video signal to the video scaler prior to capture. For upscaling, the video FIFO 148 provides a video signal to the video scaler after capture. Since the video scaler 52 is shared between downscaling and upscaling filtering, downscaling and upscaling operations are not performed at the same time in this particular embodiment. - In the preferred embodiment, the
video compositor 60 blends signals from up to four different sources, which may include blended graphics from the filter 170, video from a video FIFO 148, passthrough video from a multiplexer 168, and background color from a background color module 184. Alternatively, various numbers of signals may be composited, including, for example, two or more video windows. The video compositor preferably provides the final output signal to the data size converter 190, which serializes each 16-bit word sample into 8-bit word samples at twice the clock frequency, and provides the 8-bit word samples to the video encoder 62. - The
video encoder 62 encodes the provided YUV 4:2:2 video data and outputs it as an output of the graphics display system in any desired analog or digital format. - Often in the creation of graphics displays, the artist or application developer has a need to include rectangular objects on the screen, with the objects having a solid color and a uniform alpha blend factor (alpha value). These regions (or objects) may be rendered with other displayed objects on top of them or beneath them. In conventional graphics devices, such solid color objects are rendered using the number of distinct pixels required to fill the region. It may be advantageous in terms of memory size and memory bandwidth to render such objects on the display directly, without expending the memory size or bandwidth required in conventional approaches.
- In the preferred embodiment, video and graphics are displayed on regions referred to as windows. Each window is preferably a rectangular area of screen bounded by starting and ending display lines and starting and ending pixels on each display line. Raw graphics data to be processed and displayed on a screen preferably resides in the external memory. In the preferred embodiment, a display engine converts raw graphics data into a pixel map with a format that is suitable for display.
- In one embodiment of the present invention, the display engine implements graphics windows of many types directly in hardware. Each of the graphics windows on the screen has its own value of various parameters, such as location on the screen, starting address in memory, depth order on the screen, pixel color type, etc. The graphics windows may be displayed such that they may overlap or cover each other, with arbitrary spatial relationships.
- In the preferred embodiment, a data structure called a window descriptor contains parameters that describe and control each graphics window. The window descriptors are preferably data structures for representing graphics images arranged in logical surfaces, or windows, for display. Each data structure preferably includes a field indicating the relative depth of the logical surface on the display, a field indicating the alpha value for the graphics in the surface, a field indicating the location of the logical surface on the display, and a field indicating the location in memory where graphics image data for the logical surface is stored.
- All of the elements that make up any given graphics display screen are preferably specified by combining all of the window descriptors of the graphics windows that make up the screen into a window descriptor list. At every display field time or a frame time, the display engine constructs the display image from the current window descriptor list. The display engine composites all of the graphics windows in the current window descriptor list into a complete screen image in accordance with the parameters in the window descriptors and the raw graphics data associated with the graphics windows.
- With the introduction of window descriptors and real-time composition of graphics windows, a graphics window with a solid color and fixed translucency may be described entirely in a window descriptor having appropriate parameters. These parameters describe the color and the translucency (alpha) just as if it were a normal graphics window. The only difference is that there is no pixel map associated with this window descriptor. The display engine generates a pixel map accordingly and performs the blending in real time when the graphics window is to be displayed.
- For example, a window consisting of a rectangular object having a constant color and a constant alpha value may be created on a screen by including a window descriptor in the window descriptor list. In this case, the window descriptor indicates the color and the alpha value of the window, and a null pixel format, i.e., no pixel values are to be read from memory. Other parameters indicate the window size and location on the screen, allowing the creation of solid color windows with any size and location. Thus, in the preferred embodiment, no pixel map is required, memory bandwidth requirements are reduced and a window of any size may be displayed.
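The solid-color window described above may be sketched as a per-line blend that reads no pixel map from memory; a single color and alpha from the window descriptor are applied to every covered pixel. This is a single-channel simplification, and the 0-255 alpha convention and rounding are assumptions, not taken from the text:

```c
#include <stdint.h>

/* Sketch: composite a solid-color window over one display line without
 * reading a pixel map.  `line` is the line being built; the window covers
 * pixels [x_start, x_start + x_size).  One color channel only. */
static void blend_solid_window(uint8_t *line, int x_start, int x_size,
                               uint8_t color, uint8_t alpha)
{
    for (int x = x_start; x < x_start + x_size; x++)
        /* out = alpha*color + (1-alpha)*background, alpha on a 0-255 scale */
        line[x] = (uint8_t)((alpha * color + (255 - alpha) * line[x] + 127) / 255);
}
```

The point of the sketch is the memory-bandwidth argument in the text: the loop touches only the output line, so window size affects neither memory size nor read bandwidth.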
- Another type of graphics window that the window descriptors preferably describe is an alpha-only type window. The alpha-only type windows preferably use a constant color and preferably have graphics data with 2, 4 or 8 bits per pixel. For example, an alpha-4 format may be an alpha-only format used in one of the alpha-only type windows. The alpha-4 format specifies the alpha-only type window with alpha blend values having four bits per pixel. The alpha-only type window may be particularly useful for displaying anti-aliased text.
- A window controller preferably controls transfer of graphics display information in the window descriptors to the display engine. In one embodiment, the window controller has internal memory to store eight window descriptors. In other embodiments, the window controller may have memory allocated to store more or fewer window descriptors. The window controller preferably reads the window descriptors from external memory via a direct memory access (DMA) module.
- The DMA module may be shared by both paths of the display pipeline as well as some of the control logic, such as the window controller and the CLUT. In order to support the display pipeline in embodiments where the graphics pipeline and the video pipeline use separate DMA modules, the DMA module preferably has three channels: window descriptor read, graphics data read and CLUT read. Each channel has externally accessible registers to control the start address and the number of words to read.
- Once the DMA module has completed a transfer as indicated by its start and length registers, it preferably activates a signal that indicates the transfer is complete. This allows the module that sets up operations for that channel to begin setting up another transfer. In the case of graphics data reads, the window controller preferably sets up a transfer of one line of graphics pixels and then waits for the DMA controller to indicate that the transfer of that line is complete before setting up the transfer of the next line, or of a line of another window.
- Referring to
FIG. 6, each window descriptor preferably includes four 32-bit words (labeled Word 0 through Word 3) containing graphics window display information. Word 0 preferably includes a window operation parameter, a window format parameter and a window memory start address. The window operation parameter preferably is a 2-bit field that indicates which operation is to be performed with the window descriptor. When the window operation parameter is 00b, the window descriptor performs a normal display operation, and when it is 01b, the window descriptor performs graphics color look-up table (“CLUT”) re-loading. The window operation parameter of 10b is preferably not used. The window operation parameter of 11b preferably indicates that the window descriptor is the last of a sequence of window descriptors in memory. - The window format parameter preferably is a 4-bit field that indicates the data format of the graphics data to be displayed in the graphics window. The data formats corresponding to the window format parameter are described in Table 1 below.
-
TABLE 1
Graphics Data Formats

win_format   Data Format   Data Format Description
0000b        RGB16         5-bit red, 6-bit green, 5-bit blue
0001b        RGB15+1       RGB15 plus one bit alpha (keying)
0010b        RGBA4444      4-bit red, green, blue, alpha
0100b        CLUT2         2-bit CLUT with YUV and alpha in table
0101b        CLUT4         4-bit CLUT with YUV and alpha in table
0110b        CLUT8         8-bit CLUT with YUV and alpha in table
0111b        ACLUT16       8-bit alpha, 8-bit CLUT index
1000b        ALPHA0        Single win_alpha and single RGB win_color
1001b        ALPHA2        2-bit alpha with single RGB win_color
1010b        ALPHA4        4-bit alpha with single RGB win_color
1011b        ALPHA8        8-bit alpha with single RGB win_color
1100b        YUV422        U and V are sampled at half the rate of Y
1111b        RESERVED      Special coding for blank line in new header, i.e., indicates an empty line

- The window memory start address preferably is a 26-bit data field that indicates the starting memory address of the graphics data of the graphics window to be displayed on the screen. The window memory start address points to the first address in the corresponding external SDRAM which is accessed to display data on the graphics window defined by the window descriptor. When the window operation parameter indicates the graphics CLUT reloading operation, the window memory start address indicates the starting memory address of data to be loaded into the graphics CLUT.
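Word 0 of a window descriptor can be sketched as a simple bit-packing function. The field widths (2-bit operation, 4-bit format, 26-bit address) follow the text, but the exact bit positions within the 32-bit word are an assumption for illustration:

```c
#include <stdint.h>

/* Window operation parameter values from the text. */
enum { WIN_OP_DISPLAY = 0x0, WIN_OP_CLUT_RELOAD = 0x1, WIN_OP_LAST = 0x3 };

/* Pack Word 0: 2-bit operation, 4-bit format, 26-bit memory start address.
 * Placing the operation in bits 31-30 and the format in bits 29-26 is an
 * assumed layout; the text specifies widths, not positions. */
static uint32_t pack_word0(uint32_t op, uint32_t format, uint32_t mem_start)
{
    return ((op & 0x3u) << 30) |
           ((format & 0xFu) << 26) |
           (mem_start & 0x3FFFFFFu);
}
```

The three masks guarantee the fields cannot spill into each other, which is the property a hardware descriptor fetcher would rely on when decoding the word.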
-
Word 1 in the window descriptor preferably includes a window layer parameter, a window memory pitch value and a window color value. The window layer parameter is preferably a 4-bit data field indicating the order of layers of graphics windows. Some of the graphics windows may be partially or completely stacked on top of each other, and the window layer parameter indicates the stacking order. The window layer parameter preferably indicates where in the stack the graphics window defined by the window descriptor should be placed. - In the preferred embodiment, a graphics window with a window layer parameter of 0000b is defined as the bottom-most layer, and a graphics window with a window layer parameter of 1111b is defined as the top-most layer. Preferably, up to eight graphics windows may be processed in each scan line. The window memory pitch value is preferably a 12-bit data field indicating the pitch of window memory addressing. Pitch refers to the difference in memory address between two pixels that are vertically adjacent within a window.
- The window color value preferably is a 16-bit RGB color, which is applied as a single color to the entire graphics window when the window format parameter is 1000b, 1001b, 1010b, or 1011b. Every pixel in the window preferably has the color specified by the window color value, while the alpha value is determined per pixel and per window as specified in the window descriptor and the pixel format. The engine preferably uses the window color value to implement a solid surface.
-
Word 2 in the window descriptor preferably includes an alpha type, a window alpha value, a window y-end value and a window y-start value. Word 2 preferably also includes two bits reserved for future definition, such as high definition television (HD) applications. The alpha type is preferably a 2-bit data field that indicates the method of selecting an alpha value for the graphics window. The alpha type of 00b indicates that the alpha value is to be selected from chroma keying. Chroma keying determines whether each pixel is opaque or transparent based on the color of the pixel. Opaque pixels are preferably considered to have an alpha value of 1.0, and transparent pixels have an alpha value of 0, both on a scale of 0 to 1. Chroma keying compares the color of each pixel to a reference color or to a range of possible colors; if the pixel matches the reference color, or if its color falls within the specified range of colors, then the pixel is determined to be transparent. Otherwise it is determined to be opaque. - The alpha type of 01b indicates that the alpha value should be derived from the graphics CLUT, using the alpha value in each entry of the CLUT. The alpha type of 10b indicates that the alpha value is to be derived from the luminance Y. The Y value that results from conversion of the pixel color to the YUV color space, if the pixel color is not already in YUV, is used as the alpha value for the pixel. The alpha type of 11b indicates that only a single alpha value is to be applied to the entire graphics window. The single alpha value is preferably included as the window alpha value, described next.
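The chroma-keying test for alpha type 00b can be sketched as follows. The text says a pixel is transparent when its color matches a reference color or falls within a range of colors; representing that range as per-component min/max bounds is an assumption about how the range might be specified:

```c
#include <stdint.h>

/* Hypothetical key range: per-component minimum and maximum bounds. */
struct chroma_key {
    uint8_t min[3];
    uint8_t max[3];
};

/* Return the keyed alpha for one pixel: 0 (transparent) when the pixel
 * falls inside the key range, 255 (standing for 1.0) otherwise. */
static uint8_t chroma_alpha(const uint8_t px[3], const struct chroma_key *k)
{
    for (int c = 0; c < 3; c++)
        if (px[c] < k->min[c] || px[c] > k->max[c])
            return 255;   /* outside the key range: opaque */
    return 0;             /* within the key range: transparent */
}
```

An exact-match reference color is just the degenerate case where min equals max for every component.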
- The window alpha value preferably is an 8-bit alpha value applied to the entire graphics window. The effective alpha value for each pixel in the window is the product of the window alpha and the alpha value determined for each pixel. For example, if the window alpha value is 0.5 on a scale of 0 to 1, coded as 0x80, then the effective alpha value of every pixel in the window is one-half of the value encoded in or for the pixel itself. If the window format parameter is 1000b, i.e., a single alpha value is to be applied to the graphics window, then the per-pixel alpha value is treated as if it is 1.0, and the effective alpha value is equal to the window alpha value.
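The product of the window alpha and the per-pixel alpha can be shown as a one-line fixed-point computation; treating both 8-bit values as fractions of 255 and rounding to nearest is an assumption about the arithmetic, which the text does not specify:

```c
#include <stdint.h>

/* Effective per-pixel alpha: product of the 8-bit window alpha and the
 * 8-bit per-pixel alpha, both treated as fractions of 255.  The +127
 * term rounds the division to the nearest integer. */
static uint8_t effective_alpha(uint8_t window_alpha, uint8_t pixel_alpha)
{
    return (uint8_t)((window_alpha * pixel_alpha + 127) / 255);
}
```

With the text's example of a window alpha of 0x80 (about 0.5), a fully opaque pixel (0xFF) yields an effective alpha of 0x80, i.e., half of full opacity.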
- The window y-end value preferably is a 10-bit data field that indicates the ending display line of the graphics window on the screen. The graphics window defined by the window descriptor ends at the display line indicated by the window y-end value. The window y-start value preferably is a 10-bit data field that indicates a starting display line of the graphics window on a screen. The graphics window defined by the window descriptor begins at the display line indicated in the window y-start value. Thus, a display of a graphics window can start on any display line on the screen based on the window y-start value.
-
Word 3 in the window descriptor preferably includes a window filter enable parameter, a blank start pixel value, a window x-size value and a window x-start value. In addition, word 3 includes two bits reserved for future definition, such as HD applications. Five bits of the 32-bit word 3 are not used. The window filter enable parameter is a 1-bit field that indicates whether low pass filtering is to be enabled during YUV 4:4:4 to YUV 4:2:2 conversion. - The blank start pixel value preferably is a 4-bit parameter indicating the number of blank pixels at the beginning of each display line. The blank start pixel value preferably signifies the number of pixels of the first word read from memory, at the beginning of the corresponding graphics window, to be discarded. This field indicates the number of pixels in the first word of data read from memory that are not displayed. For example, if memory words are 32 bits wide and the pixels are 4 bits each, there are 8 possible first pixels in the first word. Using this field, 0 to 7 pixels may be skipped, making the 1st to the 8th pixel in the word appear as the first pixel, respectively. The blank start pixel value allows graphics windows to have any horizontal starting position on the screen, and may be used during soft horizontal scrolling of a graphics window.
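The blank-start-pixel example above (32-bit memory words, 4-bit pixels, 0 to 7 leading pixels discarded) can be made concrete with a small extraction routine. Packing pixel 0 in the most significant nibble of the word is an assumption; the text does not state the pixel order within a memory word:

```c
#include <stdint.h>

/* Extract the displayed pixels from the first 32-bit word of a line,
 * discarding `blank_start` (0-7) leading 4-bit pixels.  Pixel 0 is
 * assumed to occupy the most significant nibble.  Returns the number
 * of pixels written to out[]. */
static int first_word_pixels(uint32_t word, int blank_start, uint8_t out[8])
{
    int n = 0;
    for (int p = blank_start; p < 8; p++)
        out[n++] = (word >> (28 - 4 * p)) & 0xFu;  /* pixel p of the word */
    return n;
}
```

Skipping pixels this way lets a window start at any horizontal position without realigning the graphics data in memory, which is what makes soft horizontal scrolling cheap.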
- The window x-size value preferably is a 10-bit data field that indicates the size of a graphics window in the x direction, i.e., horizontal direction. The window x-size value preferably indicates the number of pixels of a graphics window in a display line.
- The window x-start value preferably is a 10-bit data field that indicates a starting pixel of the graphics window on a display line. The graphics window defined by the window descriptor preferably begins at the pixel indicated by the window x-start value of each display line. With the window x-start value, any pixel of a given display line can be chosen to start painting the graphics window. Therefore, there is no need to load pixels on the screen prior to the beginning of the graphics window display area with black.
- In one embodiment of the present invention, a FIFO in the graphics display path accepts raw graphics data as the raw graphics data is read from memory, at the full memory data rate using a clock of the memory controller. In this embodiment, the FIFO provides this data, initially stored in an external memory, to subsequent blocks in the graphics pipeline.
- In systems such as graphics display systems, where multiple types of data may be output from one module, such as a memory controller subsystem, and used in another subsystem, such as a graphics processing subsystem, it typically becomes progressively more difficult to support a combination of dynamically varying data types and data transfer rates across FIFO buffers between the producing and consuming modules. The conventional way to address such problems is to design a logic block that understands the varying parameters of the data types in the first module and controls all of the relevant variables in the second module. This may be difficult due to variable delays between the two modules, due to the use of FIFOs between them and varying data rates, and due to the complexity of supporting a large number of data types.
- The system preferably processes graphics images for display by organizing the graphics images into windows in which the graphics images appear on the screen, obtaining data that describes the windows, sorting the data according to the depth of the window on the display, transferring graphics images from memory, and blending the graphics images using alpha values associated with the graphics images.
- In the preferred embodiment, a packet of control information called a header packet is passed from the window controller to the display engine. All of the required control information from the window controller preferably is conveyed to the display engine such that all of the relevant variables from the window controller are properly controlled in a timely fashion and such that the control is not dependent on variations in delays or data rates between the window controller and the display engine.
- A header packet preferably indicates the start of graphics data for one graphics window. The graphics data for that graphics window continues until it is completed without requiring a transfer of another header packet. A new header packet is preferably placed in the FIFO when another window is to start. The header packets may be transferred according to the order of the corresponding window descriptors in the window descriptor lists.
- In a display engine that operates according to lists of window descriptors, windows may be specified to overlap one another. At the same time, windows may start and end on any line, and there may be many windows visible on any one line. There are a large number of possible combinations of window starting and ending locations along vertical and horizontal axes and depth order locations. The system preferably indicates the depth order of all windows in the window descriptor list and implements the depth ordering correctly while accounting for all windows.
- Each window descriptor preferably includes a parameter indicating the depth location of the associated window. The range that is allowed for this parameter can be defined to be almost any useful value. In the preferred embodiment there are 16 possible depth values, ranging from 0 to 15, with 0 being the back-most (deepest, or furthest from the viewer), and 15 being the top or front-most depth. The window descriptors are ordered in the window descriptor list in order of the first display scan line where the window appears. For example if window A spans
lines 10 to 20, window B spans lines 12 to 18, and window C spans lines 5 to 20, the order of these descriptors in the list would be {C, A, B}. - In the hardware, which is preferably a VLSI device, there is preferably on-chip memory capable of storing a number of window descriptors. In the preferred implementation, this memory can store up to 8 window descriptors on-chip; however, the size of this memory may be made larger or smaller without loss of generality. Window descriptors are read from main memory into the on-chip descriptor memory in order from the start of the list, stopping when the on-chip memory is full or when the most recently read descriptor describes a window that is not yet visible, i.e., its starting line is a line that has a higher number than the line currently being constructed. Once a window has been displayed and is no longer visible, it may be cast out of the on-chip memory and the next descriptor in the list may be read from main memory. At any given display line, the order of the window descriptors in the on-chip memory bears no particular relation to the depth order of the windows on the screen.
- The hardware that controls the compositing of windows builds up the display in layers, starting from the back-most layer. In the preferred embodiment, the back most layer is
layer 0. The hardware performs a quick search for the back-most window descriptor that has not yet been composited, regardless of its location in the on-chip descriptor memory. In the preferred embodiment, this search is performed as follows: - All 8 window descriptors are stored on chip in such a way that the depth order numbers of all of them are available simultaneously. While the depth numbers in the window descriptors are 4-bit numbers, representing 0 to 15, the on-chip memory has storage for 5 bits for the depth number. Initially the fifth bit for each descriptor is set to 0. The depth order values are compared in a hierarchy of pair-wise comparisons, and the lower of the two depth numbers in each comparison wins the comparison. That is, at the first stage of the test, descriptor pairs {0, 1}, {2, 3}, {4, 5}, and {6, 7} are compared, where {0-7} represent the eight descriptors stored in the on-chip memory. This results in four depth numbers with associated descriptor numbers. At the next stage two pair-wise comparisons compare {(0, 1), (2, 3)} and {(4, 5), (6, 7)}.
- Each of these results in the lower depth order number and the associated descriptor number. At the third stage, one pair-wise comparison finds the smallest depth number of all, and its associated descriptor number. This number points to the descriptor in the on-chip memory with the lowest depth number, and therefore the greatest depth, and this descriptor is used first to render the associated window on the screen. Once this window has been rendered onto the screen for the current scan line, the fifth bit of the depth number in the on-chip memory is set to 1, thereby ensuring that the depth value is greater than 15; as a result this depth number will preferably never again be found to be the back-most window until all windows have been rendered on this scan line, preventing this window from being rendered twice.
- Once all the windows have been rendered for a given scan line, the fifth bits of all the on-chip depth numbers are again set to 0; descriptors that describe windows that are no longer visible on the screen are cast out of the on-chip memory; new descriptors are read from memory as required (that is, if all windows in the on-chip memory are visible, the next descriptor is read from memory, and this repeats until the most recently read descriptor is not yet visible on the screen), and the process of finding the back most descriptor and rendering windows onto the screen repeats.
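The back-most search and the fifth-bit marking described above can be sketched in C. A simple loop over the eight 5-bit depth numbers stands in for the three-stage comparator tree, which computes the same minimum; the function names are illustrative:

```c
#include <stdint.h>

#define NDESC 8

/* Find the descriptor with the smallest 5-bit depth number.  Bit 4 is
 * the "already rendered on this scan line" flag, so marked descriptors
 * (value > 15) can never win against an unmarked one.  A linear scan
 * replaces the pair-wise comparator tree; the result is the same. */
static int find_backmost(const uint8_t depth[NDESC])
{
    int best = 0;
    for (int i = 1; i < NDESC; i++)
        if (depth[i] < depth[best])
            best = i;
    return best;
}

/* Set bit 4 after rendering, pushing the depth number above 15 so the
 * descriptor is not selected again on the current scan line. */
static void mark_rendered(uint8_t depth[NDESC], int i)
{
    depth[i] |= 0x10;
}
```

At the end of each scan line the fifth bits would all be cleared again (depth[i] &= 0x0F), matching the per-line reset described in the text.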
- Referring to
FIG. 7, window descriptors are preferably sorted by the window controller and used to transfer graphics data to the display engine. Each of the window descriptors, including the window descriptor 0 through the window descriptor 7 (300 a-h), preferably contains a window layer parameter. In addition, each window descriptor is preferably associated with a window line done flag indicating that the window descriptor has been processed on a current display line. - The window controller preferably performs window sorting at each display line using the window layer parameters and the window line done flags. The window controller preferably places the graphics window that corresponds to the window descriptor with the smallest window layer parameter at the bottom, while placing the graphics window that corresponds to the window descriptor with the largest window layer parameter at the top.
- The window controller preferably transfers the graphics data for the bottom-most graphics window to be processed first. The window parameters of the bottom-most window are composed into a header packet and written to the graphics FIFO. The DMA engine preferably sends a request to the memory controller to read the corresponding graphics data for this window and send the graphics data to the graphics FIFO. The graphics FIFO is then read by the display engine to compose a display line, which is then written to graphics line buffers.
- The window line done flag is preferably set true whenever the window surface has been processed on the current display line. The window line done flag and the window layer parameter may be concatenated together for sorting. The window line done flag is added to the window layer parameter as the most significant bit during sorting such that {window line done flag[4], window layer parameter[3:0]} is a five bit binary number, a window layer value, with window line done flag as the most significant bit.
- The window controller preferably selects a window descriptor with the smallest window layer value to be processed. Since the window line done flag is preferably the most significant bit of the window layer value, any window descriptor with this flag set, i.e., any window that has been processed on the current display line, will have a higher window layer value than any of the other window descriptors that have not yet been processed on the current display line. When a particular window descriptor is processed, the window line done flag associated with that particular window descriptor is preferably set high, signifying that the particular window descriptor has been processed for the current display line.
- A
sorter 304 preferably sorts all eight window descriptors after any window descriptor is processed. The sorting may be implemented using binary tree sorting or any other suitable sorting algorithm. In binary tree sorting for eight window descriptors, the window layer values for four pairs of window descriptors are compared at a first level using four comparators to choose the window descriptor that corresponds to the lower window in each pair. In the second level, two comparators are used to select the window descriptor that corresponds to the bottom-most graphics window in each of two pairs. In the third and last level, the bottom-most graphics windows from each of the two pairs are compared against each other, preferably using only one comparator, to select the bottom window. - A
multiplexer 302 preferably multiplexes parameters from the window descriptors. The output of the sorter, i.e., the window selected to be the bottom-most, is used to select the window parameters to be sent to a direct memory access (“DMA”) module 306 to be packaged in a header packet and sent to a graphics FIFO 308. The display engine preferably reads the header packet in the graphics FIFO and processes the raw graphics data based on information contained in the header packet. - The header packet preferably includes a first header word and a second header word. Corresponding graphics data is preferably transferred as graphics data words. Each of the first header word, the second header word and the graphics data words preferably includes 32 bits of information plus a data type bit. The first header word preferably includes a 1-bit data type, a 4-bit graphics type, a 1-bit first window parameter, a 1-bit top/bottom parameter, a 2-bit alpha type, an 8-bit window alpha value and a 16-bit window color value. Table 2 shows the contents of the first header word.
-
TABLE 2
First Header Word

Bit Position   32          31-28           27             26           25-24        23-16          15-0
Data Content   data type   graphics type   first window   top/bottom   alpha type   window alpha   window color

- The 1-bit data type preferably indicates whether a 33-bit word in the FIFO is a header word or a graphics data word. A data type of 1 indicates that the associated 33-bit word is a header word, while a data type of 0 indicates that the associated 33-bit word is a graphics data word. The graphics type indicates the data format of the graphics data to be displayed in the graphics window, similar to the window format parameter in the
word 0 of the window descriptor, which is described in Table 1 above. In the preferred embodiment, when the graphics type is 1111, there is no window on the current display line, indicating that the current display line is empty. - The first window parameter of the first header word preferably indicates whether the window associated with that first header word is the first window on a new display line. The top/bottom parameter preferably indicates whether the current display line indicated in the first header word is at the top or bottom edge of the window. The alpha type preferably indicates a method of selecting an alpha value individually for each pixel in the window, similar to the alpha type in the
word 2 of the window descriptor. - The window alpha value preferably is an alpha value to be applied to the window as a whole and is similar to the window alpha value in the
word 2 of the window descriptor. The window color value preferably is the color of the window in 16-bit RGB format and is similar to the window color value in word 1 of the window descriptor. - The second header word preferably includes the 1-bit data type, a 4-bit blank pixel count, a 10-bit left edge value, a 1-bit filter enable parameter and a 10-bit window size value. Table 3 shows the contents of the second header word in the preferred embodiment.
-
TABLE 3
Second Header Word

Bit Position   32          31-28               25-16       10              9-0
Data Content   data type   blank pixel count   left edge   filter enable   window size
word 3 of the window descriptor. The left edge preferably indicates a starting location of the window on a scan line, and is similar to the window x-start value in theword 3 of the window descriptor. The filter enable parameter preferably enables a filter during a conversion of graphics data from a YUV 4:4:4 format to a YUV 4:2:2 format and is similar to the window filter enable parameter inword 3 of the window descriptor. Some YUV 4:4:4 data may contain higher frequency content than others, which may be filtered by enabling a low pass filter during a conversion to the YUV 4:2:2 format. The window size value preferably indicates the actual horizontal size of the window and is similar to the window x-size value inword 3 of the window descriptor. - When the composition of the last window of the last display line is completed, an empty-line header is preferably placed into the FIFO so that the display engine may release the display line for display.
- Packetized data structures have been used primarily in the communications world, where large amounts of data need to be transferred between hardware devices over a physical data link (e.g., wires). The idea is not known to have been used in the graphics world, where localized and small control structures need to be transferred between different design entities without requiring a large off-chip memory as a buffer. In one embodiment of the present system, header packets are used, and a general-purpose FIFO is used for routing. Routing may be accomplished in a relatively simple manner in the preferred embodiment because the write port of the FIFO is the only interface.
- In the preferred embodiment, the graphics FIFO is a synchronous 32×33 FIFO built with a static dual-port RAM with one read port and one write port. The write port preferably is synchronous to an 81 MHz memory clock. The read port is preferably synchronous to a graphics processing clock, which preferably also runs at 81 MHz but is not necessarily synchronized to the memory clock. Two graphics FIFO pointers are preferably generated, one for the read port and one for the write port. In this embodiment, each graphics FIFO pointer is a 6-bit binary counter which ranges from 000000b to 111111b, i.e., from 0 to 63. The graphics FIFO is only 32 words deep, so only 5 bits are needed to address its entries; the extra bit is preferably used to distinguish between the FIFO full and FIFO empty states.
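- One conventional way to use the extra pointer bit, consistent with the description above, is the following sketch: the FIFO is empty when the 6-bit pointers are equal, and full when the pointers differ only in the wrap bit.

```python
# Sketch of the full/empty test for a 32-deep FIFO using 6-bit pointers:
# 5 bits address the 32 entries, and the extra (wrap) bit distinguishes
# a full FIFO from an empty one when the address bits are equal.

DEPTH = 32

def fifo_empty(rd_ptr, wr_ptr):
    return rd_ptr == wr_ptr              # same address, same wrap bit

def fifo_full(rd_ptr, wr_ptr):
    # same address bits but opposite wrap bits: writer has lapped reader
    return (wr_ptr ^ rd_ptr) == DEPTH

def advance(ptr):
    return (ptr + 1) & 0x3F              # 6-bit binary counter, 0..63
```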
- The graphics data words preferably include the 1-bit data type and 32-bit graphics data bits. The data type is 0 for the graphics data words. In order to adhere to a common design practice that generally limits the size of a DMA burst into a FIFO to half the size of the FIFO, the number of graphics data words in one DMA burst preferably does not exceed 16.
- In an alternate embodiment, a graphics display FIFO is not used. In this embodiment, the graphics converter processes data from memory at the rate that it is read from memory. The memory and conversion functions are in a same clock domain. Other suitable FIFO designs may be used.
- Referring to
FIG. 8, a flow diagram illustrates a process for loading and processing window descriptors. First the system is preferably reset in step 310. Then the system in step 312 preferably checks for a vertical sync (“VSYNC”). When the VSYNC is received, the system in step 314 preferably proceeds to load window descriptors into the window controller from the external SDRAM or other suitable memory over the DMA channel for window descriptors. The window controller may store up to eight window descriptors in one embodiment of the present invention. - The system in
step 316 preferably sends a new line header indicating the start of a new display line. The system in step 320 preferably sorts the window descriptors in accordance with the process described in reference to FIG. 7. Although sorting is indicated as a step in this flow diagram, sorting actually may be a continuous process of selecting the bottom-most window, i.e., the window to be processed. The system in step 322 preferably checks to determine if a starting display line of the window is greater than the line count of the current display line. If the starting display line of the window is greater than the line count, i.e., if the current display line is above the starting display line of the bottom-most window, the current display line is a blank line. Thus, the system in step 318 preferably increments the line count and sends another new line header in step 316. The process of sending a new line header and sorting window descriptors continues as long as the starting display line of the bottom-most (in layer order) window is below the current display line. - The display engine and the associated graphics filter preferably operate in one of two modes, a field mode and a frame mode. In both modes, raw graphics data associated with graphics windows is preferably stored in frame format, including lines from both interlaced fields in the case of an interlaced display. In the field mode, the display engine preferably skips every other display line during processing. In the field mode, therefore, the system in
step 318 preferably increments the line count by two each time to skip every other line. In the frame mode, the display engine processes every display line sequentially. In the frame mode, therefore, the system in step 318 preferably increments the line count by one each time. - When the system in
step 322 determines that the starting display line of the window is not greater than the line count, the system in step 324 preferably determines from the header packet whether the window descriptor is for displaying a window or re-loading the CLUT. If the window header indicates that the window descriptor is for re-loading the CLUT, the system in step 328 preferably sends the CLUT data to the CLUT and turns on the CLUT write strobe to load the CLUT. - If the system in
step 324 determines that the window descriptor is for displaying a window, the system in step 326 preferably sends a new window header to indicate that graphics data words for a new window on the display line are going to be transferred into the graphics FIFO. Then, the system in step 330 preferably requests the DMA module to send graphics data to the graphics FIFO over the DMA channel for graphics data. In the event the FIFO does not have sufficient space to store graphics data in a new data packet, the system preferably waits until such space is made available. - When graphics data for a display line of a current window is transferred to the FIFO, the system in
step 332 preferably determines whether the last line of the current window has been transferred. If the last line has been transferred, the system in step 334 preferably sets a window descriptor done flag associated with the current window, indicating that the graphics data associated with the current window descriptor has been completely transferred. Then the system in step 336 preferably sets a new window descriptor update flag and increments a window descriptor update counter to indicate that a new window descriptor is to be copied from the external memory. - Regardless of whether the last line of the current window has been processed, the system in
step 338 preferably sets the window line done flag for the current window descriptor to signify that processing of this window descriptor on the current display line has been completed. The system in step 340 preferably checks the window line done flags associated with all eight window descriptors to determine whether they are all set, which would indicate that all the windows of the current display line have been processed. If not all window line done flags are set, the system preferably proceeds to step 320 to sort the window descriptors and repeat processing of the new bottom-most window descriptor. - If all eight window line done flags are determined to be set in
step 340, all window descriptors on the current display line have been processed. In this case, the system in step 342 preferably checks whether an all window descriptor done flag has been set to determine whether all window descriptors have been processed completely. The all window descriptor done flag is set when all window descriptors in the current frame or field have been processed completely. If the all window descriptor done flag is set, the system preferably returns to step 310 to reset and awaits another VSYNC in step 312. If not all window descriptors have been processed, the system in step 344 preferably determines if the new window descriptor update flag has been set. In the preferred embodiment, this flag would have been set in step 334 if the current window descriptor has been completely processed. - When the new window descriptor update flag is set, the system in
step 352 preferably sets up the DMA to transfer a new window descriptor from the external memory. Then the system in step 350 preferably clears the new window descriptor update flag. After the system clears the new window descriptor update flag, or when the new window descriptor update flag is not set in the first place, the system in step 348 preferably increments a line counter to indicate that the window descriptors for a next display line should be processed. The system in step 346 preferably clears all eight window line done flags to indicate that none of the window descriptors have been processed for the next display line. Then the system in step 316 preferably initiates processing of the new display line by sending a new line header to the FIFO. - In the preferred embodiment, the graphics converter in the display engine converts raw graphics data having various different formats into a common format for subsequent compositing with video and for display. The graphics converter preferably includes a state machine that changes state based on the content of the window data packet. Referring to
FIG. 9, the state machine in the graphics converter preferably controls unpacking and processing of the header packets. A first header word processing state 354 is preferably entered wherein a first window parameter of the first header word is checked (step 356) to determine if the window data packet is for a first graphics window of a new line. If the header packet is not for a first window of a new line, after the first header word is processed, the state preferably changes to a second header word processing state 362. - If the header packet is for a first graphics window of a new line, the state machine preferably enters a
clock switch state 358. In the clock switch state, the clock for a graphics line buffer which is going to store the new line switches from a display clock to a memory clock, e.g., from a 13.5 MHz clock to an 81 MHz clock. From the clock switch state, a graphics type in the first header word is preferably checked (step 360) to determine if the header packet represents an empty line. A graphics type of 1111b preferably refers to an empty line. - If the graphics type is 1111b, the state machine enters the first header
word processing state 354, in which the first header word of the next header packet is processed. If the graphics type is not 1111b, i.e., the display line is not empty, the second header word is processed. Then the state machine preferably enters a graphics content state 364 wherein words from the FIFO are checked (step 366) one at a time to verify that they are data words. The state machine preferably remains in the graphics content state as long as each word read is a data word. While in the graphics content state, if a word received is not a data word, i.e., it is a first or second header word, then the state machine preferably enters a pipeline complete state 368 and then the first header word processing state 354, where reading and processing of the next window data packet is commenced. - Referring to
FIG. 10, the display engine 58 is preferably coupled to memory over a memory interface 370 and a CLUT over a CLUT interface 372. The display engine preferably includes the graphics FIFO 132, which receives the header packets and the graphics data from the memory controller over the memory interface. The graphics FIFO preferably provides received raw graphics data to the graphics converter 134, which converts the raw graphics data into the common compositing format. During the conversion of graphics format, the RGB to YUV converter 136 and data from the CLUT over the CLUT interface 372 are used to convert RGB formatted data and CLUT formatted data, respectively. - The graphics converter preferably processes all of the window layers of each scan line in half the time, or less, of an interlaced display line, due to the need to have lines from both fields available in the SRAM for use by the graphics filter when frame mode filtering is enabled. The graphics converter operates at 81 MHz in one embodiment of the present invention, and the graphics converter is able to process up to eight windows on each scan line and up to three full-width windows.
- For example, with a 13.5 MHz display clock, if the graphics converter processes 81 Mpixels per second, it can convert three windows, each covering the width of the display, in half of the active display time of an interlaced scan line. In practice, there may be somewhat more time available, since the active display time excludes the blanking time, while the graphics converter can operate continuously.
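- The arithmetic behind this claim can be checked in a few lines, assuming the 720-pixel active line width used elsewhere in the description.

```python
# Back-of-the-envelope check of the throughput claim: at 81 Mpixels/s
# against a 13.5 MHz display clock, half of an active interlaced line
# (360 display-clock cycles) is enough time to convert three windows
# of the full 720-pixel display width.

MEMORY_RATE = 81_000_000    # converted pixels per second
DISPLAY_RATE = 13_500_000   # display pixels per second
ACTIVE_WIDTH = 720          # pixels per active display line (assumed)

speedup = MEMORY_RATE // DISPLAY_RATE           # converted pixels per display pixel time
pixels_in_half_line = (ACTIVE_WIDTH // 2) * speedup
full_width_windows = pixels_in_half_line // ACTIVE_WIDTH
```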
- Graphics pixels are preferably read from the FIFO in raw graphics format, using one of the multiple formats allowed in the present invention and specified in the window descriptor. Each pixel may occupy as little as two bits or as much as 16 bits in the preferred embodiment. Each pixel is converted to a YUVa24 format (also referred to as aYUV 4:4:2:2), in which two adjacent pixels share a UV pair while having unique Y and alpha values, and each of the Y, U, V and alpha components occupies eight bits. The conversion process is generally dependent on the pixel format type and the alpha specification method, both of which are indicated by the window descriptor for the currently active window. Preferably, the graphics converter uses the CLUT memory to convert CLUT format pixels into RGB or YUV pixels.
- Conversion of RGB pixels may require conversion to YUV, and therefore the graphics converter preferably includes a color space converter. The color space converter preferably is accurate for all coefficients; if the converter is accurate to eight or nine bits, it can accurately convert graphics with eight bits per component, such as CLUT entries of this precision or RGB24 images.
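- An illustrative RGB-to-YUV conversion is sketched below. The description does not specify the coefficient set; the ITU-R BT.601 coefficients commonly used for standard-definition video are assumed here, with a Y offset of 16 and U/V centered at 128.

```python
# Illustrative RGB-to-YUV color space conversion. ITU-R BT.601
# coefficients (the usual choice for SDTV) are assumed, since the
# text does not name the coefficient set.

def rgb_to_yuv(r, g, b):
    y = 16 + (65.738 * r + 129.057 * g + 25.064 * b) / 256
    u = 128 + (-37.945 * r - 74.494 * g + 112.439 * b) / 256
    v = 128 + (112.439 * r - 94.154 * g - 18.285 * b) / 256
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(y), clamp(u), clamp(v)
```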
- The graphics converter preferably produces one converted pixel per clock cycle, even when there are multiple graphics pixels packed into one word of data from the FIFO. Preferably the graphics processing clock, which preferably runs at 81 MHz, is used during the graphics conversion. The graphics converter preferably reads data from the FIFO whenever both of two conditions are met: the converter is ready to receive more data, and the FIFO has data ready. The graphics converter preferably receives an input from the graphics blender, the next block in the pipeline, indicating when the graphics blender is ready to receive more converted graphics data. The graphics converter may stall if the graphics blender is not ready, and as a result, the graphics converter may not be ready to receive graphics data from the FIFO.
- The graphics converter preferably converts the graphics data into a YUValpha (“YUVa”) format. This YUVa format includes YUV 4:2:2 values plus an 8-bit alpha value for every pixel, and as such it occupies 24 bits per pixel; this format is alternately referred to as aYUV 4:4:2:2. The YUV444-to-YUV422 converter 138 converts graphics data with the aYUV 4:4:4:4 format from the graphics converter into graphics data with the aYUV 4:4:2:2 format and provides the data to the graphics blender 140. The YUV444-to-YUV422 converter is preferably capable of performing low-pass filtering to filter out high frequency components when needed. The graphics converter also sends and receives clock synchronization information to and from the graphics line buffers over a clock control interface 376. - When provided with the converted graphics data, the
graphics blender 140 preferably composites graphics windows into graphics line buffers over a graphics line buffer interface 374. The graphics windows are alpha blended into blended graphics and preferably stored in graphics line buffers. - A color look-up table (“CLUT”) is preferably used to supply color and alpha values to the raw graphics data formatted to address information contents of the CLUT. For a window surface based display, there may be multiple graphics windows on the same display screen with different graphics formats. For graphics windows using a color look-up table (CLUT) format, it may be necessary to load specific color look-up table entries from external memory to on-chip memory before the graphics window is displayed.
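- The aYUV 4:4:4:4 to aYUV 4:4:2:2 step performed by the YUV444-to-YUV422 converter 138 can be sketched as follows. Modeling the optional low-pass filter as a simple average of each pixel pair's chroma is an assumption made purely for illustration, since the exact filter kernel is not given in the description.

```python
# Sketch of the aYUV 4:4:4:4 to aYUV 4:4:2:2 step: each pair of adjacent
# pixels keeps its own Y and alpha but shares one U/V pair. With the
# filter enabled, a simple average of the pair's chroma stands in for
# the low-pass filter; with it disabled, the left pixel's chroma is kept.

def yuv444_to_422(pixels, filter_enable):
    # pixels: list of (y, u, v, a) tuples, even length
    out = []
    for i in range(0, len(pixels), 2):
        (y0, u0, v0, a0), (y1, u1, v1, a1) = pixels[i], pixels[i + 1]
        if filter_enable:
            u, v = (u0 + u1) // 2, (v0 + v1) // 2  # low-pass the chroma
        else:
            u, v = u0, v0                          # drop every other sample
        out.append((y0, a0, y1, a1, u, v))         # shared UV per pixel pair
    return out
```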
- The system preferably includes a display engine that processes graphics images formatted in a plurality of formats including a color look up table (CLUT) format. The system provides a data structure that describes the graphics in a window, provides a data structure that provides an indicator to load a CLUT, sorts the data structures into a list according to the location of the window on the display, and loads conversion data into a CLUT for converting the CLUT-formatted data into a different data format according to the sequence of data structures on the list.
- In the preferred embodiment, each window on the display screen is described with a window descriptor. The same window descriptor structure is used to control CLUT loading as is used to display graphics on screen. The window descriptor preferably defines the memory starting address of the graphics contents, the x position on the display screen, the width of the window, the starting vertical display line and ending vertical display line, the window layer, etc. The same window structure parameters and corresponding fields may be used to define the CLUT loading. For example, the graphics contents memory starting address may define the CLUT memory starting address; the graphics window width parameter may define the number of CLUT entries to be loaded; the starting and ending vertical display line parameters may be used to define when to load the CLUT; and the window layer parameter may be used to define the priority of CLUT loading if several windows are displayed at the same time, i.e., on the same display line.
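- The dual interpretation of the descriptor fields described above can be sketched as a single structure read two ways. The field names are illustrative, not taken from the specification.

```python
# Sketch of how one window-descriptor structure can describe either a
# display window or a CLUT load, reinterpreting the same fields as
# described in the text. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class WindowDescriptor:
    mem_start: int      # graphics data address, or CLUT data address
    width: int          # window width, or number of CLUT entries to load
    y_start: int        # first display line, or line on which to load the CLUT
    y_end: int          # last display line (unused for a CLUT load)
    layer: int          # stacking order, or CLUT-load priority
    is_clut_load: bool

def describe(d):
    if d.is_clut_load:
        return f"load {d.width} CLUT entries from {d.mem_start:#x} on line {d.y_start}"
    return f"window at line {d.y_start}, width {d.width}, layer {d.layer}"
```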
- In the preferred embodiment, only one CLUT is used. As such, the contents of the CLUT are preferably updated to display graphics windows with CLUT formatted data that is not supported by the current content of the CLUT. One of ordinary skill in the art would appreciate that it is straightforward to use more than one CLUT and switch back and forth between them for different graphics windows.
- In the preferred embodiment, the CLUT is closely associated with the graphics converter. In one embodiment of the present invention, the CLUT consists of one SRAM with 256 entries and 32 bits per entry. In other embodiments, the number of entries and bits per entry may vary. Each entry contains three color components, in either RGB or YUV format, and an alpha component. For every CLUT-format pixel converted, the pixel data may be used as the address to the CLUT and the resulting value may be used by the converter to produce the YUVa (or alternatively RGBa) pixel value.
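- The lookup itself reduces to indexing the 256-entry table and unpacking the 32-bit entry, assuming the Y+U+V+alpha most-to-least-significant entry layout described in connection with FIG. 11.

```python
# Sketch of CLUT-format pixel conversion: the raw pixel value indexes
# the 256-entry table, and the 32-bit entry (Y, U, V, alpha, eight bits
# each, most to least significant) is unpacked into components.

def clut_lookup(clut, pixel_index):
    entry = clut[pixel_index & 0xFF]
    y = (entry >> 24) & 0xFF
    u = (entry >> 16) & 0xFF
    v = (entry >> 8) & 0xFF
    alpha = entry & 0xFF
    return y, u, v, alpha
```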
- The CLUT may be re-loaded by retrieving new CLUT data via the direct memory access module when needed. It generally takes longer to re-load the CLUT than the time available in a horizontal blanking interval. Accordingly, in the preferred embodiment, a whole scan line time is allowed to re-load the CLUT. While the CLUT is being reloaded, graphics images in non-CLUT formats may be displayed. The CLUT reloading is preferably initiated by a window descriptor that contains information regarding CLUT reloading rather than graphics window display information.
- Referring to
FIG. 11, the graphics CLUT 146 preferably includes a graphics CLUT controller 400 and a static dual-port RAM (SRAM) 402. The SRAM preferably has a size of 256×32, which corresponds to 256 entries in the graphics CLUT. Each entry in the graphics CLUT preferably has 32 bits composed of Y+U+V+alpha from the most significant bit to the least significant bit. The size of each field, including Y, U, V, and alpha, is preferably eight bits. - The graphics CLUT preferably has a write port that is synchronized to an 81 MHz memory clock and a read port that may be asynchronous to the memory clock. The read port is preferably synchronous to the graphics processing clock, which preferably runs at 81 MHz but is not necessarily synchronized to the memory clock. During a read operation, the static dual-port RAM (“SRAM”) is preferably addressed by a read address which is provided by graphics data in the CLUT images. During the read operation, the graphics data is preferably output as read
data 414 when a memory address in the CLUT containing that graphics data is addressed by a read address 412. - During write operations, the window controller preferably controls the write port with a CLUT
memory request signal 404 and a CLUT memory write signal 408. CLUT memory data 410 is also preferably provided to the graphics CLUT via the direct memory access module from the external memory. The graphics CLUT controller preferably receives the CLUT memory data and provides the received CLUT memory data to the SRAM for writing. - Referring to
FIG. 12, an exemplary timing diagram shows different signals involved during a writing operation of the CLUT. The CLUT memory request signal 418 is asserted when the CLUT is to be re-loaded. A rising edge of the CLUT memory request signal 418 is used to reset a write pointer associated with the write port. Then the CLUT memory write signal 420 is asserted to indicate the beginning of a CLUT re-loading operation. The CLUT memory data 422 is provided synchronously to the 81 MHz memory clock 416 to be written to the SRAM. The write pointer associated with the write port is updated each time the CLUT is loaded with CLUT memory data. - In the preferred embodiment, the process of reloading a CLUT is associated with the process of processing window descriptors illustrated in
FIG. 8, since CLUT re-loading is initiated by a window descriptor. As shown in steps 324 and 328 of FIG. 8, if the window descriptor is determined in step 324 to be for reloading the CLUT, the system in step 328 sends the CLUT data to the CLUT. The window descriptor for the CLUT reloading may appear anywhere in the window descriptor list. Accordingly, the CLUT reloading may take place at any time whenever CLUT data is to be updated. - Using the CLUT loading mechanism in one embodiment of the present invention, more than one window with different CLUT tables may be displayed on the same display line. In this embodiment, only the minimum required entries are preferably loaded into the CLUT, instead of loading all the entries every time. The loading of only the minimum required entries may save memory bandwidth and enables more functionality. The CLUT loading mechanism is preferably relatively flexible and easy to control, making it suitable for various applications. The CLUT loading mechanism of the present invention may also simplify hardware design, as the same state machine for the window controller may be used for CLUT loading. The CLUT preferably also shares the same DMA logic and layer/priority control logic as the window controller.
- In the preferred embodiment of the present invention, the system preferably blends a plurality of graphics images using line buffers. The system initializes a line buffer by loading the line buffer with data that represents transparent black, obtains control of a line buffer for a compositing operation, composites graphics contents into the line buffer by blending the graphics contents with the existing contents of the line buffer, and repeats the step of compositing graphics contents into the line buffer until all of the graphics surfaces for the particular line have been composited.
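- The compositing steps described above can be sketched as follows. The exact blend equation is not spelled out in the text; a conventional source-over alpha blend with 8-bit alpha is assumed here.

```python
# Sketch of compositing one window's pixels into a line buffer that was
# initialized to transparent black. A conventional source-over alpha
# blend is assumed, since the text does not give the blend equation.

TRANSPARENT_BLACK = (0, 0, 0, 0)  # (y, u, v, alpha)

def blend_pixel(src, dst):
    sy, su, sv, sa = src
    dy, du, dv, da = dst
    blend = lambda s, d: (s * sa + d * (255 - sa)) // 255
    return (blend(sy, dy), blend(su, du), blend(sv, dv),
            sa + da * (255 - sa) // 255)

def composite_window(line_buffer, window_pixels, x_start):
    # read-modify-write each covered pixel of the line buffer
    for i, src in enumerate(window_pixels):
        x = x_start + i
        line_buffer[x] = blend_pixel(src, line_buffer[x])
```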
- The graphics line buffer temporarily stores composited graphics images (blended graphics). A graphics filter preferably uses blended graphics in line buffers to perform vertical filtering and scaling operations to generate output graphics images. In the preferred embodiment, the display engine composites graphics images line by line using a clock rate that is faster than the pixel display rate, and graphics filters run at the pixel display rate. In other embodiments, multiple lines of graphics images may be composited in parallel. In still other embodiments, the line buffers may not be needed. Where line buffers are used, the system may incorporate an innovative control scheme for providing the line buffers containing blended graphics to the graphics filter and releasing the line buffers that are used up by the graphics filter.
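- The vertical filtering step performed by the graphics filter over several line buffers can be sketched as below. The actual tap weights are not given in this part of the description; a simple normalized four-tap kernel is assumed purely for illustration.

```python
# Sketch of vertical filtering across four line buffers: for each output
# pixel, one sample is taken from the same column of each buffered line
# and combined with a normalized kernel. The [1, 3, 3, 1] weights are an
# assumption; the text does not specify the kernel.

TAPS = [1, 3, 3, 1]  # assumed 4-tap kernel, sums to 8

def filter_column(lines, x):
    # lines: four line buffers (lists of 8-bit samples) feeding the filter
    acc = sum(tap * line[x] for tap, line in zip(TAPS, lines))
    return acc // sum(TAPS)
```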
- The line buffers are preferably built with synchronous static dual-port random access memory (“SRAM”) and dynamically switch their clocks between a memory clock and a display clock. Each line buffer is preferably loaded with graphics data using the memory clock and the contents of the line buffer is preferably provided to the graphics filter synchronously to the display clock. In one embodiment of the present invention, the memory clock is an 81 MHz clock used by the graphics converter to process graphics data while the display clock is a 13.5 MHz clock used to display graphics and video signals on a television screen. Other embodiments may use other clock speeds.
- Referring to
FIG. 13, the graphics line buffer preferably includes a graphics line buffer controller 500 and line buffers 504. The graphics line buffer controller 500 preferably receives memory clock buffer control signals 508 as well as display clock buffer control signals 510. The memory clock control signals and the display clock control signals are used to synchronize the graphics line buffers to the memory clock and the display clock, respectively. The graphics line buffer controller receives a clock selection vector 514 from the display engine to control which graphics line buffers are to operate in which clock domain. The graphics line buffer controller returns a clock enable vector to the display engine to indicate clock synchronization settings in accordance with the clock selection vector. - In the preferred embodiment, the line buffers 504 include seven line buffers 506 a-g. The line buffers temporarily store lines of YUVa24 graphics pixels that are used by a subsequent graphics filter. Of the seven, four line buffers are used for filtering and scaling, two allow the filter to progress by one or two lines at the end of every line, and one is used for the current compositing operation. Each line buffer may store an entire display line. Therefore, in this embodiment, the total size of the line buffers is (720 pixels/display line)*(3 bytes/pixel)*(7 lines)=15,120 bytes.
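- The buffer accounting above can be checked in a few lines:

```python
# Quick check of the line-buffer accounting: seven YUVa24 line buffers
# of 720 pixels each, split as four for filtering/scaling, two for line
# progression, and one for the current compositing operation.

PIXELS_PER_LINE = 720
BYTES_PER_PIXEL = 3   # YUVa24: 24 bits per pixel
FILTERING = 4
PROGRESSION = 2
COMPOSITING = 1

num_buffers = FILTERING + PROGRESSION + COMPOSITING
total_bytes = PIXELS_PER_LINE * BYTES_PER_PIXEL * num_buffers
```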
- Each port of the SRAM containing the line buffers is 24 bits wide to accommodate graphics data in YUVa24 format in this embodiment of the present invention. The SRAM has one read port and one write port, which are used by the graphics blender interface to perform a read-modify-write, typically once per clock cycle. In another embodiment of the present invention, an SRAM with only one port is used. In yet another embodiment, the data stored in the line buffers may be YUVa32 (4:4:4:4), RGBa32, or other formats. Those skilled in the art would appreciate that it is straightforward to vary the number of graphics line buffers, e.g., to use a different number of filter taps, a different graphics data format, or a different number of read and write ports for the SRAM.
- The line buffers are preferably controlled by the graphics line buffer controller over a line
buffer control interface 502. Over this interface, the graphics line buffer controller transfers graphics data to be loaded into the line buffers. The graphics filter reads contents of the line buffers over a graphics line buffer interface 516 and clears the line buffers by loading them with transparent black pixels prior to releasing them to be loaded with more graphics data for display. - Referring to
FIG. 14, a flow diagram of a process of using line buffers to provide composited graphics data from a display engine to a graphics filter is illustrated. After the graphics display system is reset in step 520, the system in step 522 receives a vertical sync (VSYNC) indicating a field start. Initially, all line buffers preferably operate in the memory clock domain. Accordingly, the line buffers are synchronized to the 81 MHz memory clock in one embodiment of the present invention. In other embodiments, the speed of the memory clock may be different from 81 MHz, or the line buffers may not operate in the clock domain of the main memory. The system in step 524 preferably resets all line buffers by loading them with transparent black pixels. - The system in
step 526 preferably stores composited graphics data in the line buffers. Since all buffers are cleared at every field start by the display engine to the equivalent of transparent black pixels, the graphics data may be blended the same way for any graphics window, including the first graphics window to be blended. Regardless of how many windows are composited into a line buffer, including zero windows, the result is preferably always the correct pixel data. - The system in
step 528 preferably detects a horizontal sync (HSYNC) which signifies a new display line. At the start of each display line, the graphics blender preferably receives a line buffer release signal from the graphics filter when one or more line buffers are no longer needed by the graphics filter. Since four line buffers are used with the four-tap graphics filter at any given time, one to three line buffers are preferably made available for use by the graphics blender to begin constructing new display lines in them. Once a line buffer release signal is recognized, an internal buffer usage register is updated and then clock switching is performed to enable the display engine to work on the newly released one to three line buffers. In other embodiments, the number of line buffers may be more or less than seven, and more or less than three line buffers may be released at a time. - The system in
step 534 preferably performs clock switching. Clock switching is preferably done in the memory clock domain by the display engine using a clock selection vector. Each bit of the clock selection vector preferably corresponds to one of the graphics line buffers. Therefore, in one embodiment of the present invention with seven graphics line buffers, there are seven bits in the clock selection vector. For example, a corresponding bit of logic 1 in the clock selection vector indicates that the line buffer operates in the memory clock domain while a corresponding bit of logic 0 indicates that the line buffer operates in the display clock domain.
- Since there is preferably no active graphics data at field and line starts, clock switching preferably is done at the field start and the line start to accommodate the graphics filter to access graphics data in real-time. At the field and line starts, clock switching may be done without causing glitches on the display side. Clock switching typically requires a dead cycle time. A clock enable vector indicates that the graphics line buffers are ready to synchronize to the clocks again. The clock enable vector is preferably the same size at the clock selection vector. The clock enable vector is returned to the display engine to be compared with the clock selection vector.
- During clock switching, the clock selection vector is sent by the display engine to the graphics line buffer block. The clocks are preferably disabled to ensure a glitch-free clock switching. The graphics line buffers send the clock enable vector to the display engine with the clock synchronization settings requested in the clock selection vector. The display engine compares contents of the clock selection vector and the clock enable vector. When the contents match, the clock synchronization is preferably turned on again.
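The request/acknowledge exchange above amounts to a simple match-and-enable handshake. The sketch below is a behavioral model only; `apply_settings` stands in for the line buffer block and is a hypothetical name.

```python
# Behavioral model of the glitch-free clock switch: the display
# engine disables clocks, sends the selection vector, and re-enables
# clocks only when the returned enable vector matches the request.

def clock_switch(selection_vector, apply_settings):
    """apply_settings models the line buffer block: given the
    requested selection vector, it returns the enable vector once
    the buffers have synchronized to the requested clocks."""
    clocks_enabled = False                     # clocks held off during the switch
    enable_vector = apply_settings(selection_vector)
    if enable_vector == selection_vector:      # contents match
        clocks_enabled = True                  # safe to re-enable
    return clocks_enabled
```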
- After the completion of clock switching during the video inactive region, the system in
step 536 preferably provides the graphics data in the line buffers to the graphics filter for anti-flutter filtering, sample rate conversion (SRC) and display. At the end of the current display line, the system looks for a VSYNC in step 538. If the VSYNC is detected, the current field has been completed, and therefore, the system in step 530 preferably switches clocks for all line buffers to the memory clock and resets the line buffers in step 524 for display of another field. If the VSYNC is not detected in step 538, the current display line is not the last display line of the current field. The system continues to step 528 to detect another HSYNC for processing and displaying of the next display line of the current field. - Sometimes it is desirable to scroll a graphics window softly, e.g., display text that moves from left to right or from right to left smoothly on a television screen. There are some difficulties that may be encountered in conventional methods that seek to implement horizontal soft scrolling.
- Graphics memory buffers are conventionally implemented using low-cost DRAM, SDRAM, for example. Such memory devices are typically slow and may require each burst transfer to be within a page. Smooth (or soft) horizontal scrolling, however, preferably enables the starting address to be set to any arbitrary pixel. This may conflict with the transfer of data in bursts within the well-defined pages of DRAM. In addition, complex control logic may be required to monitor if page boundaries are to be crossed during the transfer of pixel maps for each step during soft horizontal scrolling.
- In the preferred embodiment, an implementation of a soft horizontal scrolling mechanism is achieved by incrementally modifying the content of a window descriptor for a particular graphics window. The window soft horizontal scrolling mechanism preferably enables positioning the contents of graphics windows on arbitrary positions on a display line.
- In an embodiment of the present invention, the soft horizontal scrolling of graphics windows is implemented based on an architecture in which each graphics window is independently stored in a normal graphics buffer memory device (SDRAM, EDO-DRAM, DRAM) as a separate object. Windows are composed on top of each other in real time as required. To scroll a window to the left or right, a special field is defined in the window descriptor that tells how many pixels are to be shifted to the left or right.
- The system according to the present invention provides a method of horizontally scrolling a display window to the left, which includes the steps of blanking out one or more pixels at a beginning of a portion of graphics data, the portion being aligned with a start address; and displaying the graphics data starting at the first non-blanked out pixel in the portion of the graphics data aligned with the start address.
- The system according to the present invention also provides a method of horizontally scrolling a display window to the right which includes the steps of moving a read pointer to a new start address that is immediately prior to a current start address, blanking out one or more pixels at a beginning of a portion of graphics data, the portion being aligned to the new start address, and displaying the graphics data starting at the first non-blanked out pixel in the portion of the graphics data aligned with the new start address.
- In practice, each graphics window is preferably addressed using an integer word address. For example, if the memory system uses 32-bit words, then the address of the start of a window is defined to be aligned to a multiple of 32 bits, even if the first pixel that is desired to be displayed is not so aligned. Each graphics window also preferably has associated with it a horizontal offset parameter, in units of pixels, that indicates a number of pixels to be ignored, starting at the indicated starting address, before the active display of the window starts. In the preferred embodiment, the horizontal offset parameter is the blank start pixel value in
word 3 of the window descriptor. For example, if the memory system uses 32-bit words and the graphics format of a window uses 8 bits per pixel, each 32-bit word contains four pixels. In this case, the display of the window may ignore one, two or three pixels (8, 16, or 24 bits), causing an effective left shift of one, two, or three pixels. - In the embodiment illustrated by the above example, the memory system uses 32-bit words. In other embodiments, the memory system may use more or fewer bits per word, such as 16 bits per word or 64 bits per word. In addition, pixels in other embodiments may have various numbers of bits per pixel, such as 1, 2, 4, 8, 16, 24 and 32.
- Referring to
FIG. 15, in the preferred embodiment, a first pixel (e.g., the first 8 bits) 604 of a 32-bit word 600, which is aligned to the start address, is blanked out. The remaining three 8-bit pixels, other than the blanked out first pixel, are effectively shifted to the left by one pixel. Prior to blanking out, a read pointer 602 points to the first bit of the 32-bit word. After blanking out, the read pointer 602 points to the ninth bit of the 32-bit word. - Further, a shift of four pixels is implemented by changing the start address by one to the next 32-bit word. Shifts of any number of pixels are thereby implemented by a combination of adjusting the starting word address and adjusting the pixel shift amount. The same mechanism may be used for any number of bits per pixel (1, 2, 4, etc.) and any memory word size.
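The combination of word-address adjustment and pixel-shift amount reduces to a division and a remainder. The sketch below assumes 32-bit words; the function name is illustrative, not from the patent.

```python
# Decompose an arbitrary first-pixel position into a word-aligned
# start address plus a blank-pixel count (the horizontal offset
# parameter). Assumes 32-bit memory words; names are illustrative.

WORD_BITS = 32

def scroll_left(first_pixel, bits_per_pixel):
    """Return (start_word, blank_pixels) so that display begins
    `first_pixel` pixels into the window's pixel map."""
    pixels_per_word = WORD_BITS // bits_per_pixel
    start_word = first_pixel // pixels_per_word   # aligned start address (in words)
    blank_pixels = first_pixel % pixels_per_word  # pixels ignored at the start
    return start_word, blank_pixels
```

For 8 bits per pixel, a shift of five pixels becomes start word 1 with one blanked pixel, matching the combined start-address and pixel-shift adjustment described above.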
- To shift a pixel or pixels to the right, the shifting cannot be achieved simply by blanking some of the bits at the start address since any blanking at the start will simply have an effect of shifting pixels to the left. Further, the shifting to the right cannot be achieved by blanking some of the bits at the end of the last data word of a display line since display of a window starts at the start address regardless of the position of the last pixel to be displayed.
- Therefore, in one embodiment of the present invention, when the graphics display is to be shifted to the right, a read pointer pointing at the start address is preferably moved to an address that is just before the start address, thereby making that address the new start address. Then, a portion of the data word aligned with the new start address is blanked out. This provides the effect of shifting the graphics display to the right.
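A right shift of fewer than a word's worth of pixels can be sketched the same way: step the start address back one word and blank all but the trailing pixels of that word. The helper below is illustrative only and assumes 32-bit words.

```python
# Right shift by `shift` pixels: the read pointer moves to the word
# just before the current start address, and all but the last
# `shift` pixels of that word are blanked out. Assumes 32-bit words.

WORD_BITS = 32

def scroll_right(start_word, shift, bits_per_pixel):
    """Return (new_start_word, blank_pixels) for a right shift of
    0 < shift < pixels-per-word pixels."""
    pixels_per_word = WORD_BITS // bits_per_pixel
    new_start_word = start_word - 1           # word just before the start
    blank_pixels = pixels_per_word - shift    # blanked pixels in that word
    return new_start_word, blank_pixels
```

With a CLUT 2 format (2 bits per pixel, 16 pixels per word), a one-pixel right shift blanks 15 pixels (30 bits) of the preceding word.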
- For example, a memory system may use 32-bit words and the graphics format of a window may use 2 bits per pixel, e.g., a
CLUT 2 format. If the graphics display is to be shifted by a pixel to the right, the read pointer is moved to an address that is just before the start address, and that address becomes a new start address. Then, the first 30 bits of the 32-bit word that is aligned with the new start address are blanked out. In this case, blanking out of a portion of the 32-bit word that is aligned with the new start address has the effect of shifting the graphics display to the right. - Referring to
FIG. 16, a 32-bit word 610 that is aligned with the starting address is shifted to the right by one pixel. The 32-bit word 610 has a CLUT 2 format, and therefore contains 16 pixels. A read pointer 612 points at the beginning of the 32-bit word 610. To shift the pixels in the 32-bit word 610 to the right, an address that is just before the start address is made a new start address. A 32-bit data word 618 is aligned with the new start address. Then, the first 30 bits (15 pixels) 616 of the 32-bit data word 618 aligned with the new start address are blanked out. The read pointer 612 points at a new location, which is the 31st bit of the new start address. The 31st bit and the 32nd bit of the new start address may constitute a pixel 618. Insertion of the pixel 618 in front of the 16 pixels of the 32-bit data word 610 effectively shifts those 16 pixels to the right by one pixel. - TV-based applications, such as interactive program guides, enhanced TV, TV navigators, and web browsing on TV frequently require the display of text and line-oriented graphics on the display. A graphical element or glyph generally represents an image of text or graphics; the term graphical element may refer to text glyphs or graphics. In conventional methods of displaying text on TV or computer displays, graphical elements are rendered as arrays of pixels (picture elements) with two states for every pixel, i.e. the foreground and background colors.
- In some cases the background color is transparent, allowing video or other graphics to show through. Due to the relatively low resolution of most present day TVs, diagonal and round edges of graphical elements generally show a stair-stepped appearance which may be undesirable; and fine details are constrained to appear as one or more complete pixels (dots), which may not correspond well to the desired appearance. The interlaced nature of TV displays causes horizontal edges of graphical elements, or any portion of graphical elements with a significant vertical gradient, to show a “fluttering” appearance with conventional methods.
- Some conventional methods blend the edges of graphical elements with background colors in a frame buffer, by first reading the color in the frame buffer at every pixel where the graphical element will be written, combining that value with the foreground color of the graphical element, and writing the result back to the frame buffer memory. This method requires there to be a frame buffer; it requires the frame buffer to use a color format that supports such blending operations, such as RGB24 or RGB16, and it does not generally support the combination of graphical elements over full motion video, as such functionality may require repeating the read, combine and write back function of all pixels of all graphical elements for every frame or field of the video in a timely manner.
- The system preferably displays a graphical element by filtering the graphical element with a low pass filter to generate a multi-level value per pixel at an intended final display resolution and uses the multi-level values as alpha blend values for the graphical element in the subsequent compositing stage.
- In one embodiment of the present invention, a method of displaying graphical elements on televisions and other displays is used. A deep color frame buffer with, for example, 16, 24, or 32 bits per pixel, is not required to implement this method since this method is effective with as few as two bits per pixel. Thus, this method may result in a significant reduction in both the memory space and the memory bandwidth required to display text and graphics. The method preferably provides high quality when compared with conventional methods of anti-aliased text, and produces higher display quality than is available with conventional methods that do not support anti-aliased text.
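The render-then-filter step of this method can be sketched as follows. This minimal sketch assumes a 4× super-sampled bi-level glyph and a box filter; a 4×4 box sum yields coverage levels 0 to 16 (17 levels), which may be truncated to fit fewer bits. Function names are illustrative.

```python
# Box-filter a bi-level glyph rendered at 4x the display resolution
# down to one multi-level coverage value per display pixel. The sum
# over each 4x4 block is 0..16; min(level, 15) truncates the 17
# levels to 16 so the value fits in 4 bits.

def box_filter_4x4(glyph):
    """glyph: 2-D list of 0/1 values whose dimensions are multiples
    of 4. Returns per-display-pixel levels in 0..16."""
    height, width = len(glyph) // 4, len(glyph[0]) // 4
    return [[sum(glyph[4 * y + dy][4 * x + dx]
                 for dy in range(4) for dx in range(4))
             for x in range(width)]
            for y in range(height)]

def to_4bit(level):
    """Truncate the 17 coverage levels to a 4-bit alpha value."""
    return min(level, 15)
```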
- Referring to
FIG. 17, a flow diagram illustrates a process of providing very high quality display of graphical elements in one embodiment of the present invention. First, the bi-level graphical elements are filtered by the system in step 652. The graphical elements are preferably initially rendered by the system in step 650 at a significantly higher resolution than the intended final display resolution, for example, four times the final resolution in both horizontal and vertical axes. The filter may be any suitable low pass filter, such as a “box” filter. The result of the filtering operation is a multi-level value per pixel at the intended display resolution. - The number of levels may be reduced to fit the number of bits used in the succeeding steps. The system in
step 654 determines whether the number of levels is to be reduced by reducing the number of bits used. If the system determines that the number of levels is to be reduced, the system in step 656 preferably reduces the number of bits. For example, the result of box-filtering 4×4 super-sampled graphical elements normally results in 17 possible levels; these may be converted through truncation or other means to 16 levels to match a 4-bit representation, or eight levels to match a 3-bit representation, or four levels to match a 2-bit representation. The filter may also provide a required vertical axis low pass function to produce an anti-flutter effect for interlaced display. - In
step 658, the system preferably uses the resulting multi-level values, either with or without reduction in the number of bits, as alpha blend values, which are preferably pixel alpha component values, for the graphical elements in a subsequent compositing stage. The multi-level graphical element pixels are preferably written into a graphics display buffer where the values are used as alpha blend values when the display buffer is composited with other graphics and video images. - In an alternate embodiment, the display buffer is defined to have a constant foreground color consistent with the desired foreground color of the text or graphics, and the value of every pixel in the display buffer is defined to be the alpha blend value for that pixel. For example, an Alpha-4 format specifies four bits per pixel of alpha blend value in a graphics window, where the 4 bits define alpha blend values of 0/16, 1/16, 2/16, . . . , 13/16, 14/16, and 16/16. The
value 15/16 is skipped in this example in order to obtain the endpoint values of 0 and 16/16 (1) without requiring the use of an additional bit. In this example format, the display window has a constant foreground color which is specified in the window descriptor. - In another alternate embodiment, the alpha blend value per pixel is specified for every pixel in the graphical element by choosing a CLUT index for every pixel, where the CLUT entry associated with every index contains the desired alpha blend value as part of the CLUT contents. For example, a graphical element with a constant foreground color and 4 bits of alpha per pixel can be encoded in a
CLUT 4 format such that every pixel of the display buffer is defined to be a 4 bit CLUT index, and each of the associated 16 CLUT entries has the appropriate alpha blend value (0/16, 1/16, 2/16, . . . , 14/16, 16/16) as well as the (same) constant foreground color in the color portion of the CLUT entries. - In yet another alternate embodiment, the alpha per pixel values are used to form the alpha portion of color+alpha pixels in the display buffer, such as alphaRGB(4,4,4,4) with 4 bits for each of alpha, Red, Green, and Blue, or alphaRGB32 with 8 bits for each component. This format does not require the use of a CLUT.
- In still another alternate embodiment, the graphical element may or may not have a constant foreground color. The various foreground colors are processed using a low-pass filter as described earlier, and the outline of the entire graphical element (including all colors other than the background) is separately filtered also using a low pass filter as described. The filtered foreground color is used as either the direct color value in, e.g., an alphaRGB format (or other color space, such as alphaYUV) or as the color choice in a CLUT format, and the result of filtering the outline is used as the alpha per pixel value in either a direct color format such as alphaRGB or as the choice of alpha value per CLUT entry in a CLUT format.
- The graphical elements are displayed on the TV screen by compositing the display buffer containing the graphical elements with optionally other graphics and video contents while blending the subject display buffer with all layers behind it using the alpha per pixel values created in the preceding steps. Additionally, the translucency or opacity of the entire graphical element may be varied by specifying the alpha value of the display buffer via such means as the window alpha value that may be specified in a window descriptor.
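The blend described above, per-pixel alpha combined with a window alpha over the layers behind, can be sketched with normalized values. Real hardware uses 4- or 8-bit fixed-point alpha; the float form and names here are for illustration only.

```python
# One-pixel alpha blend: the effective alpha is the product of the
# glyph's filtered per-pixel alpha and the whole-window alpha, and
# the foreground color is mixed over the background accordingly.

def composite(fg_color, pixel_alpha, window_alpha, bg_color):
    """Blend a constant foreground color over a background pixel.
    Colors are tuples of 0.0..1.0 components."""
    a = pixel_alpha * window_alpha   # effective blend factor
    return tuple(a * f + (1.0 - a) * b for f, b in zip(fg_color, bg_color))
```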
- When a composite video signal (analog video) is received into the system, it is preferably digitized and separated into YUV (luma and chroma) components for processing. Samples taken for YUV are preferably synchronized to a display clock for compositing with graphics data at the video compositor. Mixing or overlaying of graphics with decoded analog video may require synchronizing the two image sources exactly. Undesirable artifacts such as jitter may be visible on the display unless a synchronization mechanism is implemented to correctly synchronize the samples from the analog video to the display clock. In addition, analog video often does not adhere strictly to the television standards such as NTSC and PAL. For example, analog video which originates in VCRs may have synchronization signals that are not aligned with chroma reference signals and also may have inconsistent line periods. Thus, the synchronization mechanism preferably should correctly synchronize samples from non-standard analog videos as well.
- The system, therefore, preferably includes a video synchronizing mechanism that includes a first sample rate converter for converting a sampling rate of a stream of video samples to a first converted rate, a filter for processing at least some of the video samples with the first converted rate, and a second sample rate converter for converting the first converted rate to a second converted rate.
- Referring to
FIG. 18, the video decoder 50 preferably samples and synchronizes the analog video input. The video receiver preferably receives an analog video signal 706 into an analog-to-digital converter (ADC) 700 where the analog video is digitized. The digitized analog video 708 is preferably sub-sampled by a chroma-locked sample rate converter (SRC) 70. A sampled video signal 710 is provided to an adaptive 2H comb filter/chroma demodulator/luma processor 702 to be separated into YUV (luma and chroma) components. In the 2H comb filter/chroma demodulator/luma processor 702, the chroma components are demodulated. In addition, the luma component is preferably processed by noise reduction, coring and detail enhancement operations. The adaptive 2H comb filter provides the sampled video 712, which has been separated into luma and chroma components and processed, to a line-locked SRC 704. The luma and chroma components of the sampled video are preferably sub-sampled once again by the line-locked SRC and the sub-sampled video 714 is provided to a time base corrector (TBC) 72. The time base corrector preferably provides an output video signal 716 that is synchronized to a display clock of the graphics display system. In one embodiment of the present invention, the display clock runs at a nominal 13.5 MHz. - The synchronization mechanism preferably includes the chroma-locked
SRC 70, the line-locked SRC 704 and the TBC 72. The chroma-locked SRC outputs samples that are locked to the chroma subcarrier and its reference bursts while the line-locked SRC outputs samples that are locked to horizontal syncs. In the preferred embodiment, samples of analog video are over-sampled by the ADC 700 and then down-sampled by the chroma-locked SRC to four times the chroma sub-carrier frequency (Fsc). The down-sampled samples are down-sampled once again by the line-locked SRC to line-locked samples with an effective sample rate of nominally 13.5 MHz. The time base corrector is used to align these samples to the display clock, which runs nominally at 13.5 MHz. - Analog composite video has a chroma signal frequency interleaved in frequency with the luma signal. In an NTSC standard video, this chroma signal is modulated on to the Fsc of approximately 3.579545 MHz, or exactly 227.5 times the horizontal line rate. The luma signal covers a frequency span of zero to approximately 4.2 MHz. One method for separating the luma from the chroma is to sample the video at a rate that is a multiple of the chroma sub-carrier frequency, and use a comb filter on the sampled data. This method generally imposes a limitation that the sampling frequency is a multiple of the chroma sub-carrier frequency (Fsc).
- Using such a chroma-locked sampling frequency generally imposes significant costs and complications on the implementation, as it may require the creation of a sample clock of the correct frequency, which itself may require a stable, low noise controllable oscillator (e.g. a VCXO) in a control loop that locks the VCXO to the chroma burst frequency. Different sample frequencies are typically required for different video standards with different chroma subcarrier frequencies. Sampling at four times the subcarrier frequency, i.e. 14.318 MHz for NTSC standard and 17.72 MHz for PAL standard, generally requires more anti-alias filtering before digitization than is required when sampling at higher frequencies such as 27 MHz. In addition, such a chroma-locked clock frequency is often unrelated to the other frequencies in a large scale digital device, requiring multiple clock domains and asynchronous internal interfaces.
- In the preferred embodiment, however, the samples are not taken at a frequency that is a multiple of Fsc. Rather, in the preferred embodiment, an integrated circuit takes samples of the analog video at a frequency that is essentially arbitrary and that is greater than four times the Fsc (4Fsc=14.318 MHz). The sampling frequency preferably is 27 MHz and preferably is not locked to the input video signal in phase or frequency. The sampled video data then goes through the chroma-locked SRC that down-samples the data to an effective sampling rate of 4Fsc. This and all subsequent operations are preferably performed in digital processing in a single integrated circuit.
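The effective down-conversion from 27 MHz to 4Fsc can be checked with exact arithmetic. The 315/88 MHz value used here is the standard exact NTSC subcarrier frequency (approximately the 3.579545 MHz cited above); the ratio reduces to exactly 35/66. The function name is illustrative.

```python
# Exact-arithmetic check of the effective down-sampling ratio from
# the 27 MHz sampling clock to four times the NTSC chroma subcarrier.
# 315/88 MHz is the exact NTSC subcarrier; 4 * (315/88) / 27 reduces
# to exactly 35/66.

from fractions import Fraction

NTSC_FSC_MHZ = Fraction(315, 88)   # exact NTSC chroma subcarrier
SAMPLE_CLOCK_MHZ = 27

def chroma_locked_ratio():
    """Effective output rate (4Fsc) divided by the input sample rate."""
    return (4 * NTSC_FSC_MHZ) / SAMPLE_CLOCK_MHZ
```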
- The effective sample rate of 4Fsc does not require a clock frequency that is actually at 4Fsc, rather the clock frequency can be almost any higher frequency, such as 27 MHz, and valid samples occur on some clock cycles while the overall rate of valid samples is equal to 4Fsc. The down-sampling (decimation) rate of the SRC is preferably controlled by a chroma phase and frequency tracking module. The chroma phase and frequency tracking module looks at the output of the SRC during the color burst time interval and continuously adjusts the decimation rate in order to align the color burst phase and frequency. The chroma phase and frequency tracking module is implemented as a logical equivalent of a phase locked loop (PLL), where the chroma burst phase and frequency are compared in a phase detector to the effective sample rate, which is intended to be 4Fsc, and the phase and frequency error terms are used to control the SRC decimation rate.
- The decimation function is applied to the incoming sampled video, and therefore the decimation function controls the chroma burst phase and frequency that is applied to the phase detector. This system is a closed feedback loop (control loop) that functions in much the same way as a conventional PLL, and its operating parameters are readily designed in the same way as those of PLLs.
- Referring to
FIG. 19, the chroma-locked SRC 70 preferably includes a sample rate converter (SRC) 730, a chroma tracker 732 and a low pass filter (LPF) 734. The SRC 730 is preferably a polyphase filter having time-varying coefficients. The SRC is preferably implemented with 35 phases and a conversion ratio of 35/66. The SRC 730 preferably interpolates by exactly 35 and decimates by (66+epsilon), i.e. the decimation rate is preferably adjustable within a range determined by the minimum and maximum values of epsilon, generally a small range. Epsilon is a first adjustment value, which is used to adjust the decimation rate of a first sample rate converter, i.e., the chroma-locked sample rate converter. - Epsilon is preferably generated by the control loop comprising the
chroma tracker 732 and the LPF 734, and it can be negative, positive or zero. When the output samples of the SRC 730 are exactly frequency and phase locked to the color sub-carrier then epsilon is zero. The chroma tracker tracks phase and frequency of the chroma bursts and compares them against an expected pattern. In one embodiment of the present invention, the conversion rate of the chroma-locked SRC is adjusted so that, in effect, the SRC samples the chroma burst at exactly four times per chroma sub-carrier cycle. The SRC takes the samples at phases 0 degrees, 90 degrees, 180 degrees and 270 degrees of the chroma sub-carrier cycle. This means that a sample is taken at every cycle of the color sub-carrier at a zero crossing, a positive peak, zero crossing and a negative peak, (0, +1, 0, −1). If the pattern obtained from the samples is different from (0, +1, 0, −1), this difference is detected and the conversion ratio needs to be adjusted inside the control loop. - When the output samples of the chroma-locked SRC are lower in frequency or behind in phase, e.g., the pattern looks like (−1, 0, +1, 0), then the
chroma tracker 732 will make epsilon negative. When epsilon is negative, the sample rate conversion ratio is higher than the nominal 35/66, and this has the effect of increasing the frequency or advancing the phase of samples at the output of the chroma-locked SRC. When the output samples of the chroma-locked SRC are higher in frequency or leading in phase, e.g., the pattern looks like (+1, 0, −1, 0), then the chroma tracker 732 will make epsilon positive. When epsilon is positive, the sample rate conversion ratio is lower than the nominal 35/66, and this has the effect of decreasing the frequency or retarding the phase of samples out of the chroma-locked SRC. The chroma tracker provides an error signal 736 to the LPF 734, which filters out high frequency components and provides the filtered error signal to the SRC to complete the control loop. - The sampling clock may run at the system clock frequency or at the clock frequency of the destination of the decoded digital video. If the sampling clock is running at the system clock, the cost of the integrated circuit may be lower than one that has a system clock and a sub-carrier locked video decoder clock. A one-clock integrated circuit may also cause less noise or interference to the analog-to-digital converter on the IC. The system is preferably all digital, and does not require an external crystal or a voltage controlled oscillator.
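The pattern-based sign decision described above can be modeled in a few lines. This toy version matches only the three exact patterns mentioned; a real tracker derives a continuous phase and frequency error that the loop filter smooths, and all names here are illustrative.

```python
# Toy model of the chroma tracker decision: one chroma-burst cycle
# sampled at 4Fsc should look like (0, +1, 0, -1). A lagging pattern
# makes epsilon negative (raising the conversion ratio above the
# nominal 35/66); a leading pattern makes it positive.

EXPECTED = (0, 1, 0, -1)
LAGGING = (-1, 0, 1, 0)
LEADING = (1, 0, -1, 0)

def epsilon_sign(burst):
    """Sign of the decimation-rate adjustment for one observed
    4-sample burst pattern: -1, 0, or +1."""
    if burst == EXPECTED:
        return 0    # locked: no adjustment
    if burst == LAGGING:
        return -1   # behind in phase/frequency: advance
    if burst == LEADING:
        return 1    # ahead in phase/frequency: retard
    return 0        # intermediate patterns left to the loop filter
```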
- Referring to
FIG. 20, an alternate embodiment of the chroma-locked SRC 70 preferably varies the sampling rate while the conversion rate is held constant. A voltage controlled oscillator (e.g., VCXO) 760 varies the sampling rate by providing a sampling frequency signal 718 to the ADC 700. The conversion rate in this embodiment is fixed at 35/66 in the SRC 750, which is the ratio between four times the chroma sub-carrier frequency and 27 MHz. - In this embodiment, the chroma burst signal at the output of the chroma-locked SRC is compared with the expected chroma burst signal in a
chroma tracker 752. The error signals 756 from the comparison between the converted chroma burst and the expected chroma burst are passed through a low pass filter 754 and the filtered error signals 758 are provided to the VCXO 760 to control the oscillation frequency of the VCXO. The oscillation frequency of the VCXO changes in response to the voltage level of the provided error signals. Use of input voltage to control the oscillation frequency of a VCXO is well known in the art. The system as described here is a form of a phase locked loop (PLL), the design and use of which is well known in the art. - After the completion of chroma-luma separation and other processing of the chroma and luma components, the samples with the effective sample rate of 4 Fsc (i.e. 4 times the chroma subcarrier frequency) are preferably decimated to samples with a sample rate of nominally 13.5 MHz through the use of a second sample rate converter. Since this sample rate is less than the electrical clock frequency of the digital integrated circuit in the preferred embodiment, only some clock cycles carry valid data. In this embodiment, the sample rate is preferably converted to 13.5 MHz, and is locked to the horizontal line rate through the use of horizontal sync signals. Thus, the second sample rate converter is a line-locked sample rate converter (SRC).
- The line-locked sample rate converter converts the current line of video to a constant (Pout) number of pixels. This constant number of pixels Pout is normally 858 for ITU-R BT.601 applications and 780 for NTSC square pixel applications. The current line of video may have a variable number of pixels (Pin). In order to do this conversion from a chroma-locked sample rate, the following steps are performed. The number of input samples Pin of the current line of video is accurately measured. This line measurement is used to calculate the sample rate conversion ratio needed to convert the line to exactly Pout samples. An adjustment value to the sample rate conversion ratio is passed to a sample rate converter module in the line-locked SRC to implement the calculated sample rate conversion ratio for the current line. The sample conversion ratio is calculated only once for each line. Preferably, the line-locked SRC also scales YUV components to the proper amplitudes required by ITU-R BT.601.
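The once-per-line ratio calculation reduces to a single division of the measured input count by the fixed output count; the fixed-point details of a real SRC module are omitted and the function name is illustrative.

```python
# Per-line conversion ratio for the line-locked SRC: map the measured
# Pin input samples of the current line to a constant Pout output
# pixels (858 for ITU-R BT.601, 780 for NTSC square pixel).

def line_conversion_ratio(pin_samples, pout_pixels=858):
    """Decimation ratio that converts one line of Pin samples to
    exactly Pout output samples; recomputed once per line."""
    return pin_samples / pout_pixels
```

A nominal NTSC line of 910 chroma-locked samples thus converts with ratio 910/858.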
- The number of samples detected in a horizontal line may be more or less if the input video is a non-standard video. For example, if the incoming video is from a VCR, and the sampling rate is four times the color sub-carrier frequency (4Fsc), then the number of samples taken between two horizontal syncs may be more or less than 910, where 910 is the number of samples per line that is obtained when sampling NTSC standard video at a sampling frequency of 4Fsc. For example, the horizontal line time from a VCR may vary if the video tape has been stretched.
- The horizontal line time may be accurately measured by detecting two successive horizontal syncs. Each horizontal sync is preferably detected at the leading edge of the horizontal sync. In other embodiments, the horizontal syncs may be detected by other means. For example, the shape of the entire horizontal sync may be looked at for detection. In the preferred embodiment, the sample rate for each line of video has been converted to four times the color sub-carrier frequency (4Fsc) by the chroma-locked sample rate converter. The measurement of the horizontal line time is preferably done at two levels of accuracy, an integer pixel accuracy and a sub-sample accuracy.
- The integer pixel accuracy is preferably obtained by counting the integer number of pixels that occur between two successive sync edges. The sync edge is presumed to be detected when the data crosses some threshold value. For example, in one embodiment of the present invention, the analog-to-digital converter (ADC) is a 10-bit ADC, i.e., it converts an input analog signal into a digital signal with 2^10 = 1024 scale levels (0 to 1023). In this embodiment, the threshold value is chosen to represent an appropriate slicing level for horizontal sync in the 10-bit number system of the ADC; a typical value for this threshold is 128. The negative peak (or sync tip) of the digitized video signal normally occurs during the sync pulses. The threshold level would normally be set such that it occurs at approximately the mid-point of the sync pulses. The threshold level may be automatically adapted by the video decoder, or it may be set explicitly via a register or other means.
- The horizontal sync tracker preferably detects the horizontal sync edge to a sub-sample accuracy of (1/16)th of a pixel in order to more accurately calculate the sample rate conversion. The incoming samples generally do not include a sample taken exactly at the threshold value for detecting horizontal sync edges. The horizontal sync tracker preferably detects two successive samples, one of which has a value lower than the threshold value and the other of which has a value higher than the threshold value.
- After the integer pixel accuracy is determined (the sync edge has been detected), the sub-pixel calculation is preferably started. The sync edge of a horizontal sync is generally not a vertical line, but has a slope. In order to remove noise, the video signal goes through a low pass filter. The low pass filter generally decreases the sharpness of the transition, i.e., the low pass filter may make the transition from a low level to a high level last longer.
- The horizontal sync tracker preferably uses a sub-sample interpolation technique to obtain an accurate measurement of sync edge location by drawing a straight line between the two successive samples of the horizontal sync signal just above and just below the presumed threshold value to determine where the threshold value has been crossed.
- Three values are preferably used to determine the sub-sample accuracy. The three values are the threshold level (T), the value of the sample that crossed the threshold level (V2) and the value of the previous sample that did not cross the threshold level (V1). The sub-sample value is the ratio (T−V1)/(V2−V1). In the present embodiment a division is not performed. The difference (V2−V1) is divided by 16 to make a variable called DELTA. V1 is then incremented by DELTA until it reaches the threshold T. The number of times that DELTA is added to V1 in order to reach the threshold (T) is the sub-pixel accuracy in terms of 1/16th of a pixel.
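The repeated-DELTA procedure described above can be sketched in Python. This is a hypothetical illustration of the technique, not the patent's hardware; the function name is an assumption, and the integer division by 16 stands in for a hardware right-shift by 4.

```python
def subpixel_crossing(v1, v2, threshold):
    """Estimate where the sync edge crosses the threshold, in 1/16ths
    of a pixel, using repeated DELTA addition instead of a division.
    v1: sample just below the threshold; v2: the next sample, above it."""
    delta = (v2 - v1) // 16   # DELTA = (V2 - V1)/16, as a right-shift by 4
    if delta == 0:            # degenerate edge; treat crossing as immediate
        return 0
    count = 0
    v = v1
    while v < threshold:      # add DELTA until the threshold is reached
        v += delta
        count += 1
    return count              # sub-pixel offset in 1/16ths of a pixel
```

With the worked example from the text (T=146, V1=140, V2=156), DELTA is 1 and the crossing is found 6 sixteenths past the first sample.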
- For example, if the threshold value T is presumed to be 146 scale levels, and if the values V1 and V2 of the two successive samples are 140 and 156, respectively, the DELTA is calculated to be 1, and the crossing of the threshold value is determined through interpolation to be six DELTAs away from the first of the two successive samples. Thus, if the sample with
value 140 is the nth sample and the sample with the value 156 is the (n+1)th sample, the (n+6/16)th sample would have had the threshold value. Since the horizontal sync preferably is presumed to be detected at the threshold value of the sync edge, a fractional sample, i.e., 6/16 of a sample, is added to the number of samples counted between two successive horizontal syncs. - In order to sample rate convert the current number of input pixels Pin to the desired output pixels Pout, the sample rate converter module has a sample rate conversion ratio of Pout/Pin. The sample rate converter module in the preferred embodiment of the line-locked sample rate converter is a polyphase filter with time-varying coefficients. There is a fixed number of phases (I) in the polyphase filter. In the preferred embodiment, the number of phases (I) is 33. The control for the polyphase filter is the decimation rate (d_act) and a reset phase signal. The line measurement Pin is sent to a module that converts it to a decimation rate d_act such that I/d_act (33/d_act) is equal to Pout/Pin. The decimation rate d_act is calculated as follows: d_act=(I/Pout)*Pin.
- If the input video line has the standard duration and the sampling clock is exactly four times the color sub-carrier frequency, then Pin will be exactly 910 samples. This gives a sample rate conversion ratio of (858/910). In the present embodiment the number of phases (the interpolation rate) is 33. Therefore the nominal decimation rate for NTSC is 35 (=(33/858)*910). This decimation rate d_act may then be sent to the sample rate converter module. A reset phase signal is sent to the sample rate converter module after the sub-sample calculation has been done and the sample rate converter module starts processing the current video line. In the preferred embodiment, only the active portion of video is processed and sent on to a time base corrector. This results in a savings of memory needed. Only 720 samples of active video are produced as ITU-R BT.601 output sample rates. In other embodiments, the entire horizontal line may be processed and produced as output.
- In the preferred embodiment, the calculation of the decimation rate d_act is done somewhat differently from the equation d_act=(I/Pout)*Pin. The results are the same, but there are savings to hardware. The current line length, Pin, will have a relatively small variance with respect to the nominal line length. Pin is nominally 910. It typically varies by less than 62. For NTSC, this variation is less than 5 microseconds. The following calculation is done: d_act=((I/Pout)*(Pin−Pin_nominal))+d_act_nominal
- This preferably results in a hardware savings for the same level of accuracy. The difference (Pin−Pin_nominal) may be represented by fewer bits than are required to represent Pin so a smaller multiplier can be used. For NTSC, d_act_nominal is 35 and Pin_nominal is 910. The value (I/Pout)*(Pin−Pin_nominal) may now be called a delta_dec (delta decimation rate) or a second adjustment value.
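As a sketch, the direct and the hardware-friendly incremental forms of the decimation-rate calculation can be compared numerically. The constants are the NTSC values given in the text; the function names are illustrative, not from the patent.

```python
I = 33               # number of polyphase filter phases (interpolation rate)
P_OUT = 858          # desired output samples per NTSC line
PIN_NOMINAL = 910    # nominal input samples per line at 4Fsc
D_ACT_NOMINAL = 35   # nominal decimation rate: (33/858)*910 = 35

def d_act_direct(p_in):
    """Direct form: d_act = (I/Pout) * Pin."""
    return (I / P_OUT) * p_in

def d_act_incremental(p_in):
    """Hardware-friendly form: multiply only the small line-length
    difference, so a narrower multiplier suffices.
    d_act = (I/Pout)*(Pin - Pin_nominal) + d_act_nominal."""
    delta_dec = (I / P_OUT) * (p_in - PIN_NOMINAL)
    return D_ACT_NOMINAL + delta_dec
```

Both forms agree for any line length; a line longer than 910 samples yields a positive delta_dec (decimation rate above 35), and a shorter line yields a negative one, keeping the output at 858 samples per line.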
- Therefore, in order to maintain the output sample rate of 858 samples per horizontal line, the conversion rate applied preferably is 33/(35+delta_dec) where the samples are interpolated by 33 and decimated by (35+delta_dec). A horizontal sync tracker preferably detects horizontal syncs, accurately counts the number of samples between two successive horizontal syncs and generates delta_dec.
- If the number of samples between two successive horizontal syncs is greater than 910, the horizontal sync tracker generates a positive delta_dec to keep the output sample rate at 858 samples per horizontal line. On the other hand, if the number of samples between two successive horizontal syncs is less than 910, the horizontal sync tracker generates a negative delta_dec to keep the output sample rate at 858 samples per horizontal line.
- For PAL standard video, the horizontal sync tracker generates the delta_dec to keep the output sample rate at 864 samples per horizontal line.
- In summary, the position of each horizontal sync pulse is determined to sub-pixel accuracy by interpolating between two successive samples, one of which is immediately below the threshold value and the other immediately above it. The number of samples between two successive horizontal sync pulses is preferably calculated to sub-sample accuracy by determining the positions of the two successive horizontal sync pulses, both to sub-pixel accuracy. When calculating delta_dec, the horizontal sync tracker preferably uses the difference between 910 and the number of samples between two successive horizontal syncs to reduce the amount of hardware needed.
- In an alternate embodiment, the decimation rate adjustment value, delta_dec, which is calculated for each line, preferably goes through a low pass filter before going to the sample rate converter module. One of the benefits of this method is filtering of variations in the line lengths of adjacent lines where the variations may be caused by noise that affects the accuracy of the measurement of the sync pulse positions.
- In another alternative embodiment, the input sample clock is not free running, but is instead line-locked to the input analog video, preferably 27 MHz. The chroma-locked sample rate converter converts the 27 MHz sampled data to a sample rate of four times the color sub-carrier frequency. The analog video signal is demodulated to luma and chroma component video signals, preferably using a comb filter. The luma and chroma component video signals are then sent to the line-locked sample rate converter where they are preferably converted to a sample rate of 13.5 MHz. In this embodiment the 13.5 MHz sample rate at the output may be exactly one-half of the 27 MHz sample rate at the input. The conversion ratio of the line-locked sample rate converter is preferably exactly one-half of the inverse of the conversion ratio performed by the chroma-locked sample rate converter.
- Referring to
FIG. 21 , the line-locked SRC 704 preferably includes an SRC 770 which preferably is a polyphase filter with time varying coefficients. The number of phases is preferably fixed at 33 while the nominal decimation rate is 35. In other words, the conversion ratio used is preferably 33/(35+delta_dec) where delta_dec may be positive or negative. The delta_dec is a second adjustment value, which is used to adjust the decimation rate of the second sample rate converter. Preferably, the actual decimation rate and phase are automatically adjusted for each horizontal line so that the number of samples per horizontal line is 858 (720 active Y samples and 360 active U and V samples) and the phase of the active video samples is aligned properly with the horizontal sync signals. - In the preferred embodiment, the decimation (down-sampling) rate of the SRC is preferably controlled by a
horizontal sync tracker 772. Preferably, the horizontal sync tracker adjusts the decimation rate once per horizontal line in order to result in a correct number and phase of samples in the interval between horizontal syncs. The horizontal sync tracker preferably provides the adjusted decimation rate to the SRC 770 to adjust the conversion ratio. The decimation rate is preferably calculated to achieve a sub-sample accuracy of 1/16. Preferably, the line-locked SRC 704 also includes a YUV scaler 780 to scale YUV components to the proper amplitudes required by ITU-R BT.601. - The time base corrector (TBC) preferably synchronizes the samples having the line-locked sample rate of nominally 13.5 MHz to the display clock that runs nominally at 13.5 MHz. Since the samples at the output of the TBC are synchronized to the display clock, passthrough video may be provided to the video compositor without being captured first.
- To produce samples at the sample rate of nominally 13.5 MHz, the composite video may be sampled in any conventional way with a clock rate that is generally used in the art. Preferably, the composite video is sampled initially at 27 MHz, down sampled to the sample rate of 14.318 MHz by the chroma-locked SRC, and then down sampled to the sample rate of nominally 13.5 MHz by the line-locked SRC. During conversion of the sample rates, the video decoder uses for its timing the 27 MHz clock that was used for input sampling. The 27 MHz clock, being free-running, is locked neither to the line rate nor to the chroma frequency of the incoming video.
- In the preferred embodiment, the decoded video samples are stored in a FIFO the size of one display line of active video at 13.5 MHz, i.e., 720 samples with 16 bits per sample or 1440 bytes. Thus, the maximum delay amount of this FIFO is one display line time with a normal, nominal delay of one-half a display line time. In the preferred embodiment, video samples are outputted from the FIFO at the display clock rate that is nominally 13.5 MHz. Except for vertical syncs of the input video, the display clock rate is unrelated to the timing of the input video. In alternate embodiments, larger or smaller FIFOs may be used.
- Even though the effective sample rate and the display clock rate are both nominally 13.5 MHz the rate of the sampled video entering the FIFO and the display rate are generally different. This discrepancy is due to differences between the actual frequencies of the effective input sample rate and the display clock. For example, the effective input sample rate is nominally 13.5 MHz but it is locked to operate at 858 times the line rate of the video input, while the display clock operates nominally at 13.5 MHz independently of the line rate of the video input.
- Since the rates of data entering and leaving the FIFO are typically different, the FIFO will tend to either fill up or become empty, depending on relative rates of the entering and leaving data. In one embodiment of the present invention, video is displayed with an initial delay of one-half a horizontal line time at the start of every field. This allows the input and output rates to differ up to the point where the input and output horizontal phases may change by up to one-half a horizontal line time without causing any glitches at the display.
- The FIFO is preferably filled up to approximately one-half full during the first active video line of every field prior to taking any output video. Thus, the start of each display field follows the start of every input video field by a fixed delay that is approximately equal to one-half the amount of time for filling the entire FIFO. As such, the initial delay at the start of every field is one-half a horizontal line time in this embodiment, but the initial delay may be different in other embodiments.
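A minimal behavioral model of the half-fill policy described above can be written as follows. This is a sketch under stated assumptions: the class and method names are hypothetical, a simple list stands in for the FIFO, and real hardware would use read/write pointers in separate clock domains rather than a shared list.

```python
class TimeBaseCorrector:
    """Behavioral sketch of a TBC FIFO: output is withheld until the
    FIFO reaches half capacity at the start of a field, giving a fixed
    initial delay of roughly half the FIFO's total delay."""

    def __init__(self, capacity=1440):  # e.g. one 720-sample line, 2 bytes/sample
        self.capacity = capacity
        self.fifo = []
        self.outputting = False

    def write(self, sample):
        """Input side: samples arrive at the line-locked input rate."""
        self.fifo.append(sample)
        if not self.outputting and len(self.fifo) >= self.capacity // 2:
            self.outputting = True  # half full: display timing may start

    def read(self):
        """Output side: samples leave at the display clock rate; None
        is returned before the half-full threshold has been reached."""
        if self.outputting and self.fifo:
            return self.fifo.pop(0)
        return None
```

With this model, accumulated input/output rate drift of up to half the FIFO in either direction neither underflows nor overflows, matching the text's half-line tolerance.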
- Referring to
FIG. 22 , the time base corrector (TBC) 72 includes a TBC controller 164 and a FIFO 166. The FIFO 166 receives an input video 714 at nominally 13.5 MHz locked to the horizontal line rate of the input video and outputs a delayed input video as an output video 716 that is locked to the display clock that runs nominally at 13.5 MHz. The initial delay between the input video and the delayed input video is half a horizontal line period of active video, e.g., 53.5 μs of active video per horizontal line/2=26.75 μs for NTSC standard video. - The
TBC controller 164 preferably generates a vertical sync (VSYNC) for display that is delayed by one-half a horizontal line from an input VSYNC. The TBC controller 164 preferably also generates timing signals such as NTSC or PAL standard timing signals. The timing signals are preferably derived from the VSYNC generated by the TBC controller and preferably include horizontal sync. The timing signals are not affected by the input video, and the FIFO is read out synchronously to the timing signals. Data is read out of the FIFO according to the timing at the display side while the data is written into the FIFO according to the input timing. A line reset resets the FIFO write pointer to signal a new line. A read pointer controlled by the display side is updated by the display timing. - As long as the accumulated change in FIFO fullness, in either direction, is less than one-half a video line, the FIFO will generally neither underflow nor overflow during the video field. This ensures correct operation when the display clock frequency is anywhere within a fairly broad range centered on the nominal frequency. Since the process is repeated every field, the FIFO fullness changes do not accumulate beyond one field time.
- Referring to
FIG. 23 , a flow diagram of a process using the TBC 72 is illustrated. The process resets in step 782 at system start up. The system preferably checks for vertical sync (VSYNC) of the input video in step 784. After receiving the input VSYNC, the system in step 786 preferably starts counting the number of incoming video samples. The system preferably loads the FIFO in step 788 continuously with the incoming video samples. While the FIFO is being loaded, the system in step 790 checks if enough samples have been received to fill the FIFO up to a half full state. - When enough samples have been received to fill the FIFO to the half full state, the system in
step 792 preferably generates timing signals including horizontal sync to synchronize the output of the TBC to the display clock. The system in step 794 preferably outputs the content of the FIFO continuously in sync with the display clock. The system in step 796 preferably checks for another input VSYNC. When another input vertical sync is detected, the process starts counting the number of input video samples again and starts outputting output video samples when enough input video samples have been received to make the FIFO half full. - In other embodiments of the present invention, the FIFO size may be smaller or larger. The minimum size acceptable is determined by the maximum expected difference in the video source sample rate and the display sample rate. Larger FIFOs allow for greater variations in sample rate timing, however at greater expense. For any chosen FIFO size, the logic that generates the sync signal that initiates display video fields should incur a delay from the input video timing of one-half the delay of the entire FIFO as described above. However, it is not required that the delay be one-half the delay of the entire FIFO.
- In certain applications of graphics and video display hardware, it may be necessary or desirable to scale the size of a motion video image either upwards or downwards. It may also be desirable to minimize memory usage and memory bandwidth demands. Therefore it is desirable to scale down before writing to memory, and to scale up after reading from memory, rather than the other way around in either case. Conventionally, there is either separate hardware to scale down before writing to memory and to scale up after reading from memory, or else all scaling is done in one location or the other, such as before writing to memory, even if the scaling direction is upwards.
- In the preferred embodiment, a video scaler performs both scaling-up and scaling-down of either digital video or digitized analog video. The video scaler is preferably configured such that it can be used for either scaling down the size of video images prior to writing them to memory or for scaling up the size of video images after reading them from memory. The size of the video images is preferably downscaled prior to being written to memory so that the memory usage and the memory bandwidth demands are minimized. For similar reasons, the size of the video images is preferably upscaled after reading them from memory.
- In the former case, the video scaler is preferably in the signal path between a video input and a write port of a memory controller. In the latter case, the video scaler is preferably in the signal path between a read port of the memory controller and a video compositor. Therefore, the video scaler may be seen to exist in two distinct logical places in the design, while in fact occupying only one physical implementation.
- This function is preferably achieved by arranging a multiplexing function at the input of the scaling engine, with one input to the multiplexer being connected to the video input port and the other connected to the memory read port. The memory write port is arranged with a multiplexer at its input, with one input to the multiplexer connected to the output of the scaling engine and the other connected to the video input port. The display output port is arranged with a multiplexer at its input, with one input connected to the output of the scaling engine and the other input connected to the output of the memory read port.
- In the preferred embodiment, there are different clock domains associated with the video input and the display output functions of the chip. The video scaling engine uses a clock that is selected between the video input clock and the display output clock (display clock). The clock selection uses a glitch-free clock selection logic, i.e. a circuit that prevents the creation of extremely narrow clock pulses when the clock selection is changed. The read and write interfaces to memory both use asynchronous interfaces using FIFOs, so the memory clock domain may be distinct from both the video input clock domain and the display output clock domain.
- Referring to
FIG. 24 , a flow diagram illustrates a process of alternatively upscaling or downscaling the video input 800. The system in step 802 preferably selects between a downscaling operation and an upscaling operation. If the downscaling operation is selected, the system in step 804 preferably downscales the input video prior to capturing the input video in memory in step 806. If the upscaling operation is selected in step 802, the system in step 806 preferably captures the input video in memory without scaling it. - Then the system in
step 808 outputs the downscaled video as downscaled output 810. The system in step 808, however, sends non-scaled video in the upscale path to be upscaled in step 812. The system in step 812 upscales the non-scaled video and outputs it as upscaled video output 814. - The video pipeline preferably supports up to one scaled video window and one passthrough video window, plus one background color, all of which are logically behind the set of graphics windows. The order of these windows, from back to front, is fixed as background, then passthrough, then scaled video. The video windows are preferably always in YUV format, although they can be in either 4:2:2 or 4:2:0 variants of YUV. Alternatively they can be in RGB or other formats.
- When digital video, e.g., MPEG is provided to the graphics display system or when analog video is digitized, the digital video or the digitized analog video is provided to a video compositor using one of three signal paths, depending on processing requirements. The digital video and the digitized analog video are provided to the video compositor as passthrough video over a passthrough path, as upscaled video over an upscale path, and as downscaled video over a downscale path.
- Either the digital video or the analog video may be provided to the video compositor as the passthrough video while the other is provided as an upscaled video or a downscaled video. For example, the digital video may be provided to the video compositor over the passthrough path while, at the same time, the digitized analog video is downscaled and provided to the video compositor over the downscale path as a video window. In one embodiment of the present invention where the scaler engine is shared between the upscale path and the downscale path, the scaler engine may upscale video in either the vertical or horizontal axis while downscaling video in the other axis. However, in this embodiment, an upscale operation and a downscale operation on the same axis are not performed at the same time since only one filter is used to perform both upscaling and downscaling for each axis.
- Referring to
FIG. 25 , a single video scaler 52 preferably performs both the downscaling and upscaling operations. In particular, signals of the downscale path only are illustrated. The video scaler 52 includes a scaler engine 182, a set of line buffers 178, a vertical coefficient memory 180A and a horizontal coefficient memory 180B. The scaler engine 182 is implemented as a set of two polyphase filters, one for each of horizontal and vertical dimensions. - In one embodiment of the present invention, the vertical polyphase filter is a four-tap filter with programmable coefficients from the
vertical coefficient memory 180A. In other embodiments, the number of taps in the vertical polyphase filter may vary. In one embodiment of the present invention, the horizontal polyphase filter is an eight-tap filter with programmable coefficients from the horizontal coefficient memory 180B. In other embodiments, the number of taps in the horizontal polyphase filter may vary. - The vertical and the horizontal coefficient memories may be implemented in SRAM or any other suitable memory. Depending on the operation to be performed, e.g. a vertical or horizontal axis, and scaling-up or scaling-down, appropriate filter coefficients are used, respectively, from the vertical and horizontal coefficient memories. Selection of filter coefficients for scaling-up and scaling-down operations is well known in the art.
- The set of line buffers 178 is used to provide input of video data to the horizontal and vertical polyphase filters. In this embodiment, three line buffers are used, but the number of the line buffers may vary in other embodiments. In this embodiment, each of the three line buffers is used to provide an input to one of the taps of the vertical polyphase filter with four taps. The input video is provided to the fourth tap of the vertical polyphase filter. A shift register having eight cells in series is used to provide inputs to the eight taps of the horizontal polyphase filter, each cell providing an input to one of the eight taps.
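One way to picture the vertical filter's data path is the following sketch, assuming three stored lines plus the incoming line feed the four taps. The function name, data layout, and coefficient values are illustrative assumptions; in the described hardware the coefficients come from the coefficient SRAM.

```python
def vertical_tap_output(line_buffers, current_line, coeffs, x):
    """Compute one vertically filtered output pixel at horizontal
    position x for a 4-tap vertical polyphase filter.
    line_buffers: three previously stored lines (oldest first), each a
    list of pixel values; current_line: the incoming line, which feeds
    the fourth tap directly; coeffs: the 4 coefficients for the current
    phase."""
    taps = [line_buffers[0][x],   # tap 0: oldest buffered line
            line_buffers[1][x],   # tap 1
            line_buffers[2][x],   # tap 2
            current_line[x]]      # tap 3: incoming (unbuffered) line
    return sum(c * t for c, t in zip(coeffs, taps))
```

For instance, with equal coefficients of 0.25 the filter averages the four lines, which is one plausible phase of a scaling-down kernel.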
- In this embodiment, a
digital video signal 820 and a digitized analog video signal 822 are provided to a first multiplexer 168 as first and second inputs. The first multiplexer 168 has two outputs. A first output of the first multiplexer is provided to the video compositor as a passthrough video 186. A second output of the first multiplexer is provided to a first input of a second multiplexer 176 in the downscale path.
second multiplexer 176 provides either the digital video or the digitized analog video at the second multiplexer's first input to the video scaler 52. The video scaler provides a downscaled video signal to a second input of a third multiplexer 162. The third multiplexer provides the downscaled video to a capture FIFO 158 which stores the captured downscaled video. The memory controller 126 takes the captured downscaled video and stores it as a captured downscaled video image into a video FIFO 148. An output of the video FIFO is coupled to a first input of a fourth multiplexer 188. The fourth multiplexer provides the output of the video FIFO, which is the captured downscaled video image, as an output 824 to the graphics compositor, and this completes the downscale path. Thus, in the downscale path, either the digital video or the digitized analog video is downscaled first, and then captured. -
FIG. 26 is similar to FIG. 25 , but in FIG. 26 , signals of the upscale path are illustrated. In the upscale path, the third multiplexer 162 provides either the digital video 820 or the digitized analog video 822 to the capture FIFO 158 which captures and stores the input as a captured video image. This captured video image is provided to the memory controller 126 which takes it and provides it to the video FIFO 148 which stores the captured video image. - An output of the
video FIFO 148 is provided to a second input of the second multiplexer 176. The second multiplexer provides the captured video image to the video scaler 52. The video scaler scales up the captured video image and provides it to a second input of the fourth multiplexer 188 as an upscaled captured video image. The fourth multiplexer provides the upscaled captured video image as the output 824 to the video compositor. Thus, in the upscale path, either the digital video or the digitized analog video is captured first, and then upscaled. - Referring to
FIG. 27 , FIG. 27 is similar to FIG. 25 and FIG. 26 , but in FIG. 27 , signals of both the upscale path and the downscale path are illustrated. - The graphics display system of the present invention is capable of processing an analog video signal, a digital video signal and graphics data simultaneously. In the graphics display system, the analog and digital video signals are processed in the video display pipeline while the graphics data is processed in the graphics display pipeline. After the processing of the video signals and the graphics data has been completed, they are blended together at a video compositor. The video compositor receives video and graphics data from the video display pipeline and the graphics display pipeline, respectively, and outputs to the video encoder ("VEC").
- The system may employ a method of compositing a plurality of graphics images and video, which includes blending the plurality of graphics images into a blended graphics image, combining a plurality of alpha values into a plurality of composite alpha values, and blending the blended graphics image and the video using the plurality of composite alpha values.
- Referring to
FIG. 28 , a flow diagram of a process of blending video and graphics surfaces is illustrated. The graphics display system resets in step 902. In step 904, the video compositor blends the passthrough video and the background color with the scaled video window, using the alpha value which is associated with the scaled video window. The result of this blending operation is then blended with the output of the graphics display pipeline. The graphics output has been pre-blended in the graphics blender in step 904 and filtered in step 906, and the blended graphics contain the correct alpha value for multiplication by the video output. The output of the video blend function is multiplied by the video alpha which is obtained from the graphics pipeline and the resulting video and graphics pixel data streams are added together to produce the final blended result.
- The alpha values {A(i)} are in general different for every layer and for every pixel of every layer. However, in some important applications, it is not practical to apply this formula directly, since some layers may need to be processed in spatial dimensions (e.g. 2 dimensional filtering or scaling) before they can be blended with the layer or layers behind them. While it is generally possible to blend the layers first and then perform the spatial processing, that would result in processing the layers that should not be processed if these layers are behind the subject layer that is to be processed. Processing of the layers that are not to be processed may be undesirable.
- Processing the subject layer first would generally require a substantial amount of local storage of the pixels in the subject layer, which may be prohibitively expensive. This problem is significantly exacerbated when there are multiple layers to be processed in front of one or more layers that are not to be processed. In order to implement the formula above directly, each of the layers would have to be processed first, i.e. using their own local storage and individual processing, before they could be blended with the layer behind.
- In the preferred embodiment, rather than blending all the layers from back to front, all of the layers that are to be processed (e.g. filtered) are blended together first, even if there are one or more layers behind them over which they should be blended, and the combined upper layers are then blended with the other layers that are not to be processed. For example, layers {1, 2 and 3} may be layers that are not to be processed, while layers {4, 5, 6, 7, and 8} may be layers that are to undergo processing, and all 8 layers are to be blended together, using {A(i)} values that are independent for every layer and pixel. The layers that are to be filtered, the upper layers, may be the graphics windows. The lower layers may include the video window and passthrough video.
- In the preferred embodiment, all of the layers that are to be filtered (referred to as “upper” layers) are blended together from back to front using a partial blending operation. In an alternate embodiment, two or more of the upper layers may be blended together in parallel. The back-most of the upper layers is not in general the back-most layer of the entire operation.
- In the preferred embodiment, at each stage of the blending, an intermediate alpha value is maintained for later use for blending with the layers that are not to be filtered (referred to as the “lower” layers).
- The formula that represents the preferred blending scheme is:
-
R(i)=A(i)*P(i)+(1−A(i))*P(i−1) -
and -
AR(i)=AR(i−1)*(1−A(i)) - where R(i) represents the color value of the resulting blended pixel, P(i) represents the color value of the current pixel, A(i) represents the alpha value of the current pixel, P(i−1) represents the value at the location of the current pixel of the composition of all of the upper layers behind the current pixel, initially this represents black before any layers are blended, AR(i) is the alpha value resulting from each instance of this operation, and AR(i−1) represents the intermediate alpha value at the location of the current pixel determined from all of the upper layers behind the current pixel, initially this represents transparency before any layers are blended. AR represents the alpha value that will subsequently be multiplied by the lower layers as indicated below, and so an AR value of 1 (assuming alpha ranges from 0 to 1) indicates that the current pixel is transparent and the lower layers will be fully visible when multiplied by 1.
- In other words, in the preferred embodiment, at each stage of blending the upper layers, the pixels of the current layer are blended using the current alpha value, and an intermediate alpha value is also calculated as the product (1−A(i))*(AR(i−1)). The key differences between this and the direct evaluation of the conventional formula are: (1) the calculation of the product of the set of {(1−A(i))} for the upper layers, and (2) the use of a virtual transparent black layer to initialize the process for blending the upper layers, since the lower layers that would normally be blended with the upper layers are not used at this point in this process.
- The calculation of the product of the set of {(1−A(i))} for the upper layers is implemented, in the preferred embodiment, by repeatedly calculating AR(i)=AR(i−1)*(1−A(i)) at each layer, such that when all layers {i} have been processed, the result is that AR equals the product of all (1−A(i)) values for all upper layers. Alternatively, in other embodiments, the composite alpha value for each pixel of blended graphics may be calculated directly as the product, over all layers, of (1−alpha value of the corresponding pixel of the graphics image on each layer), without generating an intermediate alpha at each stage.
- To complete the blending process of the entire series of layers, including the upper and lower layers, once the upper layers have been blended together as described above, they may be processed as desired and then the result of this processing, a composite intermediate image, is blended with the lower layer or layers. In addition, the resulting alpha values preferably are also processed in essentially the same way as the image components. The lower layers can be blended in the conventional fashion, so at some point there can be a single image representing the lower layers. Therefore two images, one representing the upper layers and one representing the lower layers, can be blended together. In this operation, the AR(n) value at each pixel that results from the blending of the upper layers and any subsequent processing is multiplied with the composite lower layer.
- Mathematically this latter operation is as follows: let L(u) be the composite upper layer resulting from the process described above and after any processing, let AR(u) be the composite alpha value of the upper layers resulting from the process above and after any processing, let L(l) be the composite lower layer that results from blending all lower layers in the conventional fashion and after any processing, and let Result be the final result of blending all the upper and lower layers, after any processing. Then, Result=L(u)+AR(u)*L(l). L(u) does not need to be multiplied by any additional alpha values, since all such multiplication operations were already performed at an earlier stage.
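Under these conventions the final combination reduces to a single multiply-accumulate per pixel component; a minimal Python sketch (the function name and sample values are hypothetical):

```python
def final_blend(l_upper, ar_upper, l_lower):
    """Result = L(u) + AR(u) * L(l).

    l_upper  -- composite upper-layer color, already alpha-weighted
    ar_upper -- composite alpha AR(u), the product of (1 - A(i))
    l_lower  -- composite of the lower layers, blended conventionally
    """
    return l_upper + ar_upper * l_lower

# A fully transparent upper stack (color 0, AR(u) = 1) passes the
# lower composite through unchanged:
passthrough = final_blend(0.0, 1.0, 0.7)
```

No further alpha multiply is applied to `l_upper`, matching the statement above that those multiplications were already performed during the upper-layer pass.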
- In the preferred embodiment, a series of images makes up the upper layers. These are created by reading pixels from memory, as in a conventional graphics display device. Each pixel is converted into a common format if it is not already in that format; in this example the YUV format is used. Each pixel also has an alpha value associated with it. The alpha values can come from a variety of sources, including: (1) being part of the pixel value read from memory; (2) an element in a color look-up table (CLUT), in cases where the pixel format uses a CLUT; (3) being calculated from the pixel color value, e.g., alpha as a function of Y; (4) being calculated using a keying function, i.e., some pixel values are transparent (alpha=0) and others are opaque (alpha=1) based on a comparison of the pixel value with a set of reference values; (5) being associated with an externally described region of the image, e.g., a rectangular region described by the four corners of the rectangle may have a single alpha value associated with it; or (6) some combination of these.
- The upper layers are preferably composited in memory storage buffers called line buffers. Each line buffer preferably is sized to contain pixels of one scan line. Each line buffer has an element for each pixel on a line, and each pixel in the line buffer has elements for the color components, in this case Y, U and V, and one for the intermediate alpha value AR. Before compositing of each line begins, the appropriate line buffer is initialized to represent a transparent black having already been composited into the buffer; that is, the YUV value is set to the value that represents black (i.e. Y=0, U=V=128) and the alpha value AR is set to represent (1−transparent)=(1−0)=1.
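The line-buffer initialization described above might be modeled as follows (a Python sketch; the 720-pixel line width and the dictionary-per-pixel layout are assumptions for illustration, with AR stored as 255 in an 8-bit representation of 1.0):

```python
# Per pixel: Y/U/V color components plus the intermediate alpha AR,
# initialized to "transparent black already composited": Y=0,
# U=V=128, and AR = (1 - transparent) = 1, i.e. 255 in 8-bit form.

LINE_WIDTH = 720  # active pixels per scan line (assumed value)

def init_line_buffer(width=LINE_WIDTH):
    """Return one scan line's worth of initialized pixel entries."""
    return [{"y": 0, "u": 128, "v": 128, "ar": 255} for _ in range(width)]

buf = init_line_buffer()
```

Each compositing pass then reads and writes entries of `buf` in place, one layer at a time, before the line is handed off for filtering or scaling.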
- Each pixel of the current layer on the current line is combined with the value pre-existing in the line buffer using the formulas already described, i.e.,
-
R(i)=A(i)*P(i)+(1−A(i))*P(i−1) -
and -
AR(i)=AR(i−1)*(1−A(i)). - In other words, the color value of the current pixel P(i) is multiplied by its alpha value A(i), and the pixel in the line buffer representing the same location on the line P(i−1) is read from the line buffer, multiplied by (1−A(i)), and added to the previous result, producing the resulting pixel value R(i). Also, the alpha value at the same location in the line buffer (AR(i−1)) is read from the buffer and multiplied by (1−A(i)), producing AR(i). The results R(i) and AR(i) are then written back to the line buffer in the same location.
- When multiplying a YUV value by an alpha value between 0 and 1, the offset nature of the U and V values should preferably be accounted for. In other words, U=V=128 represents a lack of color and it is the value that should result from a YUV color value being multiplied by 0. This can be done in at least two ways. In one embodiment of the present invention, 128 is subtracted from the U and V values before multiplying by alpha, and then 128 is added to the result. In another embodiment, U and V values are directly multiplied by alpha, and it is ensured that at the end of the entire compositing process all of the coefficients multiplied by U and V sum to 1, so that the offset 128 value is not distorted significantly.
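The first method of handling the offset-binary chroma values can be sketched as follows (illustrative Python; the function name is hypothetical):

```python
def scale_chroma(value, alpha):
    """Multiply an offset-binary chroma sample (U or V) by alpha.

    128 represents zero color difference, so the offset is removed
    before scaling and restored afterwards, ensuring that alpha = 0
    yields the colorless value 128 rather than 0.
    """
    return int(round((value - 128) * alpha + 128))
```

For example, scaling any chroma sample by alpha = 0 returns 128, the value representing a lack of color.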
- Each of the layers in the group of upper layers is preferably composited into a line buffer starting with the back-most of the upper layers and progressing towards the front until the front-most of the upper layers has been composited into the line buffer. In this way, a single hardware block, i.e., the display engine, may be used to implement the formula above for all of the upper layers. In this arrangement, the graphics compositor engine preferably operates at a clock frequency that is substantially higher than the pixel display rate. In one embodiment of the present invention, the graphics compositor engine operates at 81 MHz while the pixel display rate is 13.5 MHz.
- This process repeats for all of the lines in the entire image, starting at the top scan line and progressing to the bottom. Once the compositing of each scan line into a line buffer has been completed, the scan line becomes available for use in processing such as filtering or scaling. Such processing may be performed while subsequent scan lines are being composited into other line buffers. Various processing operations may be selected such as anti-flutter filtering and vertical scaling.
- In alternative embodiments more than one graphics layer may be composited simultaneously, and in some such embodiments it is not necessary to use line buffers as part of the compositing process. If all upper layers are composited simultaneously, the combination of all upper layers can be available immediately without the use of intermediate storage.
- Referring to FIG. 29, a flow diagram of a process of blending graphics windows is illustrated. The system preferably resets in step 920. In step 922, the system preferably checks for a vertical sync (VSYNC). If a VSYNC has been received, the system in step 924 preferably loads a line from the bottom-most graphics window into a graphics line buffer. Then the system in step 926 preferably blends a line from the next graphics window into the line buffer. Then the system in step 928 preferably determines if the last graphics window visible on a current display line has been blended. If the last graphics window has not been blended, the system continues the blending in step 926. - If the last window of the current display line has been reached, the system preferably checks in
step 930 to determine if the last graphics line of a current display field has been blended. If the last graphics line has been blended, the system awaits another VSYNC in step 922. If the last graphics line has not been blended, the system goes to the next display line in step 932 and repeats the blending process. - Referring to
FIG. 30, a flow diagram illustrates a process of receiving blended graphics 950, a video window 952 and a passthrough video 954 and blending them. A background color preferably is also blended in one embodiment of the present invention. As step 956 indicates, the video compositor preferably displays each pixel as it is composited, without saving pixels to a frame buffer or other memory. - When the video signals and graphics data are blended in the video compositor, the system in
step 958 preferably displays the passthrough video 954 outside the active window area first. There are 525 scan lines in each frame and 858 pixels in each scan line of NTSC standard television signals when a sample rate of 13.5 MHz is used, per ITU-R BT.601. An active window area of the NTSC standard television is inside an NTSC frame. There are 625 scan lines per frame and 864 pixels in each scan line of PAL standard television when using the ITU-R BT.601 standard sample rate of 13.5 MHz. An active window area of the PAL standard television is inside a PAL frame. - Within the active window area, the system in
step 960 preferably blends the background color first. On top of the background color, the system in step 962 preferably blends the portion of the passthrough video that falls within the active window area. On top of the passthrough window, the system in step 964 preferably blends the video window. Finally, the system in step 968 blends the graphics window on top of the composited video window and outputs composited video 970 for display. - Interlaced displays, such as televisions, have an inherent tendency to display an apparent vertical motion at the horizontal edges of displayed objects, with horizontal lines, and at other points on the display where there is a sharp contrast gradient along the vertical axis. This apparent vertical motion is variously referred to as flutter, flicker, or judder.
- While some image elements can be designed specifically for display on interlaced TVs or filtered before they are displayed, when multiple such image objects are combined onto one screen, there are still visible flutter artifacts at the horizontal top and bottom edges of these objects. While it is also possible to include filters in hardware to minimize visible flutter of the display, such filters are costly in that they require higher memory bandwidth from the display memory, since both even and odd fields should preferably be read from memory for every display field, and they tend to require additional logic and memory on-chip.
- One embodiment of the present invention includes a method of reducing interlace flutter via automatic blending. This method has been designed for use in graphics display devices that composite visible objects directly onto the screen; for example, the device may use windows, window descriptors and window descriptor lists, or similar mechanisms. The top and bottom edges (first and last scan lines) of each object (or window) are displayed such that the alpha blend value (alpha blend factor) of these edges is adjusted to be one-half of what it would be if these same lines were not the top and bottom lines of the window.
- For example, a window may constitute a rectangular shape, and the window may be opaque, i.e., its alpha blend factor is 1 on a scale of 0 to 1. All lines in this window except the first and last are opaque when the window is rendered. The top and bottom lines are adjusted so that, in this case, the alpha blend value becomes 0.5, thereby causing these lines to be mixed 50% with the images that are behind them. This function occurs automatically in the preferred implementation. Since in the preferred implementation windows are rectangular objects that are rendered directly onto the screen, the locations of the top and bottom lines of every window are already known.
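The automatic edge adjustment can be sketched as follows (a hypothetical Python model of the per-line alpha selection; the window control logic is assumed to supply the line index within the window and the window height):

```python
def window_line_alpha(alpha, line_index, window_height):
    """Halve the alpha blend factor on a window's first and last lines.

    A fully opaque window (alpha = 1.0) thus shows its top and bottom
    lines mixed 50% with whatever is behind it, softening the sharp
    vertical contrast that causes interlace flutter.
    """
    if line_index == 0 or line_index == window_height - 1:
        return alpha / 2.0
    return alpha
```

For the solid white-on-black example given below, this rule yields gray edge lines whose apparent vertical position is the same in both interlaced fields.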
- In one embodiment, the function of dividing the alpha blend values for the top and bottom lines by two is implemented only for the top fields of the interlaced display. In another embodiment, the function of dividing the alpha blend values for the top and bottom lines by two is implemented only for the bottom fields of the interlaced display.
- In the preferred embodiment, there exists also the ability to alpha blend each window with the windows behind it, and this alpha value can be adjusted for every pixel, and therefore for every scan line. These characteristics of the application design are used advantageously, as the flutter reduction effect is implemented by controlling the alpha blend function using information that is readily available from the window control logic.
- In a specific illustrative example, the window is solid opaque white, and the image behind it is solid opaque black. In the absence of the disclosed method, at the top and bottom edges of the window there would be a sharp contrast between black and white, and when displayed on an interlaced TV, significant flutter would be visible. Using the disclosed method, the top and bottom lines are blended 50% with the background, resulting in a color that is halfway between black and white, or gray. When displayed on an interlaced TV, the apparent visual location of the top and bottom edges of the object is constant, and flutter is not apparent. The same effect applies equally well for other image examples.
- The method of reducing interlace flutter of this embodiment does not require any increase in memory bandwidth, as the alternate field (the one not currently being displayed) is not read from memory, and there is no need for vertical filtering, which would have required logic and on-chip memory.
- The same function can alternatively be implemented in different graphics hardware designs. For example in designs using a frame buffer (conventional design), graphic objects can be composited into the frame buffer with an alpha blend value that is adjusted to one-half of its normal value at the top and bottom edges of each object. Such blending can be performed in software or in a blitter that has a blending capability.
- In the preferred embodiment, the vertical filtering and anti-flutter filtering are performed on blended graphics by one graphics filter. One function of the graphics filter is low pass filtering in the vertical dimension. The low pass filtering may be performed in order to minimize the “flutter” effect inherent in interlaced displays such as televisions. The vertical downscaling or upscaling operation may be performed in order to change the pixel aspect ratio from the square pixels that are normal for computer, Internet and World Wide Web content into any of the various oblong aspect ratios that are standard for televisions, as specified in ITU-R 601B. In order to be able to perform vertical scaling of the upper layers, the system preferably includes seven line buffers. This allows four line buffers to be used for filtering and scaling, two to be available for advancing by one or two lines at the end of every line, and one to be used for the current compositing operation.
- When scaling or filtering are performed, the alpha values in the line buffers are filtered or scaled in the same way as the YUV values, ensuring that the resulting alpha values correctly represent the desired alpha values at the proper location. Either or both of these operations, or neither, or other processing, may be performed on the contents of the line buffers.
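Treating the alpha channel exactly like the color channels during filtering might look like the following sketch (illustrative Python; the 4-tap coefficients shown are placeholders rather than the actual filter taps, and only the Y and AR components are shown):

```python
# One set of 4-tap vertical filter coefficients applied across four
# line buffers. Applying the same taps to AR as to the color keeps the
# filtered alpha aligned with the filtered color.
TAPS = [0.125, 0.375, 0.375, 0.125]  # placeholder taps; sum to 1.0

def filter_column(lines, x):
    """lines: four line buffers, each a list of (y, ar) pairs;
    x: pixel position on the line. Returns filtered (y, ar)."""
    y = sum(t * lines[i][x][0] for i, t in enumerate(TAPS))
    ar = sum(t * lines[i][x][1] for i, t in enumerate(TAPS))
    return y, ar
```

With taps summing to 1, four identical input lines pass through unchanged, which is the expected behavior for flat image regions.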
- Once the optional processing of the contents of the line buffers has been completed, the result is the completed set of upper layers with the associated alpha value (the product of (1−A(i))). These results are used directly for compositing the upper layers with the lower layers, using the formula Result=L(u)+AR(u)*L(l), as explained in detail in reference to blending of graphics and video. If the lower layers require any processing independent of processing required for the upper layers or for the resulting image, the lower layers are processed before being combined with the upper layers; however, in one embodiment of the present invention, no such processing is required.
- Each of the operations described above is preferably implemented digitally using conventional ASIC technology. As part of the normal ASIC technology the logical operations are segmented into pipeline stages, which may require temporary storage of logic values from one clock cycle to the next. The choice of how many pipeline stages are used in each of the operations described above is dependent on the specific ASIC technology used, the clock speed chosen, the design tools used, and the preference of the designer, and may vary without loss of generality. In the preferred embodiment the line buffers are implemented as dual port memories allowing one read and one write cycle to occur simultaneously, facilitating the read and write operations described above while maintaining a clock frequency of 81 MHz. In this embodiment the compositing function is divided into multiple pipeline stages, and therefore the address being read from the memory is different from the address being written to the same memory during the same clock cycle.
- Each of the arithmetic operations described above in the preferred embodiment uses 8-bit accuracy for each operand; this is generally sufficient for providing an accurate final result. Products are rounded to 8 bits before the result is used in subsequent additions. - Referring to
FIG. 31, a block diagram illustrates an interaction between the line buffers 504 and a graphics filter 172. The line buffers comprise a set of line buffers 1-7 506a-g. The line buffers are controlled by a graphics line buffer controller over a line buffer control interface 502. In one embodiment of the present invention, the graphics filter is a four-tap polyphase filter, so that four lines of graphics data 516a-d are provided to the graphics filter at a time. The graphics filter 172 sends a line buffer release signal 516e to the line buffers to notify them that one to three line buffers are available for compositing additional graphics display lines. - In another embodiment, line buffers are not used, but rather all of the upper layers are composited concurrently. In this case, there is one graphics blender for each of the upper layers active at any one pixel, and the clock rate of the graphics blender may be approximately equal to the pixel display rate. The clock rate of the graphics blenders may be somewhat slower or faster if FIFO buffers are used at the output of the graphics blenders.
- The mathematical formulas implemented are the same as in the first embodiment described. The major difference is that instead of performing the compositing function iteratively by reading and writing a line buffer, all layers are composited concurrently and the result of the series of compositor blocks is immediately available for processing, if required, and for blending with the lower layers, and line buffers are not used for purposes of compositing.
- Line buffers may still be needed in order to implement vertical filtering or vertical scaling, as those operations typically require more than one line of the group of upper layers to be available simultaneously, although fewer line buffers are generally required here than in the preferred embodiment. Using multiple graphics blenders operating at approximately the pixel rate simplifies the implementation in applications where the pixel rate is relatively fast for the ASIC technology used, for example in HDTV video and graphics systems where the pixel rate is 74.25 MHz.
- Recently, improvements to memory fabrication technologies have resulted in denser memory chips. However memory chip bandwidth has not been increasing as rapidly. The bandwidth of a memory chip is a measure of how fast contents of the memory chip can be accessed for reading or writing. As a result of increased memory density without necessarily a commensurate increase in bandwidth, in many conventional system designs multiple memory devices are used for different functions, and memory space in some memory modules may go unused or is wasted. In the preferred embodiment, a unified memory architecture is used. In the unified memory architecture, all the tasks (also referred to as “clients”), including CPU, display engine and IO devices, share the same memory.
- The unified memory architecture preferably includes a memory that is shared by a plurality of devices, and a memory request arbiter coupled to the memory, wherein the memory request arbiter performs real time scheduling of memory requests from different devices having different priorities. The unified memory system assures real time scheduling of tasks, some of which do not inherently have pre-determined periodic behavior, and provides access to memory by requesters that are sensitive to latency and do not have determinable periodic behavior.
- In an alternate embodiment, two memory controllers are used in a dual memory controller system. The memory controllers may be 16-bit memory controllers or 32-bit memory controllers. Each memory controller can support different configurations of SDRAM device types and banks, or other forms of memory besides SDRAM. A first memory space addressed by a first memory controller is preferably adjacent and contiguous to a second memory space addressed by a second memory controller, so that software applications view the first and second memory spaces as one continuous memory space. The first and the second memory controllers may be accessed concurrently by different clients. The software applications may be optimized to improve performance.
- For example, a graphics memory may be allocated through the first memory controller while a CPU memory is allocated through the second memory controller. While a display engine is accessing the first memory controller, a CPU may access the second memory controller at the same time. Therefore, a memory access latency of the CPU is not adversely affected in this instance by memory being accessed by the display engine and vice versa. In this example, the CPU may also access the first memory controller at approximately the same time that the display engine is accessing the first memory controller, and the display controller can access memory from the second memory controller, thereby allowing sharing of memory across different functions, and avoiding many copy operations that may otherwise be required in conventional designs.
- Referring to
FIG. 32, in a dual memory controller system, memory requests generated by a display engine 1118, a CPU 1120, a graphics accelerator 1124 and an input/output module 1126 are provided to a memory select block 1100. The memory select block 1100 preferably routes the memory requests to a first arbiter 1102 or to a second arbiter 1106 based on the address of the requested memory. The first arbiter 1102 sends memory requests to a first memory controller 1104 while the second arbiter 1106 sends memory requests to a second memory controller 1108. The design of arbiters for handling requests from tasks with different priorities is well known in the art. - The first memory controller preferably sends address and control signals to a first external SDRAM and receives first data from the first external SDRAM. The second memory controller preferably sends address and control signals to a second external SDRAM and receives second data from the second external SDRAM. The first and second memory controllers preferably provide the first and second data received, respectively, from the first and second external SDRAMs to a device that requested the received data.
- The first and second data from the first and second memory controllers are preferably multiplexed, respectively, by a
first multiplexer 1110 at an input of the display engine, by a second multiplexer 1112 at an input of the CPU, by a third multiplexer 1114 at an input of the graphics accelerator and by a fourth multiplexer 1116 at an input of the I/O module. The multiplexers provide either the first or the second data, as selected by memory select signals provided by the memory select block, to a corresponding device that has requested memory.
- When using a unified memory, memory latencies caused by competing memory requests by different tasks should preferably be addressed. In the preferred embodiment, a real-time scheduling and arbitration scheme for unified memory is implemented, such that all tasks that use the unified memory meet their real-time requirements. With this innovative use of the unified memory architecture and real-time scheduling, a single unified memory is provided to the CPU and other devices of the graphics display system without compromising quality of graphics or other operations and while simultaneously minimizing the latency experienced by the CPU.
- The methodology used preferably implements real-time scheduling using Rate Monotonic Scheduling (“RMS”). It is a mathematical approach that allows the construction of provably correct schedules of arbitrary numbers of real-time tasks with arbitrary periods for each of the tasks. This methodology provides a straightforward means of proof by simulation of the worst case scenario, and this simulation is simple enough that it can be done by hand. RMS, as normally applied, makes a number of simplifying assumptions in the creation of a priority list.
- In the normal RMS assumptions, all tasks are assumed to have constant periods, such that a request for service is made by the task with the stated period, and all tasks have a latency tolerance that equals that task's period. Latency tolerance is defined as the maximum amount of time that can pass from the moment the task requests service until that task's request has been completely satisfied. During implementation of one embodiment of the present invention, the above assumptions have been modified, as described below.
- In the RMS method, all tasks are generally listed along with their periods. They are then ordered by period, from the shortest to the longest, and priorities are assigned in that order. Multiple tasks with identical periods can be in any relative order. In other words, the relative order amongst them can be decided by, for example, flipping a coin.
- Proof of correctness, i.e. the guarantee that all tasks meet their deadlines, is constructed by analyzing the behavior of the system when all tasks request service at exactly the same time; this time is called the “critical instant”. This is the worst case scenario, which may not occur in even a very large set of simulations of normal operation, or perhaps it may never occur in normal operation, however it is presumed to be possible. As each task is serviced, it uses the shared resource, memory clock cycles in the present invention, in the degree stated by that task. If all tasks meet their deadlines, the system is guaranteed to meet all tasks' deadlines under all conditions, since the critical instant analysis simulates the worst case.
- When the lowest priority real-time task meets its deadline, without any higher priority tasks missing their deadlines, then all tasks are proven to meet their deadlines. As soon as any task in this simulation fails to meet its deadline, the test has failed and the task set cannot be guaranteed, and therefore the design should preferably be changed in order to guarantee proper operation under worst case conditions.
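The critical-instant test can be modeled with the standard response-time iteration (an illustrative Python sketch, assuming deadlines equal periods and costs given in memory clock cycles; the task sets in the assertions are invented):

```python
import math

def critical_instant_check(tasks):
    """Response-time test at the critical instant (all requests at t=0).

    tasks: list of (period, cost) tuples sorted by increasing period,
    i.e. in RMS priority order. For each task, iterate the response
    time r = cost + interference from all higher-priority tasks until
    it converges; the set is schedulable only if every task's response
    time fits within its period.
    """
    for i, (period_i, cost_i) in enumerate(tasks):
        r = cost_i
        while True:
            interference = sum(
                math.ceil(r / period_j) * cost_j
                for period_j, cost_j in tasks[:i]
            )
            r_next = cost_i + interference
            if r_next == r:
                break  # converged
            r = r_next
            if r > period_i:
                return False  # deadline miss for this task
        if r > period_i:
            return False
    return True
```

As the text states, once the lowest priority task passes this worst-case check, all tasks are guaranteed to meet their deadlines; a single failure means the design should be changed.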
- In the RMS methodology, real-time tasks are assumed to have periodic requests, and the period and the latency tolerance are assumed to have the same value. Since the requests may not be in fact periodic, it is clearer to speak in terms of “minimum interval” rather than period. That is, any task is assumed to be guaranteed not to make two consecutive requests with an interval between them that is any shorter than the minimum interval.
- The deadline, or the latency tolerance, is the maximum amount of time that may pass between the moment a task makes a request for service and the time that the service is completed, without impairing the function of the task. For example, in a data path with a constant rate source (or sink), a FIFO, and memory access from the FIFO, the request may occur as soon as there is enough data in the FIFO that if service is granted immediately the FIFO does not underflow (or overflow in case of a read operation supporting a data sink). If service is not completed before the FIFO overflows (or underflows in the case of a data sink) the task is impaired.
- In the RMS methodology, those tasks that do not have specified real-time constraints are preferably grouped together and served with a single master task called the “sporadic server”, which itself has the lowest priority in the system. Arbitration within the set of tasks served by the sporadic server is not addressed by the RMS methodology, since it is not a real-time matter. Thus, all non-real-time tasks are served whenever there is resource available, however the latency of serving any one of them is not guaranteed.
- To implement real-time scheduling based on the RMS methodology, first, all of the tasks or clients that need to access memory are preferably listed, not necessarily in any particular order. Next, the period of each of the tasks is preferably determined. For those with specific bandwidth requirements (in bytes per second of memory access), the period is preferably calculated from the bandwidth and the burst size. If the deadline is different from the period for any given task, that is listed as well. The resource requirement when a task is serviced is listed along with the task. In this case, the resource requirement is the number of memory clock cycles required to service the memory access request. The tasks are sorted in order of increasing period, and the result is the set of priorities, from highest to lowest. If there are multiple tasks with the same period, they can be given different, adjacent priorities in any random relative order within the group; or they can be grouped together and served with a single priority, with round-robin arbitration between those tasks at the same priority.
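Deriving a priority order from bandwidth and burst size, as described above, might be sketched as follows (illustrative Python; the client names and figures are invented for the example):

```python
def rms_priorities(clients):
    """Assign RMS priorities from bandwidth and burst size.

    clients: dict mapping name -> (bandwidth_bytes_per_s, burst_bytes).
    The request period is burst / bandwidth; sorting by increasing
    period gives priorities from highest (0) to lowest.
    """
    periods = {
        name: burst / bandwidth
        for name, (bandwidth, burst) in clients.items()
    }
    ordered = sorted(periods, key=periods.get)
    return {name: prio for prio, name in enumerate(ordered)}

prios = rms_priorities({
    "display": (27_000_000, 128),  # ~4.7 us between bursts
    "audio":   (192_000, 32),      # ~167 us between bursts
    "io":      (1_000_000, 64),    # 64 us between bursts
})
```

Clients with identical periods could instead share one priority with round-robin arbitration among them, as noted above.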
- In practice, the tasks sharing the unified memory do not all have true periodic behavior. In one embodiment of the present invention, a block out timer, associated with a task that does not normally have a period, is used in order to force a bounded minimum interval, similar to a period, on that task. For example, a block out timer associated with the CPU has been implemented in this embodiment. If left uncontrolled, the CPU can occupy all available memory cycles, for example by causing a never-ending stream of cache misses and memory requests. At the same time, CPU performance is determined largely by the average latency of memory access, and so the CPU performance would be less than optimal if all CPU memory accesses were consigned to a sporadic server, i.e., at the lowest priority.
- In this embodiment, the CPU task has been converted into two logical tasks. A first CPU task has a very high priority for low latency, and it also has a block out timer associated with it such that once a request by the CPU is made, it cannot submit a request again until the block out timer has timed out. In this embodiment, the CPU task has the top priority. In other embodiments, the CPU task may have a very high priority but not the top priority. The timer period has been made programmable for system tuning, in order to accommodate different system configurations with different memory widths or other options.
- In one embodiment of the present invention, the block out timer is started when the CPU makes a high priority request. In another embodiment, the block out timer is started when the high priority request by the CPU is serviced. In other embodiments, the block out timer may be started at any time in the interval between the time the high priority request is made and the time the high priority request is serviced.
- A second CPU task is preferably serviced by a sporadic server in a round-robin manner. Therefore, if the CPU makes a long string of memory requests, the first one is served as a high priority task, and subsequent requests are served by the low priority sporadic server whenever none of the real-time tasks have requests pending, until the CPU block out timer times out. In one embodiment of the present invention, the graphics accelerator and the display engine are also capable of requesting more memory cycles than are available, and so they too use similar block out timers.
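The two-level request path described above can be sketched as a simple gate: a request goes out at high priority only when the block out timer is idle, and otherwise falls through to the low priority sporadic server. This is an illustrative model, not the hardware implementation, and it adopts one of the variants described above (starting the timer when the high priority request is made).

```python
class BlockOutGate:
    def __init__(self, blockout_us):
        self.blockout_us = blockout_us   # programmable duration
        self.expires_at = 0.0            # timer is idle when now >= expires_at

    def classify(self, now_us):
        """Issue at high priority and restart the timer if it is idle;
        otherwise the request falls through to the sporadic server."""
        if now_us >= self.expires_at:
            self.expires_at = now_us + self.blockout_us
            return "high"
        return "low"

gate = BlockOutGate(blockout_us=3.0)     # e.g. programmed to 3 us
print([gate.classify(t) for t in (0.0, 1.0, 2.0, 3.0, 4.5, 6.0)])
# ['high', 'low', 'low', 'high', 'low', 'high']
```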
- For example, the CPU read and write functions are grouped together and treated as two tasks. A first task has a theoretical latency bound of 0 and a period that is programmable via a block out timer, as described above. A second task is considered to have no period and no deadline, and it is grouped into the set of tasks served by the sporadic server via a round robin at the lowest priority. The CPU uses a programmable block out timer between high priority requests in this embodiment.
- For another example, a graphics display task is considered to have a constant bandwidth of 27 MB/s, i.e., 16 bits per pixel at 13.5 MHz. However, the graphics bandwidth in one embodiment of the present invention can vary widely from much less than 27 MB/s to a much greater figure, but 27 MB/s is a reasonable figure for assuring support of a range of applications. For example, in one embodiment of the present invention, the graphics display task utilizes a block out timer that enforces a period of 2.37 μs between high priority requests, while additional requests are serviced on a best-effort basis by the sporadic server in a low priority round robin manner.
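The figures in this example can be checked directly: 16 bits per pixel at 13.5 MHz is 27 MB/s, and the 2.37 μs block out period corresponds to a 64-byte burst at that rate. The burst size is inferred here for illustration; the text does not state it.

```python
bits_per_pixel = 16
pixel_rate_hz = 13.5e6
bandwidth = (bits_per_pixel // 8) * pixel_rate_hz   # bytes per second
burst_bytes = 64                                    # assumed burst size
period_us = burst_bytes / bandwidth * 1e6           # burst / bandwidth
print(int(bandwidth), round(period_us, 2))          # 27000000 2.37
```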
- Referring to
FIG. 33, a block diagram illustrates an implementation of real-time scheduling using the RMS methodology. A CPU service request 1138 is preferably coupled to an input of a block out timer 1130 and a sporadic server 1136. An output of the block out timer 1130 is preferably coupled to an arbiter 1132 as a high priority service request. Tasks 1-5 1134 a-e may also be coupled to the arbiter as inputs. An output of the arbiter is a request for service of a task that has the highest priority among all tasks that have a pending memory request. - In
FIG. 33, only the CPU service request 1138 is coupled to a block out timer. In other embodiments, service requests from other tasks may be coupled to their respective block out timers. The block out timers are used to enforce a minimum interval between two successive accesses by any high priority task that is non-periodic but may require expedited servicing. Two or more such high priority tasks may be coupled to their respective block out timers in one embodiment of the present invention. Devices that are coupled to their respective block out timers as high priority tasks may include a graphics accelerator, a display engine, and other devices. - In addition to the
CPU request 1138, low priority tasks 1140 a-d may be coupled to the sporadic server 1136. In the sporadic server, these low priority tasks are handled in a round robin manner. The sporadic server sends a memory request 1142 to the arbiter for the next low priority task to be serviced. - Referring to
FIG. 34, a timing diagram illustrates CPU service requests and services in the case of a continuous CPU request 1146. In practice, the CPU request is generally not continuous, but FIG. 34 has been provided for illustrative purposes. In the example represented in FIG. 34, a block out timer 1148 is started upon a high priority service request 1149 by the CPU. At time t0, the CPU starts making the continuous service request 1146, and a high priority service request 1149 is first made provided that the block out timer 1148 is not running at time t0. When the high priority service request is made, the block out timer 1148 is started. Between time t0 and time t1, the memory controller finishes servicing a memory request from another task. The CPU is first serviced at time t1. In the preferred embodiment, the duration of the block out timer is programmable. For example, the duration of the block out timer may be programmed to be 3 μs. - Any additional high
priority CPU request 1149 is blocked out until the block out timer times out at time t2. Instead, the CPU low priority request 1150 is handled by a sporadic server in a round robin manner between time t0 and time t2. The low priority request 1150 is active as long as the CPU service request is active. Since the CPU service request 1146 is continuous, another high priority service request 1149 is made by the CPU and the block out timer is started again as soon as the block out timer times out at time t2. The high priority service request made by the CPU at time t2 is serviced at time t3 when the memory controller finishes servicing another task. Until the block out timer times out at time t4, the CPU low priority request 1150 is handled by the sporadic server while the CPU high priority request 1149 is blocked out. - Another high priority service request is made and the block out
timer 1148 is started again when the block out timer 1148 times out at time t4. At time t5, the high priority service request 1149 made by the CPU at time t4 is serviced. The block out timer does not time out until time t7. However, the block out timer is not in the path of the CPU low priority service request and, therefore, does not block out the CPU low priority service request. Thus, while the block out timer is still running, a low priority service request made by the CPU is handled by the sporadic server, and serviced at time t6. - When the block out
timer 1148 times out at time t7, it is started again and yet another high priority service request is made by the CPU, since the CPU service request is continuous. The high priority service request 1149 made by the CPU at time t7 is serviced at time t8. When the block out timer times out at time t9, the high priority service request is once again made by the CPU and the block out timer is started again. - The schedule that results from the task set and priorities above is verified by simulating the system performance starting from the “critical instant”, when all tasks request service at the same time and a previously started low priority task is already underway. The system is proven to meet all the real-time deadlines if all of the tasks with real-time deadlines meet their deadlines. Of course, in order to perform this simulation accurately, all tasks make new requests at every repetition of their periods, whether or not previous requests have been satisfied.
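A minimal version of this critical-instant check can be sketched as follows, assuming non-preemptive service (a transfer in progress runs to completion), deadlines equal to periods, and tasks that re-request at every period as described above. The task figures are illustrative, not from the text.

```python
def meets_deadlines(tasks, blocker_cost, horizon):
    """tasks: (period, cost) pairs, highest priority first (shortest period).
    Simulates from the critical instant: all tasks released at t = 0 while a
    low priority transfer of length blocker_cost is already underway."""
    next_release = [0.0] * len(tasks)
    pending = []                      # (priority index, release time)
    t = blocker_cost                  # the blocker finishes first
    while t < horizon:
        for i, (period, _) in enumerate(tasks):
            while next_release[i] <= t:
                pending.append((i, next_release[i]))
                next_release[i] += period
        if not pending:
            t = min(next_release)     # idle until the next release
            continue
        pending.sort()                # highest priority (lowest index) first
        i, released = pending.pop(0)
        period, cost = tasks[i]
        if t + cost > released + period:   # deadline = next release
            return False
        t += cost
    return True

tasks = [(4.0, 1.0), (6.0, 1.5), (12.0, 2.0)]   # (period, cost) in us
print(meets_deadlines(tasks, blocker_cost=1.0, horizon=24.0))   # True
```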
- Referring to
FIG. 35, a timing diagram illustrates an example of a critical instant analysis. At time t0, a task 1 1156, a task 2 1158, a task 3 1160 and a task 4 1162 request service at the same time. Further, at time t0, a low priority task 1154 is being serviced. Therefore, the highest priority task, the task 1, cannot be serviced until servicing of the low priority task has been completed. - When the low priority task is completed at time t1, the
task 1 is serviced. Upon completion of the task 1 at time t2, the task 2 is serviced. Upon completion of the task 2 at time t3, the task 3 is serviced. Upon completion of the task 3 at time t4, the task 4 is serviced. The task 4 completes at time t5, which is before the start of a next set of tasks: the task 1 at t6, the task 2 at t7, the task 3 at t8, and the task 4 at t9. - For example, referring to
FIG. 36, a flow diagram illustrates a process of servicing memory requests with different priorities, from the highest to the lowest. The system in step 1170 makes a CPU read request with the highest priority. Since a block out timer is used with the CPU read request in this example, the block out timer is started upon making the highest priority CPU read request. Then the system in step 1172 makes a graphics read request. A block out timer is also used with the graphics read request, and the block out timer is started upon making the graphics read request. - A video window read request in
step 1174 and a video capture write request in step 1176 have equal priorities. Therefore, the video window read request and the video capture write request are placed in a round robin arbitration for two tasks (clients). The system in step 1178 and step 1180 services a refresh request and an audio read request, respectively. - While respective block out timers for the CPU read request and the graphics read request are active, the system places the CPU read request and the graphics read request in a round robin arbitration for five tasks (clients), respectively, in
step 1182 and step 1186. The system in steps - Displaying of graphics generally requires a large amount of processing. If all processing of graphics is performed by a CPU, the processing requirements may unduly burden the CPU since the CPU generally also performs many other tasks. Therefore, many systems that perform graphics processing use a dedicated processor, which is typically referred to as a graphics accelerator.
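The priority ladder of FIG. 36 can be sketched as a fixed-priority arbiter with a round robin shared by equal-priority clients. This is an illustrative model: the client names mirror the flow above, and the lowest-priority round robin that services blocked-out CPU and graphics requests is omitted for brevity.

```python
from collections import deque

class Arbiter:
    """Fixed priority levels, highest first; clients sharing a level (here
    the video window read and the video capture write) are served in a
    round robin."""
    def __init__(self, levels):
        self.levels = [deque(names) for names in levels]

    def grant(self, pending):
        for level in self.levels:
            for _ in range(len(level)):
                client = level[0]
                level.rotate(-1)          # advance the round robin
                if client in pending:
                    return client
        return None                       # no request pending

arb = Arbiter([
    ["cpu_read"],                                   # highest priority
    ["graphics_read"],
    ["video_window_read", "video_capture_write"],   # equal priority
    ["refresh"],
    ["audio_read"],
])
print(arb.grant({"video_window_read", "video_capture_write", "audio_read"}))
# video_window_read
```

A second call with both video clients still pending would grant the capture write, since the round robin within that level has advanced.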
- The system according to the present invention may employ a graphics accelerator that includes memory for graphics data, the graphics data including pixels, and a coprocessor for performing vector type operations on a plurality of components of one pixel of the graphics data.
- The preferred embodiment of the graphics display system uses a graphics accelerator that is optimized for performing real-time 3D and 2D effects on graphics and video surfaces. The graphics accelerator preferably incorporates specialized graphics vector arithmetic functions for maximum performance with video and real-time graphics. The graphics accelerator performs a range of essential graphics and video operations with performance comparable to hardwired approaches, yet it is programmable so that it can meet new and evolving application requirements with firmware downloads in the field. The graphics accelerator is preferably capable of 3D effects such as real-time video warping and flipping, texture mapping, and Gouraud and Phong polygon shading, as well as 2D and image effects such as blending, scaling, blitting and filling. The graphics accelerator and its caches are preferably completely contained in an integrated circuit chip.
- The graphics accelerator of the present invention is preferably based on a conventional RISC-type microprocessor architecture. The graphics accelerator preferably also includes additional features and some special instructions in the instruction set. In the preferred embodiment, the graphics accelerator is based on a MIPS R3000 class processor. In other embodiments, the graphics accelerator may be based on almost any other type of processor.
- Referring to
FIG. 37, a graphics accelerator 64 receives commands from a CPU 22 and receives graphics data from main memory 28 through a memory controller 54. The graphics accelerator preferably includes a coprocessor (vector coprocessor) 1300 that performs vector type operations on pixels. In vector type operations, the R, G, and B components, or the Y, U and V components, of a pixel are processed in parallel as the three elements of a “vector”. In alternate embodiments, the graphics accelerator may not include the vector coprocessor, and the vector coprocessor may be coupled to the graphics accelerator instead. The vector coprocessor 1300 obtains pixels (3-tuple vectors) via a specialized LOAD instruction. - The LOAD instruction preferably extracts bits from a 32-bit word in memory that contains the required bits. The LOAD instruction also preferably packages and converts the bits into the input vector format of the coprocessor. The
vector coprocessor 1300 writes pixels (3-tuple vectors) to memory via a specialized STORE instruction. The STORE instruction preferably extracts the required bits from the accumulator (output) register of the coprocessor, converts them if required, and packs them into a 32-bit word in memory in a format suitable for other uses within the IC, as explained below. - Formats of the 32-bit word in memory preferably include an RGB16 format and a YUV format. When the pixels are formatted in RGB16 format, R has 5 bits, G has 6 bits, and B has 5 bits. Thus, there are 16 bits in each RGB16 pixel and there are two RGB16 half-words in every 32-bit word in memory. The two RGB16 half-words are selected, respectively, via VectorLoadRGB16Left instruction and VectorLoadRGB16Right instruction. The 5 or 6 bit elements are expanded through zero expansion into 8 bit components when loaded into the
coprocessor input register 1308. - The YUV format preferably includes YUV 4:2:2 format, which has four bytes representing two pixels packed into every 32-bit word in memory. The U and V elements preferably are shared between the two pixels. A typical packing format used to load two pixels having YUV 4:2:2 format into a 32-bit memory is YUYV, where each of first and second Y's, U and V has eight bits. The left pixel is preferably comprised of the first Y plus the U and V, and the right pixel is preferably comprised of the second Y plus the U and V. Special LOAD instructions, LoadYUVLeft and LoadYUVRight, are preferably used to extract the YUV values for the left pixel and the right pixel, respectively, and put them in the
coprocessor input register 1308. - Special STORE instructions, StoreVectorAccumulatorRGB16, StoreVectorAccumulatorRGB24, StoreVectorAccumulatorYUVLeft, and StoreVectorAccumulatorYUVRight, preferably convert the contents of the accumulator, otherwise referred to as the output register of the coprocessor, into a chosen format for storage in memory. In the case of StoreVectorAccumulatorRGB16, the three components (R, G, and B) in the accumulator typically have 8, 10 or more significant bits each; these are rounded or dithered to create R, G, and B values with 5, 6, and 5 bits respectively, and packed into a 16 bit value. This 16 bit value is stored in memory, selecting the appropriate 16 bit half word in memory via the store address.
- In the case of StoreVectorAccumulatorRGB24, the R, G, and B components in the accumulator are rounded or dithered to create 8 bit values for each of the R, G, and B components, and these are packed into a 24 bit value. The 24 bit RGB value is written into memory at the memory address indicated via the store address. In the cases of StoreVectorAccumulatorYUVLeft and StoreVectorAccumulatorYUVRight, the Y, U and V components in the accumulator are dithered or rounded to create 8 bit values for each of the components.
- In the preferred embodiment, the StoreVectorAccumulatorYUVLeft instruction writes the Y, U and V values to the locations in the addressed memory word corresponding to the left YUV pixel, i.e. the word is arranged as YUYV, and the first Y value and the U and V values are over-written. In the preferred embodiment, the StoreVectorAccumulatorYUVRight instruction writes the Y value to the memory location corresponding to the Y component of the right YUV pixel, i.e. the second Y value in the preceding example. In other embodiments the U and V values may be combined with the U and V values already in memory creating a weighted sum of the existing and stored values and storing the result.
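The packing and unpacking behavior described above can be sketched as follows. The exact bit layouts are assumptions for illustration (R in the most significant bits of an RGB16 half-word, the left half-word as the upper 16 bits, and YUYV with the first Y in the most significant byte), and simple round-to-nearest stands in for the rounding/dithering options the text mentions.

```python
def load_rgb16(word32, left):
    """VectorLoadRGB16Left/Right sketch: pick one RGB16 half-word out of a
    32-bit word and zero-expand the 5/6/5-bit fields to 8 bits each."""
    half = (word32 >> 16) & 0xFFFF if left else word32 & 0xFFFF
    r, g, b = (half >> 11) & 0x1F, (half >> 5) & 0x3F, half & 0x1F
    return (r << 3, g << 2, b << 3)        # pad low-order zeros

def store_rgb16(r8, g8, b8):
    """StoreVectorAccumulatorRGB16 sketch: round 8-bit components down to
    5/6/5 bits and pack them into one 16-bit half-word."""
    r = min(31, (r8 + 4) >> 3)
    g = min(63, (g8 + 2) >> 2)
    b = min(31, (b8 + 4) >> 3)
    return (r << 11) | (g << 5) | b

def load_yuv(word32, left):
    """LoadYUVLeft/Right sketch: both pixels of a YUYV-packed word share
    the U and V bytes; each pixel takes its own Y."""
    y0, u = (word32 >> 24) & 0xFF, (word32 >> 16) & 0xFF
    y1, v = (word32 >> 8) & 0xFF, word32 & 0xFF
    return (y0 if left else y1, u, v)

print(load_rgb16(0xFFFF0000, left=True))    # (248, 252, 248)
print(hex(store_rgb16(255, 255, 255)))      # 0xffff
print(load_yuv(0x10801A90, left=False))     # (26, 128, 144)
```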
- The coprocessor instruction set preferably also includes a GreaterThanOrEqualTo (GE) instruction. The GE instruction performs a greater-than-or-equal-to comparison between each element of a pair of 3-element vectors. Each element in each of the 3-element vectors has a size of one byte. The results of all three comparisons, one bit per result, are placed in a
result register 1310, which may subsequently be used for a single conditional branch operation. This saves many instructions (clock cycles) when performing comparisons between all the elements of two pixels. - The graphics accelerator preferably includes a
data SRAM 1302, also called a scratch pad memory, and not a conventional data cache. In other embodiments, the graphics accelerator may not include the data SRAM, and the data SRAM may be coupled to the graphics accelerator instead. The data SRAM 1302 is similar to a cache that is managed in software. The graphics accelerator preferably also includes a DMA engine 1304 with queued commands. In other embodiments, the graphics accelerator may not include the DMA engine, and the DMA engine may be coupled to the graphics accelerator instead. The DMA engine 1304 is associated with the data SRAM 1302 and preferably moves data between the data SRAM 1302 and main memory 28 at the same time the graphics accelerator 64 is using the data SRAM 1302 for its load and store operations. In the preferred embodiment, the main memory 28 is the unified memory that is shared by the graphics display system, the CPU 22, and other peripherals. - The
DMA engine 1304 preferably transfers data between the memory 28 and the data SRAM 1302 to carry out load and store instructions. In other embodiments, the DMA engine 1304 may transfer data between the memory 28 and other components of the graphics accelerator without using the data SRAM 1302. Using the data SRAM, however, generally results in faster loading and storing operations. - The
DMA engine 1304 preferably has a queue 1306 to hold multiple DMA commands, which are executed sequentially in the order they are received. In the preferred embodiment, the queue 1306 is four instructions deep. This may be valuable because the software (firmware) may be structured so that the loop above the inner loop instructs the DMA engine 1304 to perform a series of transfers, e.g., to get two sets of operands and write one set of results back, and then the inner loop executes for a while; when the inner loop is done, the graphics accelerator 64 may check the command queue 1306 in the DMA engine 1304 to see if all of the DMA commands have been completed. The queue includes a mechanism that allows the graphics accelerator to determine when all the DMA commands have been completed. If all of the DMA commands have been completed, the graphics accelerator 64 preferably immediately proceeds to do more work, such as commanding additional DMA operations to be performed and processing the new operands. If not, the graphics accelerator 64 preferably waits for the completion of the DMA commands or performs some other task for a while. - Typically, the
graphics accelerator 64 is working on operands and producing outputs for one set of pixels, while the DMA engine 1304 is bringing in operands for the next (future) set of pixel operations and writing back to memory the results from the previous set of pixel operations. In this way, the graphics accelerator 64 never has to wait for DMA transfers (if the code is designed well), unlike a conventional data cache, which gets new operands only when there is a cache miss and writes back results only when the cache line is needed for new operands or when there is an explicit cache line flush operation. Therefore, the graphics accelerator 64 of the present invention preferably reduces or eliminates periods of waiting for data, unlike conventional graphics accelerators, which may spend a large fraction of their time waiting for data transfer operations between the cache and main memory. - Although this invention has been described in certain specific embodiments, many additional modifications and variations would be apparent to those skilled in the art. It is therefore to be understood that this invention may be practiced otherwise than as specifically described. Thus, the present embodiments of the invention should be considered in all respects as illustrative and not restrictive, the scope of the invention to be determined by the appended claims and their equivalents.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/953,774 US20110292082A1 (en) | 1998-11-09 | 2010-11-24 | Graphics Display System With Anti-Flutter Filtering And Vertical Scaling Feature |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10787598P | 1998-11-09 | 1998-11-09 | |
US09/437,327 US6738072B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US10/842,743 US6879330B2 (en) | 1998-11-09 | 2004-05-10 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US11/097,028 US7098930B2 (en) | 1998-11-09 | 2005-04-01 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US11/511,042 US7310104B2 (en) | 1998-11-09 | 2006-08-28 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US11/959,315 US7554562B2 (en) | 1998-11-09 | 2007-12-18 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US12/494,909 US20100171762A1 (en) | 1998-11-09 | 2009-06-30 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US12/953,774 US20110292082A1 (en) | 1998-11-09 | 2010-11-24 | Graphics Display System With Anti-Flutter Filtering And Vertical Scaling Feature |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/494,909 Continuation US20100171762A1 (en) | 1998-11-09 | 2009-06-30 | Graphics display system with anti-flutter filtering and vertical scaling feature |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110292082A1 true US20110292082A1 (en) | 2011-12-01 |
Family
ID=22318929
Family Applications (47)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/437,326 Expired - Fee Related US6661427B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with video scaler |
US09/437,206 Expired - Lifetime US6380945B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with color look-up table loading mechanism |
US09/437,581 Expired - Lifetime US6630945B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with graphics window control mechanism |
US09/437,579 Expired - Lifetime US6501480B1 (en) | 1998-11-09 | 1999-11-09 | Graphics accelerator |
US09/437,580 Expired - Fee Related US7911483B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with window soft horizontal scrolling mechanism |
US09/437,325 Expired - Fee Related US6608630B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with line buffer control scheme |
US09/437,209 Expired - Lifetime US6189064B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with unified memory architecture |
US09/437,205 Expired - Lifetime US6927783B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with anti-aliased text and graphics feature |
US09/437,348 Expired - Lifetime US6700588B1 (en) | 1998-11-09 | 1999-11-09 | Apparatus and method for blending graphics and video surfaces |
US09/437,207 Expired - Lifetime US6744472B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with video synchronization feature |
US09/437,716 Expired - Lifetime US6731295B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with window descriptors |
US09/437,208 Expired - Lifetime US6570579B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system |
US09/437,327 Expired - Lifetime US6738072B1 (en) | 1998-11-09 | 1999-11-09 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US09/712,736 Expired - Lifetime US6529935B1 (en) | 1998-11-09 | 2000-11-14 | Graphics display system with unified memory architecture |
US10/017,784 Expired - Fee Related US6819330B2 (en) | 1998-11-09 | 2001-11-30 | Graphics display System with color look-up table loading mechanism |
US10/282,822 Expired - Lifetime US6762762B2 (en) | 1998-11-09 | 2002-10-28 | Graphics accelerator |
US10/322,059 Expired - Lifetime US6721837B2 (en) | 1998-11-09 | 2002-12-17 | Graphics display system with unified memory architecture |
US10/423,364 Expired - Fee Related US7057622B2 (en) | 1998-11-09 | 2003-04-25 | Graphics display system with line buffer control scheme |
US10/622,194 Expired - Lifetime US7530027B2 (en) | 1998-11-09 | 2003-07-18 | Graphics display system with graphics window control mechanism |
US10/670,627 Expired - Fee Related US7538783B2 (en) | 1998-11-09 | 2003-09-25 | Graphics display system with video scaler |
US10/712,809 Expired - Lifetime US7002602B2 (en) | 1998-11-09 | 2003-11-13 | Apparatus and method for blending graphics and video surfaces |
US10/762,937 Expired - Fee Related US7598962B2 (en) | 1998-11-09 | 2004-01-21 | Graphics display system with window descriptors |
US10/762,975 Expired - Lifetime US7209992B2 (en) | 1998-11-09 | 2004-01-22 | Graphics display system with unified memory architecture |
US10/763,087 Expired - Fee Related US9077997B2 (en) | 1998-11-09 | 2004-01-22 | Graphics display system with unified memory architecture |
US10/770,851 Expired - Lifetime US7015928B2 (en) | 1998-11-09 | 2004-02-03 | Graphics display system with color look-up table loading mechanism |
US10/842,743 Expired - Lifetime US6879330B2 (en) | 1998-11-09 | 2004-05-10 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US10/847,122 Expired - Lifetime US7227582B2 (en) | 1998-11-09 | 2004-05-17 | Graphics display system with video synchronization feature |
US10/889,820 Abandoned US20040246257A1 (en) | 1998-11-09 | 2004-07-13 | Graphics accelerator |
US11/097,028 Expired - Lifetime US7098930B2 (en) | 1998-11-09 | 2005-04-01 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US11/106,038 Expired - Fee Related US7184058B2 (en) | 1998-11-09 | 2005-04-14 | Graphics display system with anti-aliased text and graphics feature |
US11/511,042 Expired - Lifetime US7310104B2 (en) | 1998-11-09 | 2006-08-28 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US11/617,468 Expired - Fee Related US7746354B2 (en) | 1998-11-09 | 2006-12-28 | Graphics display system with anti-aliased text and graphics feature |
US11/738,870 Expired - Fee Related US7545438B2 (en) | 1998-11-09 | 2007-04-23 | Graphics display system with video synchronization feature |
US11/959,139 Expired - Fee Related US7554553B2 (en) | 1998-11-09 | 2007-12-18 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US11/959,315 Expired - Fee Related US7554562B2 (en) | 1998-11-09 | 2007-12-18 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US12/268,036 Expired - Fee Related US8078981B2 (en) | 1998-11-09 | 2008-11-10 | Graphics display system with graphics window control mechanism |
US12/472,235 Expired - Fee Related US7920151B2 (en) | 1998-11-09 | 2009-05-26 | Graphics display system with video scaler |
US12/494,864 Abandoned US20100171761A1 (en) | 1998-11-09 | 2009-06-30 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US12/494,909 Abandoned US20100171762A1 (en) | 1998-11-09 | 2009-06-30 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US12/540,633 Abandoned US20090295815A1 (en) | 1998-11-09 | 2009-08-13 | Graphics display system with window descriptors |
US12/905,617 Expired - Fee Related US8390635B2 (en) | 1998-11-09 | 2010-10-15 | Graphics accelerator |
US12/953,168 Expired - Fee Related US8164601B2 (en) | 1998-11-09 | 2010-11-23 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US12/953,774 Abandoned US20110292082A1 (en) | 1998-11-09 | 2010-11-24 | Graphics Display System With Anti-Flutter Filtering And Vertical Scaling Feature |
US13/079,845 Expired - Lifetime US8493415B2 (en) | 1998-11-09 | 2011-04-05 | Graphics display system with video scaler |
US13/195,115 Expired - Fee Related US8848792B2 (en) | 1998-11-09 | 2011-08-01 | Video and graphics system with video scaling |
US13/782,081 Expired - Fee Related US9111369B2 (en) | 1998-11-09 | 2013-03-01 | Graphics accelerator |
US14/790,923 Expired - Fee Related US9575665B2 (en) | 1998-11-09 | 2015-07-02 | Graphics display system with unified memory architecture |
US11/959,315 Expired - Fee Related US7554562B2 (en) | 1998-11-09 | 2007-12-18 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US12/268,036 Expired - Fee Related US8078981B2 (en) | 1998-11-09 | 2008-11-10 | Graphics display system with graphics window control mechanism |
US12/472,235 Expired - Fee Related US7920151B2 (en) | 1998-11-09 | 2009-05-26 | Graphics display system with video scaler |
US12/494,864 Abandoned US20100171761A1 (en) | 1998-11-09 | 2009-06-30 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US12/494,909 Abandoned US20100171762A1 (en) | 1998-11-09 | 2009-06-30 | Graphics display system with anti-flutter filtering and vertical scaling feature |
US12/540,633 Abandoned US20090295815A1 (en) | 1998-11-09 | 2009-08-13 | Graphics display system with window descriptors |
US12/905,617 Expired - Fee Related US8390635B2 (en) | 1998-11-09 | 2010-10-15 | Graphics accelerator |
US12/953,168 Expired - Fee Related US8164601B2 (en) | 1998-11-09 | 2010-11-23 | Graphics display system with anti-flutter filtering and vertical scaling feature |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/079,845 Expired - Lifetime US8493415B2 (en) | 1998-11-09 | 2011-04-05 | Graphics display system with video scaler |
US13/195,115 Expired - Fee Related US8848792B2 (en) | 1998-11-09 | 2011-08-01 | Video and graphics system with video scaling |
US13/782,081 Expired - Fee Related US9111369B2 (en) | 1998-11-09 | 2013-03-01 | Graphics accelerator |
US14/790,923 Expired - Fee Related US9575665B2 (en) | 1998-11-09 | 2015-07-02 | Graphics display system with unified memory architecture |
Country Status (6)
Country | Link |
---|---|
US (47) | US6661427B1 (en) |
EP (2) | EP1145218B1 (en) |
AT (1) | ATE267439T1 (en) |
AU (1) | AU1910800A (en) |
DE (1) | DE69917489T2 (en) |
WO (1) | WO2000028518A2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090168885A1 (en) * | 2007-12-29 | 2009-07-02 | Yong Peng | Two-dimensional interpolation architecture for motion compensation in multiple video standards |
US11445227B2 (en) | 2018-06-12 | 2022-09-13 | Ela KLIOTS SHAPIRA | Method and system for automatic real-time frame segmentation of high resolution video streams into constituent features and modifications of features in each frame to simultaneously create multiple different linear views from same video source |
Families Citing this family (627)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6886055B2 (en) * | 1997-12-15 | 2005-04-26 | Clearcube Technology, Inc. | Computer on a card with a remote human interface |
WO2000000893A2 (en) * | 1998-06-30 | 2000-01-06 | Koninklijke Philips Electronics N.V. | Memory arrangement based on rate conversion |
US8165155B2 (en) * | 2004-07-01 | 2012-04-24 | Broadcom Corporation | Method and system for a thin client and blade architecture |
US6853385B1 (en) * | 1999-11-09 | 2005-02-08 | Broadcom Corporation | Video, audio and graphics decode, composite and display system |
US6573905B1 (en) * | 1999-11-09 | 2003-06-03 | Broadcom Corporation | Video and graphics system with parallel processing of graphics windows |
US6636222B1 (en) * | 1999-11-09 | 2003-10-21 | Broadcom Corporation | Video and graphics system with an MPEG video decoder for concurrent multi-row decoding |
US6768774B1 (en) * | 1998-11-09 | 2004-07-27 | Broadcom Corporation | Video and graphics system with video scaling |
US8863134B2 (en) * | 1998-11-09 | 2014-10-14 | Broadcom Corporation | Real time scheduling system for operating system |
US7916795B2 (en) * | 1998-11-09 | 2011-03-29 | Broadcom Corporation | Method and system for vertical filtering using window descriptors |
US6798420B1 (en) | 1998-11-09 | 2004-09-28 | Broadcom Corporation | Video and graphics system with a single-port RAM |
AU1910800A (en) * | 1998-11-09 | 2000-05-29 | Broadcom Corporation | Graphics display system |
US6661422B1 (en) | 1998-11-09 | 2003-12-09 | Broadcom Corporation | Video and graphics system with MPEG specific data transfer commands |
US7982740B2 (en) * | 1998-11-09 | 2011-07-19 | Broadcom Corporation | Low resolution graphics mode support using window descriptors |
US7446774B1 (en) * | 1998-11-09 | 2008-11-04 | Broadcom Corporation | Video and graphics system with an integrated system bridge controller |
US7365757B1 (en) * | 1998-12-17 | 2008-04-29 | Ati International Srl | Method and apparatus for independent video and graphics scaling in a video graphics system |
US6329996B1 (en) * | 1999-01-08 | 2001-12-11 | Silicon Graphics, Inc. | Method and apparatus for synchronizing graphics pipelines |
US20030185455A1 (en) * | 1999-02-04 | 2003-10-02 | Goertzen Kenbe D. | Digital image processor |
US20030142875A1 (en) * | 1999-02-04 | 2003-07-31 | Goertzen Kenbe D. | Quality priority |
US6823129B1 (en) * | 2000-02-04 | 2004-11-23 | Quvis, Inc. | Scaleable resolution motion image recording and storage system |
US7623140B1 (en) * | 1999-03-05 | 2009-11-24 | Zoran Corporation | Method and apparatus for processing video and graphics data to create a composite output image having independent and separate layers of video and graphics |
US8698840B2 (en) * | 1999-03-05 | 2014-04-15 | Csr Technology Inc. | Method and apparatus for processing video and graphics data to create a composite output image having independent and separate layers of video and graphics display planes |
JP2000276127A (en) * | 1999-03-23 | 2000-10-06 | Hitachi Ltd | Information processor and display controller |
JP3725368B2 (en) * | 1999-05-17 | 2005-12-07 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Image display selection method, computer system, and recording medium |
US6559859B1 (en) * | 1999-06-25 | 2003-05-06 | Ati International Srl | Method and apparatus for providing video signals |
US11682027B2 (en) * | 1999-07-19 | 2023-06-20 | Expedited Dual Commerce Llc | System and method for expediting fulfillments of online ordered deliverables by expedited-service area pickups in specified time-windows and by delivery to specific locations |
FR2797979B1 (en) * | 1999-08-24 | 2002-05-24 | St Microelectronics Sa | ANTI-FLICKER FILTERING METHOD AND SYSTEM |
US7028096B1 (en) * | 1999-09-14 | 2006-04-11 | Streaming21, Inc. | Method and apparatus for caching for streaming data |
US8913667B2 (en) * | 1999-11-09 | 2014-12-16 | Broadcom Corporation | Video decoding system having a programmable variable-length decoder |
US6538656B1 (en) | 1999-11-09 | 2003-03-25 | Broadcom Corporation | Video and graphics system with a data transport processor |
US6975324B1 (en) * | 1999-11-09 | 2005-12-13 | Broadcom Corporation | Video and graphics system with a video transport processor |
US9668011B2 (en) | 2001-02-05 | 2017-05-30 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Single chip set-top box system |
JP3950926B2 (en) * | 1999-11-30 | 2007-08-01 | エーユー オプトロニクス コーポレイション | Image display method, host device, image display device, and display interface |
CA2394352C (en) | 1999-12-14 | 2008-07-15 | Scientific-Atlanta, Inc. | System and method for adaptive decoding of a video signal with coordinated resource allocation |
WO2001045426A1 (en) * | 1999-12-14 | 2001-06-21 | Broadcom Corporation | Video, audio and graphics decode, composite and display system |
US20020026642A1 (en) * | 1999-12-15 | 2002-02-28 | Augenbraun Joseph E. | System and method for broadcasting web pages and other information |
JP3262772B2 (en) * | 1999-12-17 | 2002-03-04 | 株式会社ナムコ | Image generation system and information storage medium |
GB9930306D0 (en) * | 1999-12-22 | 2000-02-09 | Koninkl Philips Electronics Nv | Broadcast enhancement system and method |
US7483042B1 (en) * | 2000-01-13 | 2009-01-27 | Ati International, Srl | Video graphics module capable of blending multiple image layers |
EP1170942A4 (en) * | 2000-01-24 | 2007-03-07 | Matsushita Electric Ind Co Ltd | Image synthesizing device, recorded medium, and program |
US20010016010A1 (en) * | 2000-01-27 | 2001-08-23 | Lg Electronics Inc. | Apparatus for receiving digital moving picture |
US6654835B1 (en) * | 2000-03-23 | 2003-11-25 | International Business Machines Corporation | High bandwidth data transfer employing a multi-mode, shared line buffer |
US6903748B1 (en) * | 2000-04-11 | 2005-06-07 | Apple Computer, Inc. | Mechanism for color-space neutral (video) effects scripting engine |
US6891533B1 (en) * | 2000-04-11 | 2005-05-10 | Hewlett-Packard Development Company, L.P. | Compositing separately-generated three-dimensional images |
US6781600B2 (en) * | 2000-04-14 | 2004-08-24 | Picsel Technologies Limited | Shape processor |
US7450114B2 (en) * | 2000-04-14 | 2008-11-11 | Picsel (Research) Limited | User interface systems and methods for manipulating and viewing digital documents |
US7576730B2 (en) | 2000-04-14 | 2009-08-18 | Picsel (Research) Limited | User interface systems and methods for viewing and manipulating digital documents |
US6518970B1 (en) * | 2000-04-20 | 2003-02-11 | Ati International Srl | Graphics processing device with integrated programmable synchronization signal generation |
US6891893B2 (en) * | 2000-04-21 | 2005-05-10 | Microsoft Corp. | Extensible multimedia application program interface and related methods |
US7649943B2 (en) * | 2000-04-21 | 2010-01-19 | Microsoft Corporation | Interface and related methods facilitating motion compensation in media processing |
US7634011B2 (en) * | 2000-04-21 | 2009-12-15 | Microsoft Corporation | Application program interface (API) facilitating decoder control of accelerator resources |
US6940912B2 (en) * | 2000-04-21 | 2005-09-06 | Microsoft Corporation | Dynamically adaptive multimedia application program interface and related methods |
US6766281B1 (en) * | 2000-05-12 | 2004-07-20 | S3 Graphics Co., Ltd. | Matched texture filter design for rendering multi-rate data samples |
US6828983B1 (en) * | 2000-05-12 | 2004-12-07 | S3 Graphics Co., Ltd. | Selective super-sampling/adaptive anti-aliasing of complex 3D data |
US6825852B1 (en) * | 2000-05-16 | 2004-11-30 | Adobe Systems Incorporated | Combining images including transparency by selecting color components |
US6798418B1 (en) | 2000-05-24 | 2004-09-28 | Advanced Micro Devices, Inc. | Graphics subsystem including a RAMDAC IC with digital video storage interface for connection to a graphics bus |
US6968305B1 (en) * | 2000-06-02 | 2005-11-22 | Averant, Inc. | Circuit-level memory and combinational block modeling |
US7210099B2 (en) | 2000-06-12 | 2007-04-24 | Softview Llc | Resolution independent vector display of internet content |
US7405734B2 (en) * | 2000-07-18 | 2008-07-29 | Silicon Graphics, Inc. | Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units |
US6573946B1 (en) * | 2000-08-31 | 2003-06-03 | Intel Corporation | Synchronizing video streams with different pixel clock rates |
EP1199888B1 (en) * | 2000-10-19 | 2006-09-13 | Sanyo Electric Co., Ltd. | Image data output device and receiving device |
US7119815B1 (en) | 2000-10-31 | 2006-10-10 | Intel Corporation | Analyzing alpha values for flicker filtering |
JP2004513578A (en) * | 2000-10-31 | 2004-04-30 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Method and apparatus for creating a video scene containing graphic elements |
US6903753B1 (en) * | 2000-10-31 | 2005-06-07 | Microsoft Corporation | Compositing images from multiple sources |
US6725446B1 (en) * | 2000-11-01 | 2004-04-20 | Digital Integrator, Inc. | Information distribution method and system |
US6791553B1 (en) * | 2000-11-17 | 2004-09-14 | Hewlett-Packard Development Company, L.P. | System and method for efficiently rendering a jitter enhanced graphical image |
US6985162B1 (en) * | 2000-11-17 | 2006-01-10 | Hewlett-Packard Development Company, L.P. | Systems and methods for rendering active stereo graphical data as passive stereo |
US6864894B1 (en) * | 2000-11-17 | 2005-03-08 | Hewlett-Packard Development Company, L.P. | Single logical screen system and method for rendering graphical data |
US6680739B1 (en) * | 2000-11-17 | 2004-01-20 | Hewlett-Packard Development Company, L.P. | Systems and methods for compositing graphical data |
US6882346B1 (en) * | 2000-11-17 | 2005-04-19 | Hewlett-Packard Development Company, L.P. | System and method for efficiently rendering graphical data |
US6870539B1 (en) * | 2000-11-17 | 2005-03-22 | Hewlett-Packard Development Company, L.P. | Systems for compositing graphical data |
US7358974B2 (en) * | 2001-01-29 | 2008-04-15 | Silicon Graphics, Inc. | Method and system for minimizing an amount of data needed to test data against subarea boundaries in spatially composited digital video |
US20020105592A1 (en) * | 2001-02-05 | 2002-08-08 | Conexant Systems, Inc. | System and method for processing HDTV format video signals |
US6909836B2 (en) * | 2001-02-07 | 2005-06-21 | Autodesk Canada Inc. | Multi-rate real-time players |
US7123307B1 (en) * | 2001-02-23 | 2006-10-17 | Silicon Image, Inc. | Clock jitter limiting scheme in video transmission through multiple stages |
US7057627B2 (en) * | 2001-03-05 | 2006-06-06 | Broadcom Corporation | Video and graphics system with square graphics pixels |
JP3903721B2 (en) | 2001-03-12 | 2007-04-11 | ソニー株式会社 | Information transmitting apparatus and method, information receiving apparatus and method, information transmitting / receiving system and method, recording medium, and program |
JP3610915B2 (en) * | 2001-03-19 | 2005-01-19 | 株式会社デンソー | Processing execution apparatus and program |
US7239324B2 (en) * | 2001-03-23 | 2007-07-03 | Microsoft Corporation | Methods and systems for merging graphics for display on a computing device |
US7038690B2 (en) * | 2001-03-23 | 2006-05-02 | Microsoft Corporation | Methods and systems for displaying animated graphics on a computing device |
US6919900B2 (en) * | 2001-03-23 | 2005-07-19 | Microsoft Corporation | Methods and systems for preparing graphics for display on a computing device |
US6774952B1 (en) * | 2001-04-19 | 2004-08-10 | Lsi Logic Corporation | Bandwidth management |
US20020188424A1 (en) * | 2001-04-20 | 2002-12-12 | Grinstein Georges G. | Method and system for data analysis |
WO2002095601A1 (en) * | 2001-05-22 | 2002-11-28 | Koninklijke Philips Electronics N.V. | Method and system for accelerated access to a memory |
US20090031419A1 (en) * | 2001-05-24 | 2009-01-29 | Indra Laksono | Multimedia system and server and methods for use therewith |
DE60232840D1 (en) * | 2001-06-06 | 2009-08-20 | Thomson Licensing | Video signal processing system capable of processing additional information |
US6943844B2 (en) * | 2001-06-13 | 2005-09-13 | Intel Corporation | Adjusting pixel clock |
US20040010652A1 (en) * | 2001-06-26 | 2004-01-15 | Palmchip Corporation | System-on-chip (SOC) architecture with arbitrary pipeline depth |
US7092035B1 (en) * | 2001-07-09 | 2006-08-15 | Lsi Logic Corporation | Block move engine with scaling and/or filtering for video or graphics |
US7224404B2 (en) * | 2001-07-30 | 2007-05-29 | Samsung Electronics Co., Ltd. | Remote display control of video/graphics data |
US7016418B2 (en) * | 2001-08-07 | 2006-03-21 | Ati Technologies, Inc. | Tiled memory configuration for mapping video data and method thereof |
US7253818B2 (en) * | 2001-08-07 | 2007-08-07 | Ati Technologies, Inc. | System for testing multiple devices on a single system and method thereof |
US6828987B2 (en) * | 2001-08-07 | 2004-12-07 | Ati Technologies, Inc. | Method and apparatus for processing video and graphics data |
US6887157B2 (en) * | 2001-08-09 | 2005-05-03 | Igt | Virtual cameras and 3-D gaming environments in a gaming machine |
US8267767B2 (en) | 2001-08-09 | 2012-09-18 | Igt | 3-D reels and 3-D wheels in a gaming machine |
US7367885B2 (en) * | 2001-08-09 | 2008-05-06 | Igt | 3-D text in a gaming machine |
US7909696B2 (en) | 2001-08-09 | 2011-03-22 | Igt | Game interaction in 3-D gaming environments |
US8002623B2 (en) * | 2001-08-09 | 2011-08-23 | Igt | Methods and devices for displaying multiple game elements |
US7901289B2 (en) * | 2001-08-09 | 2011-03-08 | Igt | Transparent objects on a gaming machine |
US6989840B1 (en) * | 2001-08-31 | 2006-01-24 | Nvidia Corporation | Order-independent transparency rendering system and method |
US20030068038A1 (en) * | 2001-09-28 | 2003-04-10 | Bedros Hanounik | Method and apparatus for encrypting data |
JP2003116004A (en) * | 2001-10-04 | 2003-04-18 | Seiko Epson Corp | Image file containing transparent part |
GB0124630D0 (en) * | 2001-10-13 | 2001-12-05 | Picsel Res Ltd | "Systems and methods for generating visual representations of graphical data and digital document processing" |
AU2002341280A1 (en) * | 2001-10-19 | 2003-04-28 | Koninklijke Philips Electronics N.V. | Method of and display processing unit for displaying a colour image and a display apparatus comprising such a display processing unit |
NL1019365C2 (en) * | 2001-11-14 | 2003-05-15 | Tno | Determination of a movement of a background in a series of images. |
US20030095447A1 (en) * | 2001-11-20 | 2003-05-22 | Koninklijke Philips Electronics N.V. | Shared memory controller for display processor |
US6857033B1 (en) * | 2001-12-27 | 2005-02-15 | Advanced Micro Devices, Inc. | I/O node for a computer system including an integrated graphics engine and an integrated I/O hub |
DE10164337A1 (en) * | 2001-12-28 | 2003-07-17 | Broadcasttelevision Systems Me | Method for generating an image composed of two stored images |
US6587390B1 (en) * | 2001-12-31 | 2003-07-01 | Lsi Logic Corporation | Memory controller for handling data transfers which exceed the page width of DDR SDRAM devices |
US7274857B2 (en) | 2001-12-31 | 2007-09-25 | Scientific-Atlanta, Inc. | Trick modes for compressed video streams |
US6996277B2 (en) * | 2002-01-07 | 2006-02-07 | Xerox Corporation | Image type classification using color discreteness features |
US6856358B1 (en) * | 2002-01-16 | 2005-02-15 | Etron Technology, Inc. | Phase-increase induced backporch decrease (PIBD) phase recovery method for video signal processing |
US20030137523A1 (en) * | 2002-01-22 | 2003-07-24 | International Business Machines Corporation | Enhanced blending unit performance in graphics system |
US6876369B2 (en) * | 2002-01-22 | 2005-04-05 | International Business Machines Corp. | Applying translucent filters according to visual disability needs in a network environment |
US6803922B2 (en) | 2002-02-14 | 2004-10-12 | International Business Machines Corporation | Pixel formatter for two-dimensional graphics engine of set-top box system |
US6996186B2 (en) * | 2002-02-22 | 2006-02-07 | International Business Machines Corporation | Programmable horizontal filter with noise reduction and image scaling for video encoding system |
US6909432B2 (en) * | 2002-02-27 | 2005-06-21 | Hewlett-Packard Development Company, L.P. | Centralized scalable resource architecture and system |
US6924799B2 (en) | 2002-02-28 | 2005-08-02 | Hewlett-Packard Development Company, L.P. | Method, node, and network for compositing a three-dimensional stereo image from a non-stereo application |
US7849172B2 (en) * | 2002-03-01 | 2010-12-07 | Broadcom Corporation | Method of analyzing non-preemptive DRAM transactions in real-time unified memory architectures |
US6819331B2 (en) * | 2002-03-01 | 2004-11-16 | Broadcom Corporation | Method and apparatus for updating a color look-up table |
US7080177B2 (en) * | 2002-03-01 | 2006-07-18 | Broadcom Corporation | System and method for arbitrating clients in a hierarchical real-time DRAM system |
US20030164842A1 (en) * | 2002-03-04 | 2003-09-04 | Oberoi Ranjit S. | Slice blend extension for accumulation buffering |
JP2003280982A (en) * | 2002-03-20 | 2003-10-03 | Seiko Epson Corp | Data transfer device for multi-dimensional memory, data transfer program for multi-dimensional memory and data transfer method for multi-dimensional memory |
US7050113B2 (en) * | 2002-03-26 | 2006-05-23 | International Business Machines Corporation | Digital video data scaler and method |
US7034897B2 (en) | 2002-04-01 | 2006-04-25 | Broadcom Corporation | Method of operating a video decoding system |
US8284844B2 (en) | 2002-04-01 | 2012-10-09 | Broadcom Corporation | Video decoding system supporting multiple standards |
US7302503B2 (en) * | 2002-04-01 | 2007-11-27 | Broadcom Corporation | Memory access engine having multi-level command structure |
US8401084B2 (en) | 2002-04-01 | 2013-03-19 | Broadcom Corporation | System and method for multi-row decoding of video with dependent rows |
US7918730B2 (en) | 2002-06-27 | 2011-04-05 | Igt | Trajectory-based 3-D games of chance for video gaming machines |
US6870542B2 (en) * | 2002-06-28 | 2005-03-22 | Nvidia Corporation | System and method for filtering graphics data on scanout to a monitor |
US20040015551A1 (en) * | 2002-07-18 | 2004-01-22 | Thornton Barry W. | System of co-located computers with content and/or communications distribution |
KR100472464B1 (en) * | 2002-07-20 | 2005-03-10 | 삼성전자주식회사 | Apparatus and method for serial scaling |
US6982727B2 (en) * | 2002-07-23 | 2006-01-03 | Broadcom Corporation | System and method for providing graphics using graphical engine |
FR2842977A1 (en) * | 2002-07-24 | 2004-01-30 | Total Immersion | METHOD AND SYSTEM FOR ENABLING A USER TO MIX REAL-TIME SYNTHESIS IMAGES WITH VIDEO IMAGES |
KR100451554B1 (en) * | 2002-08-30 | 2004-10-08 | 삼성전자주식회사 | System on chip processor for multimedia |
US7782398B2 (en) * | 2002-09-04 | 2010-08-24 | Chan Thomas M | Display processor integrated circuit with on-chip programmable logic for implementing custom enhancement functions |
US7480010B2 (en) * | 2002-09-04 | 2009-01-20 | Denace Enterprise Co., L.L.C. | Customizable ASIC with substantially non-customizable portion that supplies pixel data to a mask-programmable portion in multiple color space formats |
US7202908B2 (en) * | 2002-09-04 | 2007-04-10 | Darien K. Wallace | Deinterlacer using both low angle and high angle spatial interpolation |
US7533402B2 (en) | 2002-09-30 | 2009-05-12 | Broadcom Corporation | Satellite set-top box decoder for simultaneously servicing multiple independent programs for display on independent display device |
US7230987B2 (en) * | 2002-09-30 | 2007-06-12 | Broadcom Corporation | Multiple time-base clock for processing multiple satellite signals |
US7002595B2 (en) * | 2002-10-04 | 2006-02-21 | Broadcom Corporation | Processing of color graphics data |
US6826301B2 (en) * | 2002-10-07 | 2004-11-30 | Infocus Corporation | Data transmission system and method |
WO2004034698A1 (en) * | 2002-10-09 | 2004-04-22 | Matsushita Electric Industrial Co., Ltd. | Information processor |
US8259121B2 (en) * | 2002-10-22 | 2012-09-04 | Broadcom Corporation | System and method for processing data using a network |
US9377987B2 (en) * | 2002-10-22 | 2016-06-28 | Broadcom Corporation | Hardware assisted format change mechanism in a display controller |
US7990390B2 (en) | 2002-10-22 | 2011-08-02 | Broadcom Corporation | Multi-pass system and method supporting multiple streams of video |
US7336283B2 (en) * | 2002-10-24 | 2008-02-26 | Hewlett-Packard Development Company, L.P. | Efficient hardware A-buffer using three-dimensional allocation of fragment memory |
DE10251463A1 (en) * | 2002-11-05 | 2004-05-19 | BSH Bosch und Siemens Hausgeräte GmbH | Electrically-driven water circulation pump for automobile engine, ship, laundry machine or dish washing machine, with circulated liquid providing cooling of rotor of drive motor |
US7477205B1 (en) * | 2002-11-05 | 2009-01-13 | Nvidia Corporation | Method and apparatus for displaying data from multiple frame buffers on one or more display devices |
US8737810B2 (en) * | 2002-11-15 | 2014-05-27 | Thomson Licensing | Method and apparatus for cropping of subtitle elements |
EP1576809B1 (en) | 2002-11-15 | 2007-06-20 | Thomson Licensing | Method and apparatus for composition of subtitles |
US7061495B1 (en) * | 2002-11-18 | 2006-06-13 | Ati Technologies, Inc. | Method and apparatus for rasterizer interpolation |
US7796133B1 (en) | 2002-11-18 | 2010-09-14 | Ati Technologies Ulc | Unified shader |
US8933945B2 (en) * | 2002-11-27 | 2015-01-13 | Ati Technologies Ulc | Dividing work among multiple graphics pipelines using a super-tiling technique |
US7633506B1 (en) | 2002-11-27 | 2009-12-15 | Ati Technologies Ulc | Parallel pipeline graphics system |
US7330195B2 (en) * | 2002-12-18 | 2008-02-12 | Hewlett-Packard Development Company, L.P. | Graphic pieces for a border image |
US7546544B1 (en) * | 2003-01-06 | 2009-06-09 | Apple Inc. | Method and apparatus for creating multimedia presentations |
US7840905B1 (en) | 2003-01-06 | 2010-11-23 | Apple Inc. | Creating a theme used by an authoring application to produce a multimedia presentation |
KR101015412B1 (en) * | 2003-01-17 | 2011-02-22 | 톰슨 라이센싱 | Electronic apparatus generating video signals and process for generating video signals |
US7466855B2 (en) | 2003-02-11 | 2008-12-16 | Research In Motion Limited | Display processing system and method |
JP3672561B2 (en) * | 2003-02-14 | 2005-07-20 | 三菱電機株式会社 | Moving picture synthesizing apparatus, moving picture synthesizing method, and information terminal apparatus with moving picture synthesizing function |
US7986358B1 (en) * | 2003-02-25 | 2011-07-26 | Matrox Electronic Systems, Ltd. | Bayer image conversion using a graphics processing unit |
US7489362B2 (en) | 2003-03-04 | 2009-02-10 | Broadcom Corporation | Television functionality on a chip |
US7679629B2 (en) * | 2003-08-15 | 2010-03-16 | Broadcom Corporation | Methods and systems for constraining a video signal |
US7313764B1 (en) * | 2003-03-06 | 2007-12-25 | Apple Inc. | Method and apparatus to accelerate scrolling for buffered windows |
GB2413720B (en) * | 2003-03-14 | 2006-08-02 | British Broadcasting Corp | Video processing |
US7969451B2 (en) * | 2003-03-27 | 2011-06-28 | International Business Machines Corporation | Method and apparatus for dynamically sizing color tables |
WO2004090860A1 (en) * | 2003-04-01 | 2004-10-21 | Matsushita Electric Industrial Co., Ltd. | Video combining circuit |
CA2463228C (en) | 2003-04-04 | 2012-06-26 | Evertz Microsystems Ltd. | Apparatus, systems and methods for packet based transmission of multiple data signals |
US9330060B1 (en) * | 2003-04-15 | 2016-05-03 | Nvidia Corporation | Method and device for encoding and decoding video image data |
US7800699B2 (en) | 2003-04-16 | 2010-09-21 | Nvidia Corporation | 3:2 Pulldown detection |
US7667710B2 (en) | 2003-04-25 | 2010-02-23 | Broadcom Corporation | Graphics display system with line buffer control scheme |
EP2369590B1 (en) * | 2003-04-28 | 2015-02-25 | Panasonic Corporation | Playback apparatus, playback method, recording medium, recording apparatus, recording method for recording a video stream and graphics with window information over graphics display |
TW200425024A (en) * | 2003-05-08 | 2004-11-16 | Ind Tech Res Inst | Driver system of display |
US7365796B1 (en) | 2003-05-20 | 2008-04-29 | Pixelworks, Inc. | System and method for video signal decoding using digital signal processing |
US7532254B1 (en) | 2003-05-20 | 2009-05-12 | Pixelworks, Inc. | Comb filter system and method |
US7701512B1 (en) | 2003-05-20 | 2010-04-20 | Pixelworks, Inc. | System and method for improved horizontal and vertical sync pulse detection and processing |
US7304688B1 (en) | 2003-05-20 | 2007-12-04 | Pixelworks, Inc. | Adaptive Y/C separator |
US7605867B1 (en) * | 2003-05-20 | 2009-10-20 | Pixelworks, Inc. | Method and apparatus for correction of time base errors |
US7420625B1 (en) | 2003-05-20 | 2008-09-02 | Pixelworks, Inc. | Fuzzy logic based adaptive Y/C separation system and method |
EP1489591B1 (en) * | 2003-06-12 | 2016-12-07 | Microsoft Technology Licensing, LLC | System and method for displaying images utilizing multi-blending |
KR101130413B1 (en) * | 2003-06-19 | 2012-03-27 | 소니 에릭슨 모빌 커뮤니케이션즈 에이비 | Media stream mixing |
US7307667B1 (en) * | 2003-06-27 | 2007-12-11 | Zoran Corporation | Method and apparatus for an integrated high definition television controller |
US7120814B2 (en) * | 2003-06-30 | 2006-10-10 | Raytheon Company | System and method for aligning signals in multiple clock systems |
TWI373150B (en) * | 2003-07-09 | 2012-09-21 | Shinetsu Chemical Co | Silicone rubber composition, light-emitting semiconductor embedding/protecting material and light-emitting semiconductor device |
JP2005039794A (en) * | 2003-07-18 | 2005-02-10 | Matsushita Electric Ind Co Ltd | Display processing method and display processing apparatus |
JP4818919B2 (en) * | 2003-08-28 | 2011-11-16 | ミップス テクノロジーズ インコーポレイテッド | Integrated mechanism for suspending and deallocating computational threads of execution within a processor |
JP2005077501A (en) * | 2003-08-28 | 2005-03-24 | Toshiba Corp | Information processing apparatus, semiconductor device for display control and video stream data display control method |
US7584321B1 (en) * | 2003-08-28 | 2009-09-01 | Nvidia Corporation | Memory address and datapath multiplexing |
JP2005080134A (en) * | 2003-09-02 | 2005-03-24 | Sanyo Electric Co Ltd | Image signal processing circuit |
FR2859591A1 (en) * | 2003-09-08 | 2005-03-11 | St Microelectronics Sa | DVD and digital television image processing circuit includes external volatile memory for processing multiple video and graphical planes |
US7966642B2 (en) * | 2003-09-15 | 2011-06-21 | Nair Ajith N | Resource-adaptive management of video storage |
US20050062755A1 (en) * | 2003-09-18 | 2005-03-24 | Phil Van Dyke | YUV display buffer |
KR100580177B1 (en) * | 2003-09-22 | 2006-05-15 | 삼성전자주식회사 | Display synchronization signal generation apparatus in the digital receiver, decoder and method thereof |
KR100510550B1 (en) * | 2003-09-29 | 2005-08-26 | 삼성전자주식회사 | Method and apparatus for scaling an image in both horizontal and vertical directions |
WO2005039157A1 (en) * | 2003-10-22 | 2005-04-28 | Sanyo Electric Co., Ltd. | Mobile telephone apparatus, display method, and program |
US8063916B2 (en) * | 2003-10-22 | 2011-11-22 | Broadcom Corporation | Graphics layer reduction for video composition |
EP1528512A3 (en) * | 2003-10-28 | 2006-02-15 | Samsung Electronics Co., Ltd. | Graphic decoder, image reproduction apparatus and method for graphic display acceleration based on commands |
US7525526B2 (en) * | 2003-10-28 | 2009-04-28 | Samsung Electronics Co., Ltd. | System and method for performing image reconstruction and subpixel rendering to effect scaling for multi-mode display |
US7262782B1 (en) * | 2003-11-07 | 2007-08-28 | Adobe Systems Incorporated | Selectively transforming overlapping illustration artwork |
US7511714B1 (en) * | 2003-11-10 | 2009-03-31 | Nvidia Corporation | Video format conversion using 3D graphics pipeline of a GPU |
KR100519776B1 (en) * | 2003-11-24 | 2005-10-07 | 삼성전자주식회사 | Method and apparatus for converting resolution of video signal |
US7486337B2 (en) * | 2003-12-22 | 2009-02-03 | Intel Corporation | Controlling the overlay of multiple video signals |
US7535478B2 (en) * | 2003-12-24 | 2009-05-19 | Intel Corporation | Method and apparatus to communicate graphics overlay information to display modules |
US7477140B1 (en) | 2003-12-26 | 2009-01-13 | Booth Kenneth C | See-through lighted information display |
US7461037B2 (en) * | 2003-12-31 | 2008-12-02 | Nokia Siemens Networks Oy | Clustering technique for cyclic phenomena |
US8144156B1 (en) | 2003-12-31 | 2012-03-27 | Zii Labs Inc. Ltd. | Sequencer with async SIMD array |
US7262818B2 (en) * | 2004-01-02 | 2007-08-28 | Trumpion Microelectronic Inc. | Video system with de-motion-blur processing |
US7769198B2 (en) * | 2004-01-09 | 2010-08-03 | Broadcom Corporation | System, method, apparatus for repeating last line to scalar |
US7760968B2 (en) * | 2004-01-16 | 2010-07-20 | Nvidia Corporation | Video image processing with processing time allocation |
US7653265B2 (en) * | 2004-01-16 | 2010-01-26 | Nvidia Corporation | Video image processing with utility processing stage |
US9292904B2 (en) * | 2004-01-16 | 2016-03-22 | Nvidia Corporation | Video image processing with parallel processing |
US7308159B2 (en) * | 2004-01-16 | 2007-12-11 | Enuclia Semiconductor, Inc. | Image processing system and method with dynamically controlled pixel processing |
US8466924B2 (en) * | 2004-01-28 | 2013-06-18 | Entropic Communications, Inc. | Displaying on a matrix display |
JP2005217971A (en) * | 2004-01-30 | 2005-08-11 | Toshiba Corp | Onscreen superposing device |
WO2005074400A2 (en) | 2004-02-10 | 2005-08-18 | Lg Electronics Inc. | Recording medium and method and apparatus for decoding text subtitle streams |
DE602004010777T2 (en) * | 2004-02-18 | 2008-12-04 | Harman Becker Automotive Systems Gmbh | Alpha mix based on a lookup table |
US20050195200A1 (en) * | 2004-03-03 | 2005-09-08 | Chuang Dan M. | Embedded system with 3D graphics core and local pixel buffer |
DE102004012516A1 (en) * | 2004-03-15 | 2005-10-13 | Infineon Technologies Ag | Computer system for electronic data processing |
JP3852452B2 (en) * | 2004-03-16 | 2006-11-29 | ソニー株式会社 | Image data storage method and image processing apparatus |
WO2005096162A1 (en) * | 2004-03-18 | 2005-10-13 | Matsushita Electric Industrial Co., Ltd. | Arbitration method and device |
JP4673885B2 (en) | 2004-03-26 | 2011-04-20 | エルジー エレクトロニクス インコーポレイティド | Recording medium, method for reproducing text subtitle stream, and apparatus therefor |
EP1730730B1 (en) * | 2004-03-26 | 2009-11-25 | LG Electronics, Inc. | Recording medium and method and apparatus for reproducing text subtitle stream recorded on the recording medium |
US20050219252A1 (en) * | 2004-03-30 | 2005-10-06 | Buxton Mark J | Two-dimensional buffer, texture and frame buffer decompression |
FR2868865B1 (en) * | 2004-04-08 | 2007-01-19 | Philippe Hauttecoeur | METHOD AND SYSTEM FOR VOLATILE CONSTRUCTION OF AN IMAGE TO DISPLAY ON A DISPLAY SYSTEM FROM A PLURALITY OF OBJECTS |
US6940517B1 (en) * | 2004-04-26 | 2005-09-06 | Ati Technologies Inc. | Apparatus and method for pixel conversion using multiple buffers |
US20050248586A1 (en) * | 2004-05-06 | 2005-11-10 | Atousa Soroushi | Memory efficient method and apparatus for compression encoding large overlaid camera images |
US20080309817A1 (en) * | 2004-05-07 | 2008-12-18 | Micronas Usa, Inc. | Combined scaling, filtering, and scan conversion |
US7259796B2 (en) * | 2004-05-07 | 2007-08-21 | Micronas Usa, Inc. | System and method for rapidly scaling and filtering video data |
US7411628B2 (en) * | 2004-05-07 | 2008-08-12 | Micronas Usa, Inc. | Method and system for scaling, filtering, scan conversion, panoramic scaling, YC adjustment, and color conversion in a display controller |
US7408590B2 (en) * | 2004-05-07 | 2008-08-05 | Micronas Usa, Inc. | Combined scaling, filtering, and scan conversion |
US7545389B2 (en) * | 2004-05-11 | 2009-06-09 | Microsoft Corporation | Encoding ClearType text for use on alpha blended textures |
JP4624715B2 (en) * | 2004-05-13 | 2011-02-02 | ルネサスエレクトロニクス株式会社 | System LSI |
US8427490B1 (en) | 2004-05-14 | 2013-04-23 | Nvidia Corporation | Validating a graphics pipeline using pre-determined schedules |
US7688337B2 (en) * | 2004-05-21 | 2010-03-30 | Broadcom Corporation | System and method for reducing image scaling complexity with flexible scaling factors |
US7469068B2 (en) * | 2004-05-27 | 2008-12-23 | Seiko Epson Corporation | Method and apparatus for dimensionally transforming an image without a line buffer |
US7567670B2 (en) * | 2004-05-28 | 2009-07-28 | Intel Corporation | Verification information for digital video signal |
JP4306536B2 (en) * | 2004-05-31 | 2009-08-05 | パナソニック電工株式会社 | Scan converter |
US20050270297A1 (en) * | 2004-06-08 | 2005-12-08 | Sony Corporation And Sony Electronics Inc. | Time sliced architecture for graphics display system |
US20050280659A1 (en) * | 2004-06-16 | 2005-12-22 | Paver Nigel C | Display controller bandwidth and power reduction |
US8515741B2 (en) * | 2004-06-18 | 2013-08-20 | Broadcom Corporation | System (s), method (s) and apparatus for reducing on-chip memory requirements for audio decoding |
US8861600B2 (en) * | 2004-06-18 | 2014-10-14 | Broadcom Corporation | Method and system for dynamically configurable DCT/IDCT module in a wireless handset |
JP2006011074A (en) * | 2004-06-25 | 2006-01-12 | Seiko Epson Corp | Display controller, electronic equipment, and image data supply method |
US8600217B2 (en) * | 2004-07-14 | 2013-12-03 | Arturo A. Rodriguez | System and method for improving quality of displayed picture during trick modes |
KR100716982B1 (en) * | 2004-07-15 | 2007-05-10 | 삼성전자주식회사 | Multi-dimensional video format transforming apparatus and method |
US20060012714A1 (en) * | 2004-07-16 | 2006-01-19 | Greenforest Consulting, Inc | Dual-scaler architecture for reducing video processing requirements |
JP4880884B2 (en) * | 2004-07-21 | 2012-02-22 | 株式会社東芝 | Information processing apparatus and display control method |
TWI244333B (en) * | 2004-07-23 | 2005-11-21 | Realtek Semiconductor Corp | Video composing circuit and method thereof |
TWI246326B (en) * | 2004-08-16 | 2005-12-21 | Realtek Semiconductor Corp | Image processing circuit of digital TV |
KR100624311B1 (en) * | 2004-08-30 | 2006-09-19 | 삼성에스디아이 주식회사 | Method for controlling frame memory and display device using the same |
TWI248764B (en) * | 2004-09-01 | 2006-02-01 | Realtek Semiconductor Corp | Method and apparatus for generating visual effect |
US20060050089A1 (en) * | 2004-09-09 | 2006-03-09 | Atousa Soroushi | Method and apparatus for selecting pixels to write to a buffer when creating an enlarged image |
TWI256022B (en) * | 2004-09-24 | 2006-06-01 | Realtek Semiconductor Corp | Method and apparatus for scaling image block |
US8624906B2 (en) * | 2004-09-29 | 2014-01-07 | Nvidia Corporation | Method and system for non stalling pipeline instruction fetching from memory |
JP4367307B2 (en) * | 2004-09-30 | 2009-11-18 | 株式会社日立製作所 | Copyright management method and playback apparatus |
US7426594B1 (en) | 2004-10-08 | 2008-09-16 | Nvidia Corporation | Apparatus, system, and method for arbitrating between memory requests |
US20080192146A1 (en) * | 2004-10-14 | 2008-08-14 | Akifumi Yamana | Video Signal Processor |
US20060092320A1 (en) * | 2004-10-29 | 2006-05-04 | Nickerson Brian R | Transferring a video frame from memory into an on-chip buffer for video processing |
US7643032B2 (en) * | 2004-11-02 | 2010-01-05 | Microsoft Corporation | Texture-based packing, such as for packing 8-bit pixels into two bits |
US7532221B2 (en) * | 2004-11-02 | 2009-05-12 | Microsoft Corporation | Texture-based packing, such as for packing 16-bit pixels into four bits |
US8698817B2 (en) | 2004-11-15 | 2014-04-15 | Nvidia Corporation | Video processor having scalar and vector components |
TWI256036B (en) * | 2004-11-25 | 2006-06-01 | Realtek Semiconductor Corp | Method for blending digital images |
US20060125835A1 (en) * | 2004-12-10 | 2006-06-15 | Li Sha | DMA latency compensation with scaling line buffer |
US7380036B2 (en) * | 2004-12-10 | 2008-05-27 | Micronas Usa, Inc. | Combined engine for video and graphics processing |
US7432981B1 (en) | 2004-12-13 | 2008-10-07 | Nvidia Corporation | Apparatus, system, and method for processing digital audio/video signals |
KR100743520B1 (en) * | 2005-01-04 | 2007-07-27 | 삼성전자주식회사 | Video Scaler and method for scaling video signal |
US7853044B2 (en) * | 2005-01-13 | 2010-12-14 | Nvidia Corporation | Video processing system and method with dynamic tag architecture |
US20060152627A1 (en) * | 2005-01-13 | 2006-07-13 | Ruggiero Carl J | Video processing system and method with dynamic tag architecture |
US7869666B2 (en) * | 2005-01-13 | 2011-01-11 | Nvidia Corporation | Video processing system and method with dynamic tag architecture |
US7738740B2 (en) * | 2005-01-13 | 2010-06-15 | Nvidia Corporation | Video processing system and method with dynamic tag architecture |
US7792385B2 (en) * | 2005-01-25 | 2010-09-07 | Globalfoundries Inc. | Scratch pad for storing intermediate loop filter data |
US8576924B2 (en) * | 2005-01-25 | 2013-11-05 | Advanced Micro Devices, Inc. | Piecewise processing of overlap smoothing and in-loop deblocking |
US8773328B2 (en) * | 2005-02-12 | 2014-07-08 | Broadcom Corporation | Intelligent DMA in a mobile multimedia processor supporting multiple display formats |
US20060184893A1 (en) * | 2005-02-17 | 2006-08-17 | Raymond Chow | Graphics controller providing for enhanced control of window animation |
EP1856912A2 (en) * | 2005-02-28 | 2007-11-21 | Nxp B.V. | New compression format and apparatus using the new compression format for temporarily storing image data in a frame memory |
US8036873B2 (en) * | 2005-02-28 | 2011-10-11 | Synopsys, Inc. | Efficient clock models and their use in simulation |
TWI272006B (en) * | 2005-03-08 | 2007-01-21 | Realtek Semiconductor Corp | Method of recording a plurality of graphic objects and processing apparatus thereof |
US7382376B2 (en) * | 2005-04-01 | 2008-06-03 | Seiko Epson Corporation | System and method for effectively utilizing a memory device in a compressed domain |
US7450129B2 (en) * | 2005-04-29 | 2008-11-11 | Nvidia Corporation | Compression of streams of rendering commands |
US11733958B2 (en) | 2005-05-05 | 2023-08-22 | Iii Holdings 1, Llc | Wireless mesh-enabled system, host device, and method for use therewith |
US8019883B1 (en) | 2005-05-05 | 2011-09-13 | Digital Display Innovations, Llc | WiFi peripheral mode display system |
US20060282855A1 (en) * | 2005-05-05 | 2006-12-14 | Digital Display Innovations, Llc | Multiple remote display system |
US8200796B1 (en) | 2005-05-05 | 2012-06-12 | Digital Display Innovations, Llc | Graphics display system for multiple remote terminals |
US7667707B1 (en) | 2005-05-05 | 2010-02-23 | Digital Display Innovations, Llc | Computer system for supporting multiple remote displays |
US7769274B2 (en) * | 2005-05-06 | 2010-08-03 | Mediatek, Inc. | Video processing and optical recording using a shared memory |
US7847755B1 (en) * | 2005-05-23 | 2010-12-07 | Glance Networks | Method and apparatus for the identification and selective encoding of changed host display information |
US20060274088A1 (en) * | 2005-06-04 | 2006-12-07 | Network I/O, Inc. | Method for drawing graphics in a web browser or web application |
WO2006132069A1 (en) * | 2005-06-09 | 2006-12-14 | Sharp Kabushiki Kaisha | Video signal processing method, video signal processing apparatus, and display apparatus |
US8208564B2 (en) * | 2005-06-24 | 2012-06-26 | Ntt Docomo, Inc. | Method and apparatus for video encoding and decoding using adaptive interpolation |
US7877350B2 (en) | 2005-06-27 | 2011-01-25 | Ab Initio Technology Llc | Managing metadata for graph-based computations |
US7716630B2 (en) * | 2005-06-27 | 2010-05-11 | Ab Initio Technology Llc | Managing parameters for graph-based computations |
US7965773B1 (en) | 2005-06-30 | 2011-06-21 | Advanced Micro Devices, Inc. | Macroblock cache |
US8799757B2 (en) * | 2005-07-01 | 2014-08-05 | Microsoft Corporation | Synchronization aspects of interactive multimedia presentation management |
US8020084B2 (en) | 2005-07-01 | 2011-09-13 | Microsoft Corporation | Synchronization aspects of interactive multimedia presentation management |
US20070006062A1 (en) * | 2005-07-01 | 2007-01-04 | Microsoft Corporation | Synchronization aspects of interactive multimedia presentation management |
US8108787B2 (en) * | 2005-07-01 | 2012-01-31 | Microsoft Corporation | Distributing input events to multiple applications in an interactive media environment |
US8656268B2 (en) | 2005-07-01 | 2014-02-18 | Microsoft Corporation | Queueing events in an interactive media environment |
US8305398B2 (en) * | 2005-07-01 | 2012-11-06 | Microsoft Corporation | Rendering and compositing multiple applications in an interactive media environment |
US7941522B2 (en) * | 2005-07-01 | 2011-05-10 | Microsoft Corporation | Application security in an interactive media environment |
US7721308B2 (en) | 2005-07-01 | 2010-05-18 | Microsoft Corporation | Synchronization aspects of interactive multimedia presentation management
US20070006065A1 (en) * | 2005-07-01 | 2007-01-04 | Microsoft Corporation | Conditional event timing for interactive multimedia presentations |
US8069466B2 (en) * | 2005-08-04 | 2011-11-29 | Nds Limited | Advanced digital TV system |
US8189908B2 (en) * | 2005-09-02 | 2012-05-29 | Adobe Systems, Inc. | System and method for compressing video data and alpha channel data using a single stream |
US8218655B2 (en) | 2005-09-19 | 2012-07-10 | Maxim Integrated Products, Inc. | Method, system and device for improving video quality through in-loop temporal pre-filtering |
US7456904B2 (en) * | 2005-09-22 | 2008-11-25 | Pelco, Inc. | Method and apparatus for superimposing characters on video |
US8223798B2 (en) | 2005-10-07 | 2012-07-17 | Csr Technology Inc. | Adaptive receiver |
US8000423B2 (en) * | 2005-10-07 | 2011-08-16 | Zoran Corporation | Adaptive sample rate converter |
CN101390153B (en) | 2005-10-14 | 2011-10-12 | 三星电子株式会社 | Improved gamut mapping and subpixel rendering system and method |
US9092170B1 (en) | 2005-10-18 | 2015-07-28 | Nvidia Corporation | Method and system for implementing fragment operation processing across a graphics bus interconnect |
JP2007124090A (en) * | 2005-10-26 | 2007-05-17 | Renesas Technology Corp | Information apparatus |
JP4974508B2 (en) * | 2005-10-28 | 2012-07-11 | キヤノン株式会社 | Bus master device, bus arbitration device, and bus arbitration method |
JP5119587B2 (en) * | 2005-10-31 | 2013-01-16 | 株式会社デンソー | Vehicle display device |
US7899864B2 (en) * | 2005-11-01 | 2011-03-01 | Microsoft Corporation | Multi-user terminal services accelerator |
KR101152064B1 (en) * | 2005-11-02 | 2012-06-11 | 엘지디스플레이 주식회사 | Apparatus for performing image and method for driving the same |
GB0524804D0 (en) * | 2005-12-05 | 2006-01-11 | Falanx Microsystems As | Method of and apparatus for processing graphics |
US8112513B2 (en) * | 2005-11-30 | 2012-02-07 | Microsoft Corporation | Multi-user display proxy server |
JP2007156525A (en) * | 2005-11-30 | 2007-06-21 | Matsushita Electric Ind Co Ltd | Drawing processing device and image processing method |
US7492371B2 (en) * | 2005-12-02 | 2009-02-17 | Seiko Epson Corporation | Hardware animation of a bouncing image |
US20070132786A1 (en) * | 2005-12-05 | 2007-06-14 | Prolific Technology Inc. | Segment-based video and graphics system with video window |
KR100968452B1 (en) * | 2005-12-12 | 2010-07-07 | 삼성전자주식회사 | Video processing apparatus and control method thereof |
US8284209B2 (en) * | 2005-12-15 | 2012-10-09 | Broadcom Corporation | System and method for optimizing display bandwidth |
US7636497B1 (en) | 2005-12-27 | 2009-12-22 | Advanced Micro Devices, Inc. | Video rotation in a media acceleration engine |
US7564466B2 (en) * | 2006-01-10 | 2009-07-21 | Kabushiki Kaisha Toshiba | System and method for managing memory for color transforms |
DE102006001681B4 (en) * | 2006-01-12 | 2008-07-10 | Wismüller, Axel, Dipl.-Phys. Dr.med. | Method and device for displaying multi-channel image data |
US8130317B2 (en) * | 2006-02-14 | 2012-03-06 | Broadcom Corporation | Method and system for performing interleaved to planar transformation operations in a mobile terminal having a video display |
US20070201833A1 (en) * | 2006-02-17 | 2007-08-30 | Apple Inc. | Interface for defining aperture |
US8438572B2 (en) * | 2006-03-15 | 2013-05-07 | Freescale Semiconductor, Inc. | Task scheduling method and apparatus |
US20070216685A1 (en) * | 2006-03-15 | 2007-09-20 | Microsoft Corporation | Scene write-once vector and triangle rasterization |
US8599841B1 (en) | 2006-03-28 | 2013-12-03 | Nvidia Corporation | Multi-format bitstream decoding engine |
US8593469B2 (en) * | 2006-03-29 | 2013-11-26 | Nvidia Corporation | Method and circuit for efficient caching of reference video data |
DE102007014590A1 (en) * | 2006-04-11 | 2007-11-15 | Mediatek Inc. | Method and system for image overlay processing |
US8314806B2 (en) * | 2006-04-13 | 2012-11-20 | Intel Corporation | Low power display mode |
US8218091B2 (en) * | 2006-04-18 | 2012-07-10 | Marvell World Trade Ltd. | Shared memory multi video channel display apparatus and methods |
US8284322B2 (en) | 2006-04-18 | 2012-10-09 | Marvell World Trade Ltd. | Shared memory multi video channel display apparatus and methods |
US8264610B2 (en) | 2006-04-18 | 2012-09-11 | Marvell World Trade Ltd. | Shared memory multi video channel display apparatus and methods |
KR101246293B1 (en) * | 2006-04-24 | 2013-03-21 | 삼성전자주식회사 | Method and apparatus for user interface in home network, and electronic device and storage medium thereof |
JP2007299191A (en) * | 2006-04-28 | 2007-11-15 | Yamaha Corp | Image processing apparatus |
US8379735B2 (en) * | 2006-05-15 | 2013-02-19 | Microsoft Corporation | Automatic video glitch detection and audio-video synchronization assessment |
JP4191206B2 (en) * | 2006-06-08 | 2008-12-03 | エーユー オプトロニクス コーポレイション | Image display system and image display apparatus |
JP2008011085A (en) * | 2006-06-28 | 2008-01-17 | Toshiba Corp | Digital tv capture unit, information processor, and method for transmitting signal |
US8009903B2 (en) * | 2006-06-29 | 2011-08-30 | Panasonic Corporation | Image processor, image processing method, storage medium, and integrated circuit that can adjust a degree of depth feeling of a displayed high-quality image |
US7660486B2 (en) * | 2006-07-10 | 2010-02-09 | Aten International Co., Ltd. | Method and apparatus of removing opaque area as rescaling an image |
US8095745B1 (en) * | 2006-08-07 | 2012-01-10 | Marvell International Ltd. | Non-sequential transfer of data from a memory |
WO2008021953A2 (en) | 2006-08-10 | 2008-02-21 | Ab Initio Software Llc | Distributing services in graph-based computations |
JP4827659B2 (en) * | 2006-08-25 | 2011-11-30 | キヤノン株式会社 | Image processing apparatus, image processing method, and computer program |
US20080062304A1 (en) * | 2006-09-07 | 2008-03-13 | Claude Villeneuve | Method and apparatus for displaying at least one video signal on at least one display |
US20080062312A1 (en) * | 2006-09-13 | 2008-03-13 | Jiliang Song | Methods and Devices of Using a 26 MHz Clock to Encode Videos |
US20080062311A1 (en) * | 2006-09-13 | 2008-03-13 | Jiliang Song | Methods and Devices to Use Two Different Clocks in a Television Digital Encoder |
US8013869B2 (en) * | 2006-09-13 | 2011-09-06 | Adobe Systems Incorporated | Color selection interface |
US7692647B2 (en) * | 2006-09-14 | 2010-04-06 | Microsoft Corporation | Real-time rendering of realistic rain |
TWI320158B (en) * | 2006-09-25 | 2010-02-01 | Image scaling circuit and method thereof | |
US7460725B2 (en) * | 2006-11-09 | 2008-12-02 | Calista Technologies, Inc. | System and method for effectively encoding and decoding electronic information |
US7813425B2 (en) * | 2006-11-29 | 2010-10-12 | Ipera Technology, Inc. | System and method for processing videos and images to a determined quality level |
US9965886B2 (en) * | 2006-12-04 | 2018-05-08 | Arm Norway As | Method of and apparatus for processing graphics |
US7720300B1 (en) | 2006-12-05 | 2010-05-18 | Calister Technologies | System and method for effectively performing an adaptive quantization procedure |
US8736627B2 (en) * | 2006-12-19 | 2014-05-27 | Via Technologies, Inc. | Systems and methods for providing a shared buffer in a multiple FIFO environment |
US7712047B2 (en) * | 2007-01-03 | 2010-05-04 | Microsoft Corporation | Motion desktop |
KR20080064607A (en) * | 2007-01-05 | 2008-07-09 | 삼성전자주식회사 | A unified memory apparatus for a reconfigurable processor and method of using thereof |
KR100823169B1 (en) | 2007-01-25 | 2008-04-18 | 삼성전자주식회사 | Flash memory system capable of improving the access performance and access method thereof |
CN100512373C (en) * | 2007-02-13 | 2009-07-08 | 华为技术有限公司 | Interlacing display anti-flickering method and apparatus |
JP4748077B2 (en) * | 2007-02-14 | 2011-08-17 | セイコーエプソン株式会社 | Pixel data transfer control device and pixel data transfer control method |
US8154561B1 (en) | 2007-03-22 | 2012-04-10 | Adobe Systems Incorporated | Dynamic display of a harmony rule list |
US8537890B2 (en) * | 2007-03-23 | 2013-09-17 | Ati Technologies Ulc | Video decoder with adaptive outputs |
US8144170B2 (en) * | 2007-03-28 | 2012-03-27 | Himax Technologies Limited | Apparatus for scaling image and line buffer thereof |
US8462141B2 (en) * | 2007-04-26 | 2013-06-11 | Freescale Semiconductor, Inc. | Unified memory architecture and display controller to prevent data feed under-run |
TW200843523A (en) * | 2007-04-27 | 2008-11-01 | Cheertek Inc | System and method for adjusting monitor chrominance using multiple window |
US8861591B2 (en) * | 2007-05-11 | 2014-10-14 | Advanced Micro Devices, Inc. | Software video encoder with GPU acceleration |
US8233527B2 (en) * | 2007-05-11 | 2012-07-31 | Advanced Micro Devices, Inc. | Software video transcoder with GPU acceleration |
US8384710B2 (en) * | 2007-06-07 | 2013-02-26 | Igt | Displaying and using 3D graphics on multiple displays provided for gaming environments |
US7821524B2 (en) * | 2007-06-26 | 2010-10-26 | Microsoft Corporation | Adaptive contextual filtering |
US7978195B2 (en) * | 2007-07-02 | 2011-07-12 | Hyperformix, Inc. | Method for superimposing statistical information on tabular data |
JP2009027552A (en) * | 2007-07-20 | 2009-02-05 | Funai Electric Co Ltd | Optical disk playback apparatus |
US8073282B2 (en) * | 2007-07-23 | 2011-12-06 | Qualcomm Incorporated | Scaling filter for video sharpening |
US8599315B2 (en) * | 2007-07-25 | 2013-12-03 | Silicon Image, Inc. | On screen displays associated with remote video source devices |
EP2234017A3 (en) | 2007-07-26 | 2010-10-27 | Ab Initio Technology LLC | Transactional graph-based computation with error handling |
US8683126B2 (en) | 2007-07-30 | 2014-03-25 | Nvidia Corporation | Optimal use of buffer space by a storage controller which writes retrieved data directly to a memory |
US20090033791A1 (en) * | 2007-07-31 | 2009-02-05 | Scientific-Atlanta, Inc. | Video processing systems and methods |
US8659601B1 (en) | 2007-08-15 | 2014-02-25 | Nvidia Corporation | Program sequencer for generating indeterminant length shader programs for a graphics processor |
US9024957B1 (en) | 2007-08-15 | 2015-05-05 | Nvidia Corporation | Address independent shader program loading |
US20090282199A1 (en) * | 2007-08-15 | 2009-11-12 | Cox Michael B | Memory control system and method |
US9209792B1 (en) | 2007-08-15 | 2015-12-08 | Nvidia Corporation | Clock selection system and method |
US8411096B1 (en) | 2007-08-15 | 2013-04-02 | Nvidia Corporation | Shader program instruction fetch |
US8698819B1 (en) | 2007-08-15 | 2014-04-15 | Nvidia Corporation | Software assisted shader merging |
US9024966B2 (en) * | 2007-09-07 | 2015-05-05 | Qualcomm Incorporated | Video blending using time-averaged color keys |
JP4950834B2 (en) * | 2007-10-19 | 2012-06-13 | キヤノン株式会社 | Image processing apparatus and image processing method |
US8698756B2 (en) * | 2007-11-06 | 2014-04-15 | Stmicroelectronics Asia Pacific Pte Ltd. | Interrupt reduction method in touch screen controller |
US8397207B2 (en) * | 2007-11-26 | 2013-03-12 | Microsoft Corporation | Logical structure design surface |
US8650238B2 (en) * | 2007-11-28 | 2014-02-11 | Qualcomm Incorporated | Resolving buffer underflow/overflow in a digital system |
SG152952A1 (en) * | 2007-12-05 | 2009-06-29 | Gemini Info Pte Ltd | Method for automatically producing video cartoon with superimposed faces from cartoon template |
US9064333B2 (en) | 2007-12-17 | 2015-06-23 | Nvidia Corporation | Interrupt handling techniques in the rasterizer of a GPU |
US8780123B2 (en) | 2007-12-17 | 2014-07-15 | Nvidia Corporation | Interrupt handling techniques in the rasterizer of a GPU |
SG153692A1 (en) * | 2007-12-19 | 2009-07-29 | St Microelectronics Asia | Method of scanning an array of sensors |
KR100941029B1 (en) * | 2008-02-27 | 2010-02-05 | 에이치기술(주) | Graphic accelerator and graphic accelerating method |
JP4528843B2 (en) * | 2008-03-28 | 2010-08-25 | シャープ株式会社 | Line buffer circuit, image processing apparatus, and image forming apparatus |
US9438844B2 (en) * | 2008-04-08 | 2016-09-06 | Imagine Communications Corp. | Video multiviewer system using direct memory access (DMA) registers and block RAM |
US9716854B2 (en) * | 2008-04-09 | 2017-07-25 | Imagine Communications Corp. | Video multiviewer system with distributed scaling and related methods |
US8773469B2 (en) * | 2008-04-09 | 2014-07-08 | Imagine Communications Corp. | Video multiviewer system with serial digital interface and related methods |
US9172900B2 (en) * | 2008-04-09 | 2015-10-27 | Imagine Communications Corp. | Video multiviewer system with switcher and distributed scaling and related methods |
US8811499B2 (en) * | 2008-04-10 | 2014-08-19 | Imagine Communications Corp. | Video multiviewer system permitting scrolling of multiple video windows and related methods |
US9124847B2 (en) | 2008-04-10 | 2015-09-01 | Imagine Communications Corp. | Video multiviewer system for generating video data based upon multiple video inputs with added graphic content and related methods |
US8525837B2 (en) * | 2008-04-29 | 2013-09-03 | Teledyne Lecroy, Inc. | Method and apparatus for data preview |
US8681861B2 (en) | 2008-05-01 | 2014-03-25 | Nvidia Corporation | Multistandard hardware video encoder |
US8923385B2 (en) | 2008-05-01 | 2014-12-30 | Nvidia Corporation | Rewind-enabled hardware encoder |
US11792538B2 (en) | 2008-05-20 | 2023-10-17 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US8866920B2 (en) | 2008-05-20 | 2014-10-21 | Pelican Imaging Corporation | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
US8213728B2 (en) * | 2008-06-16 | 2012-07-03 | Taiwan Imagingtek Corporation | Method and apparatus of image compression with output control |
US20090310947A1 (en) * | 2008-06-17 | 2009-12-17 | Scaleo Chip | Apparatus and Method for Processing and Blending Multiple Heterogeneous Video Sources for Video Output |
CN101606848A (en) * | 2008-06-20 | 2009-12-23 | Ge医疗系统环球技术有限公司 | Data entry device and supersonic imaging device |
US8422783B2 (en) * | 2008-06-25 | 2013-04-16 | Sharp Laboratories Of America, Inc. | Methods and systems for region-based up-scaling |
US8300696B2 (en) * | 2008-07-25 | 2012-10-30 | Cisco Technology, Inc. | Transcoding for systems operating under plural video coding specifications |
US8898716B2 (en) * | 2008-07-28 | 2014-11-25 | Stmicroelectronics International N.V. | Method and apparatus for designing a communication mechanism between embedded cable modem and embedded set-top box |
US7982723B2 (en) * | 2008-09-18 | 2011-07-19 | Stmicroelectronics Asia Pacific Pte. Ltd. | Multiple touch location in a three dimensional touch screen sensor |
US8279240B2 (en) * | 2008-09-29 | 2012-10-02 | Intel Corporation | Video scaling techniques |
JP5195250B2 (en) * | 2008-10-03 | 2013-05-08 | ソニー株式会社 | Image display system and image display apparatus |
KR101546022B1 (en) * | 2008-12-09 | 2015-08-20 | 삼성전자주식회사 | Apparatus and method for data management |
US8489851B2 (en) | 2008-12-11 | 2013-07-16 | Nvidia Corporation | Processing of read requests in a memory controller using pre-fetch mechanism |
US9218792B2 (en) | 2008-12-11 | 2015-12-22 | Nvidia Corporation | Variable scaling of image data for aspect ratio conversion |
US20100156934A1 (en) * | 2008-12-23 | 2010-06-24 | Wujian Zhang | Video Display Controller |
WO2010078277A1 (en) * | 2008-12-29 | 2010-07-08 | Celio Technology Corporation | Graphics processor |
US8619083B2 (en) * | 2009-01-06 | 2013-12-31 | Microsoft Corporation | Multi-layer image composition with intermediate blending resolutions |
AU2010213618B9 (en) | 2009-02-13 | 2015-07-30 | Ab Initio Technology Llc | Managing task execution |
US9135002B1 (en) * | 2009-03-06 | 2015-09-15 | Symantec Corporation | Systems and methods for recovering an application on a computing device |
US20100245675A1 (en) * | 2009-03-25 | 2010-09-30 | Nausser Fathollahi | Method and system for video parameter analysis and transmission |
US8860745B2 (en) * | 2009-06-01 | 2014-10-14 | Stmicroelectronics, Inc. | System and method for color gamut mapping |
GB2470611B (en) | 2009-06-25 | 2011-06-29 | Tv One Ltd | Apparatus and method for processing data |
US8634023B2 (en) * | 2009-07-21 | 2014-01-21 | Qualcomm Incorporated | System for video frame synchronization using sub-frame memories |
US8667329B2 (en) | 2009-09-25 | 2014-03-04 | Ab Initio Technology Llc | Processing transactions in graph-based applications |
US9665969B1 (en) * | 2009-09-29 | 2017-05-30 | Nvidia Corporation | Data path and instruction set for packed pixel operations for video processing |
US20110084982A1 (en) * | 2009-10-12 | 2011-04-14 | Sony Corporation | Apparatus and Method for Displaying Image Data With Memory Reduction |
US20110119454A1 (en) * | 2009-11-17 | 2011-05-19 | Hsiang-Tsung Kung | Display system for simultaneous displaying of windows generated by multiple window systems belonging to the same computer platform |
TWI482483B (en) * | 2009-12-03 | 2015-04-21 | Univ Nat Yang Ming | A head-mounted visual display device with stereo vision |
CN102474632A (en) * | 2009-12-08 | 2012-05-23 | 美国博通公司 | Method and system for handling multiple 3-d video formats |
EP2355472B1 (en) * | 2010-01-22 | 2020-03-04 | Samsung Electronics Co., Ltd. | Apparatus and method for transmitting and receiving handwriting animation message |
JP2011160075A (en) * | 2010-01-29 | 2011-08-18 | Sony Corp | Image processing device and method |
US9235452B2 (en) * | 2010-02-05 | 2016-01-12 | Microsoft Technology Licensing, Llc | Graphics remoting using augmentation data |
US9292161B2 (en) * | 2010-03-24 | 2016-03-22 | Microsoft Technology Licensing, Llc | Pointer tool with touch-enabled precise placement |
US8704783B2 (en) | 2010-03-24 | 2014-04-22 | Microsoft Corporation | Easy word selection and selection ahead of finger |
US8503534B2 (en) * | 2010-04-22 | 2013-08-06 | Maxim Integrated Products, Inc. | Multi-bus architecture for a video codec |
US9183560B2 (en) | 2010-05-28 | 2015-11-10 | Daniel H. Abelow | Reality alternate |
US8730251B2 (en) | 2010-06-07 | 2014-05-20 | Apple Inc. | Switching video streams for a display without a visible interruption |
EP3287896B1 (en) | 2010-06-15 | 2023-04-26 | Ab Initio Technology LLC | Dynamically loading graph-based computations |
GB2481857B (en) * | 2010-07-09 | 2017-02-08 | Snell Ltd | Methods and apparatus for resampling a spatially sampled attribute of an image |
GB2488396A (en) * | 2010-08-17 | 2012-08-29 | Streamworks Internat S A | Video signal processing |
US8493404B2 (en) | 2010-08-24 | 2013-07-23 | Qualcomm Incorporated | Pixel rendering on display |
KR20120066305A (en) * | 2010-12-14 | 2012-06-22 | 한국전자통신연구원 | Caching apparatus and method for video motion estimation and motion compensation |
US8878950B2 (en) | 2010-12-14 | 2014-11-04 | Pelican Imaging Corporation | Systems and methods for synthesizing high resolution images using super-resolution processes |
US8914534B2 (en) | 2011-01-05 | 2014-12-16 | Sonic Ip, Inc. | Systems and methods for adaptive bitrate streaming of media stored in matroska container files using hypertext transfer protocol |
US8786673B2 (en) | 2011-01-07 | 2014-07-22 | Cyberlink Corp. | Systems and methods for performing video conversion based on non-linear stretch information |
US8639053B2 (en) | 2011-01-18 | 2014-01-28 | Dimension, Inc. | Methods and systems for up-scaling a standard definition (SD) video to high definition (HD) quality |
US8723889B2 (en) | 2011-01-25 | 2014-05-13 | Freescale Semiconductor, Inc. | Method and apparatus for processing temporal and spatial overlapping updates for an electronic display |
US20120198181A1 (en) | 2011-01-31 | 2012-08-02 | Srinjoy Das | System and Method for Managing a Memory as a Circular Buffer |
GB2488516A (en) * | 2011-02-15 | 2012-09-05 | Advanced Risc Mach Ltd | Using priority dependent delays to ensure that the average delay between accesses to a memory remains below a threshold |
US8896610B2 (en) * | 2011-02-18 | 2014-11-25 | Texas Instruments Incorporated | Error recovery operations for a hardware accelerator |
KR20120105615A (en) * | 2011-03-16 | 2012-09-26 | 삼성전자주식회사 | Color space determination device and display device including the same |
KR20120108136A (en) * | 2011-03-23 | 2012-10-05 | 삼성전자주식회사 | Method for processing image and devices using the method |
US8891854B2 (en) * | 2011-03-31 | 2014-11-18 | Realtek Semiconductor Corp. | Device and method for transforming 2D images into 3D images |
TWI517667B (en) * | 2011-03-31 | 2016-01-11 | 瑞昱半導體股份有限公司 | Device and method for transforming 2D images into 3D images |
US20130128120A1 (en) * | 2011-04-06 | 2013-05-23 | Rupen Chanda | Graphics Pipeline Power Consumption Reduction |
WO2012170954A2 (en) * | 2011-06-10 | 2012-12-13 | Flir Systems, Inc. | Line based image processing and flexible memory system |
US9317196B2 (en) | 2011-08-10 | 2016-04-19 | Microsoft Technology Licensing, Llc | Automatic zooming for text selection/cursor placement |
US8818171B2 (en) * | 2011-08-30 | 2014-08-26 | Kourosh Soroushian | Systems and methods for encoding alternative streams of video for playback on playback devices having predetermined display aspect ratios and network connection maximum data rates |
US9467708B2 (en) | 2011-08-30 | 2016-10-11 | Sonic Ip, Inc. | Selection of resolutions for seamless resolution switching of multimedia content |
KR102074148B1 (en) | 2011-08-30 | 2020-03-17 | 엔엘디 엘엘씨 | Systems and methods for encoding and streaming video encoded using a plurality of maximum bitrate levels |
JP5611917B2 (en) * | 2011-09-20 | 2014-10-22 | 株式会社東芝 | Projector and image processing apparatus |
US9129183B2 (en) | 2011-09-28 | 2015-09-08 | Pelican Imaging Corporation | Systems and methods for encoding light field image files |
GB2495553B (en) * | 2011-10-14 | 2018-01-03 | Snell Advanced Media Ltd | Re-sampling method and apparatus |
US9148670B2 (en) * | 2011-11-30 | 2015-09-29 | Freescale Semiconductor, Inc. | Multi-core decompression of block coded video data |
US9589540B2 (en) * | 2011-12-05 | 2017-03-07 | Microsoft Technology Licensing, Llc | Adaptive control of display refresh rate based on video frame rate and power efficiency |
EP2798629A4 (en) * | 2011-12-29 | 2015-08-05 | Intel Corp | Reducing the number of scaling engines used in a display controller to display a plurality of images on a screen |
WO2013103339A1 (en) * | 2012-01-04 | 2013-07-11 | Intel Corporation | Bimodal functionality between coherent link and memory expansion |
US8928677B2 (en) * | 2012-01-24 | 2015-01-06 | Nvidia Corporation | Low latency concurrent computation |
US20130207981A1 (en) * | 2012-02-09 | 2013-08-15 | Honeywell International Inc. | Apparatus and methods for cursor animation |
US9087409B2 (en) | 2012-03-01 | 2015-07-21 | Qualcomm Incorporated | Techniques for reducing memory access bandwidth in a graphics processing system based on destination alpha values |
US9619912B2 (en) * | 2012-03-02 | 2017-04-11 | Verizon Patent And Licensing Inc. | Animated transition from an application window to another application window |
CN102664937B (en) * | 2012-04-09 | 2016-02-03 | 威盛电子股份有限公司 | Cloud computing graphics server and cloud computing graphics service method |
US8847970B2 (en) | 2012-04-18 | 2014-09-30 | 2236008 Ontario Inc. | Updating graphical content based on dirty display buffers |
US9483856B2 (en) | 2012-04-20 | 2016-11-01 | Freescale Semiconductor, Inc. | Display controller with blending stage |
US9148699B2 (en) * | 2012-06-01 | 2015-09-29 | Texas Instruments Incorporated | Optimized algorithm for construction of composite video from a set of discrete video sources |
US9251555B2 (en) | 2012-06-08 | 2016-02-02 | 2236008 Ontario, Inc. | Tiled viewport composition |
US8547480B1 (en) | 2012-06-25 | 2013-10-01 | Google Inc. | Coordinating distributed graphics rendering in a multi-window display |
CN102833507A (en) * | 2012-07-20 | 2012-12-19 | 圆刚科技股份有限公司 | Video recording device |
AU2013305770A1 (en) | 2012-08-21 | 2015-02-26 | Pelican Imaging Corporation | Systems and methods for parallax detection and correction in images captured using array cameras |
US10521250B2 (en) | 2012-09-12 | 2019-12-31 | The Directv Group, Inc. | Method and system for communicating between a host device and user device through an intermediate device using a composite video signal |
US9535722B2 (en) * | 2012-09-12 | 2017-01-03 | The Directv Group, Inc. | Method and system for communicating between a host device and a user device through an intermediate device using a composite graphics signal |
US9979960B2 (en) | 2012-10-01 | 2018-05-22 | Microsoft Technology Licensing, Llc | Frame packing and unpacking between frames of chroma sampling formats with different chroma resolutions |
US9661340B2 (en) | 2012-10-22 | 2017-05-23 | Microsoft Technology Licensing, Llc | Band separation filtering / inverse filtering for frame packing / unpacking higher resolution chroma sampling formats |
US9507682B2 (en) | 2012-11-16 | 2016-11-29 | Ab Initio Technology Llc | Dynamic graph performance monitoring |
US10108521B2 (en) | 2012-11-16 | 2018-10-23 | Ab Initio Technology Llc | Dynamic component performance monitoring |
CN102968395B (en) * | 2012-11-28 | 2015-04-15 | 中国人民解放军国防科学技术大学 | Method and device for accelerating memory copy of microprocessor |
US9191457B2 (en) | 2012-12-31 | 2015-11-17 | Sonic Ip, Inc. | Systems, methods, and media for controlling delivery of content |
CN103037257B (en) * | 2012-12-31 | 2016-12-28 | 北京赛科世纪数码科技有限公司 | A kind of startup method |
US9274926B2 (en) | 2013-01-03 | 2016-03-01 | Ab Initio Technology Llc | Configurable testing of computer programs |
EP2763401A1 (en) * | 2013-02-02 | 2014-08-06 | Novomatic AG | Embedded system for video processing with hardware equipment |
TWI497960B (en) * | 2013-03-04 | 2015-08-21 | Hon Hai Prec Ind Co Ltd | Tv set and method for displaying video image |
US8866912B2 (en) | 2013-03-10 | 2014-10-21 | Pelican Imaging Corporation | System and methods for calibration of an array camera using a single captured image |
US9053768B2 (en) | 2013-03-14 | 2015-06-09 | Gsi Technology, Inc. | Systems and methods of pipelined output latching involving synchronous memory arrays |
US9998750B2 (en) | 2013-03-15 | 2018-06-12 | Cisco Technology, Inc. | Systems and methods for guided conversion of video from a first to a second compression format |
US9473807B2 (en) * | 2013-05-31 | 2016-10-18 | Echostar Technologies L.L.C. | Methods and apparatus for moving video content to integrated virtual environment devices |
US11228769B2 (en) | 2013-06-03 | 2022-01-18 | Texas Instruments Incorporated | Multi-threading in a video hardware engine |
KR20140142863A (en) * | 2013-06-05 | 2014-12-15 | 한국전자통신연구원 | Apparatus and method for providing graphic editors |
CN103313096A (en) * | 2013-06-14 | 2013-09-18 | 成都思迈科技发展有限责任公司 | Network code stream converter |
CN104346308B (en) * | 2013-07-29 | 2018-11-20 | 鸿富锦精密工业(深圳)有限公司 | Electronic device |
CN104348889B (en) * | 2013-08-09 | 2019-04-16 | 鸿富锦精密工业(深圳)有限公司 | Switching switch and electronic device |
US9805478B2 (en) * | 2013-08-14 | 2017-10-31 | Arm Limited | Compositing plural layer of image data for display |
US9292285B2 (en) | 2013-08-19 | 2016-03-22 | Apple Inc. | Interpolation implementation |
US20150062134A1 (en) * | 2013-09-04 | 2015-03-05 | Apple Inc. | Parameter fifo for configuring video related settings |
US9231993B2 (en) * | 2013-09-06 | 2016-01-05 | Lg Display Co., Ltd. | Apparatus for transmitting encoded video stream and method for transmitting the same |
US9471639B2 (en) | 2013-09-19 | 2016-10-18 | International Business Machines Corporation | Managing a grouping window on an operator graph |
US10586513B2 (en) * | 2013-09-27 | 2020-03-10 | Koninklijke Philips N.V. | Simultaneously displaying video data of multiple video sources |
US9654391B2 (en) | 2013-10-02 | 2017-05-16 | Evertz Microsystems Ltd. | Video router |
US10119808B2 (en) | 2013-11-18 | 2018-11-06 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
US20150143450A1 (en) * | 2013-11-21 | 2015-05-21 | Broadcom Corporation | Compositing images in a compressed bitstream |
CA3128713C (en) | 2013-12-05 | 2022-06-21 | Ab Initio Technology Llc | Managing interfaces for dataflow graphs composed of sub-graphs |
WO2015090217A1 (en) | 2013-12-18 | 2015-06-25 | Mediatek Inc. | Method and apparatus for palette table prediction |
US20150181393A1 (en) * | 2013-12-20 | 2015-06-25 | Plannet Associate Co., Ltd. | Distribution device for transmission packets |
US9723216B2 (en) | 2014-02-13 | 2017-08-01 | Nvidia Corporation | Method and system for generating an image including optically zoomed and digitally zoomed regions |
US9472168B2 (en) * | 2014-03-07 | 2016-10-18 | Apple Inc. | Display pipe statistics calculation for video encoder |
KR102117075B1 (en) | 2014-03-11 | 2020-05-29 | 삼성전자주식회사 | Reconfigurable image scaling circuit |
US9275434B2 (en) | 2014-03-11 | 2016-03-01 | Qualcomm Incorporated | Phase control multi-tap downscale filter |
US10708328B2 (en) * | 2014-03-17 | 2020-07-07 | Intel Corporation | Hardware assisted media playback and capture synchronization |
US9973754B2 (en) * | 2014-03-18 | 2018-05-15 | Texas Instruments Incorporated | Low power ultra-HD video hardware engine |
GB2525666B (en) * | 2014-05-02 | 2020-12-23 | Advanced Risc Mach Ltd | Graphics processing systems |
TWI675371B (en) * | 2014-06-05 | 2019-10-21 | 美商積佳半導體股份有限公司 | Systems and methods involving multi-bank memory circuitry |
WO2015188163A1 (en) * | 2014-06-05 | 2015-12-10 | Gsi Technology, Inc. | Systems and methods involving multi-bank, dual-pipe memory circuitry |
US20150379679A1 (en) * | 2014-06-25 | 2015-12-31 | Changliang Wang | Single Read Composer with Outputs |
US20160014417A1 (en) * | 2014-07-08 | 2016-01-14 | Magnum Semiconductor, Inc. | Methods and apparatuses for stripe-based temporal and spatial video processing |
CN106797495B (en) * | 2014-08-20 | 2020-12-22 | 弗劳恩霍夫应用研究促进协会 | Video composition system, video composition method, and computer-readable storage medium |
KR102155479B1 (en) * | 2014-09-01 | 2020-09-14 | 삼성전자 주식회사 | Semiconductor device |
CN105389776B (en) | 2014-09-02 | 2019-05-03 | 辉达公司 | Image scaling techniques |
KR102246105B1 (en) * | 2014-09-25 | 2021-04-29 | 삼성전자주식회사 | Display apparatus, controlling method thereof and data transmitting method thereof |
US9729801B2 (en) | 2014-10-02 | 2017-08-08 | Dolby Laboratories Licensing Corporation | Blending images using mismatched source and display electro-optical transfer functions |
FR3029660B1 (en) * | 2014-12-05 | 2017-12-22 | Stmicroelectronics (Grenoble 2) Sas | METHOD AND DEVICE FOR COMPOSING A MULTI-PLANE VIDEO IMAGE |
US9854201B2 (en) | 2015-01-16 | 2017-12-26 | Microsoft Technology Licensing, Llc | Dynamically updating quality to higher chroma sampling rate |
US9749646B2 (en) | 2015-01-16 | 2017-08-29 | Microsoft Technology Licensing, Llc | Encoding/decoding of high chroma resolution details |
US9558528B2 (en) * | 2015-03-25 | 2017-01-31 | Xilinx, Inc. | Adaptive video direct memory access module |
EP3073479A1 (en) * | 2015-03-27 | 2016-09-28 | BAE Systems PLC | Digital display |
CN107209663B (en) * | 2015-04-23 | 2020-03-10 | 华为技术有限公司 | Data format conversion device, buffer chip and method |
US10217187B2 (en) * | 2015-06-05 | 2019-02-26 | Qatar Foundation For Education, Science And Community Development | Method for dynamic video magnification |
US9571265B2 (en) * | 2015-07-10 | 2017-02-14 | Tempo Semiconductor, Inc. | Sample rate converter with sample and hold |
US10657134B2 (en) | 2015-08-05 | 2020-05-19 | Ab Initio Technology Llc | Selecting queries for execution on a stream of real-time data |
US10506257B2 (en) | 2015-09-28 | 2019-12-10 | Cybrook Inc. | Method and system of video processing with back channel message management |
US10516892B2 (en) | 2015-09-28 | 2019-12-24 | Cybrook Inc. | Initial bandwidth estimation for real-time video transmission |
US10756997B2 (en) | 2015-09-28 | 2020-08-25 | Cybrook Inc. | Bandwidth adjustment for real-time video transmission |
WO2017082129A1 (en) * | 2015-11-09 | 2017-05-18 | シャープ株式会社 | Display device |
US10506283B2 (en) * | 2015-11-18 | 2019-12-10 | Cybrook Inc. | Video decoding and rendering using combined jitter and frame buffer |
US10506245B2 (en) | 2015-11-18 | 2019-12-10 | Cybrook Inc. | Video data processing using a ring buffer |
WO2017112654A2 (en) | 2015-12-21 | 2017-06-29 | Ab Initio Technology Llc | Sub-graph interface generation |
CN106131565B (en) * | 2015-12-29 | 2020-05-01 | 苏州踪视通信息技术有限公司 | Video decoding and rendering using joint jitter-frame buffer |
CN107025100A (en) * | 2016-02-01 | 2017-08-08 | 阿里巴巴集团控股有限公司 | Play method, interface rendering intent and device, the equipment of multi-medium data |
US10140066B2 (en) * | 2016-02-01 | 2018-11-27 | International Business Machines Corporation | Smart partitioning of storage access paths in shared storage services |
US9948927B2 (en) * | 2016-03-10 | 2018-04-17 | Intel Corporation | Using an optical interface between a device under test and a test apparatus |
US9997232B2 (en) | 2016-03-10 | 2018-06-12 | Micron Technology, Inc. | Processing in memory (PIM) capable memory device having sensing circuitry performing logic operations |
US10430244B2 (en) * | 2016-03-28 | 2019-10-01 | Micron Technology, Inc. | Apparatuses and methods to determine timing of operations |
GB2553744B (en) | 2016-04-29 | 2018-09-05 | Advanced Risc Mach Ltd | Graphics processing systems |
US20180284741A1 (en) * | 2016-05-09 | 2018-10-04 | StrongForce IoT Portfolio 2016, LLC | Methods and systems for industrial internet of things data collection for a chemical production process |
US10148989B2 (en) | 2016-06-15 | 2018-12-04 | Divx, Llc | Systems and methods for encoding video content |
US10504409B2 (en) * | 2016-08-31 | 2019-12-10 | Intel Corporation | Display synchronization |
US10242644B2 (en) * | 2016-09-30 | 2019-03-26 | Intel Corporation | Transmitting display data |
US10368080B2 (en) | 2016-10-21 | 2019-07-30 | Microsoft Technology Licensing, Llc | Selective upsampling or refresh of chroma sample values |
US20180122038A1 (en) * | 2016-10-28 | 2018-05-03 | Qualcomm Incorporated | Multi-layer fetch during composition |
US12079642B2 (en) | 2016-10-31 | 2024-09-03 | Ati Technologies Ulc | Method and apparatus for dynamically reducing application render-to-on screen time in a desktop environment |
KR102507383B1 (en) * | 2016-11-08 | 2023-03-08 | 한국전자통신연구원 | Method and system for stereo matching by using rectangular window |
US10998040B2 (en) | 2016-12-06 | 2021-05-04 | Gsi Technology, Inc. | Computational memory cell and processing array device using the memory cells for XOR and XNOR computations |
US11227653B1 (en) | 2016-12-06 | 2022-01-18 | Gsi Technology, Inc. | Storage array circuits and methods for computational memory cells |
US10847213B1 (en) | 2016-12-06 | 2020-11-24 | Gsi Technology, Inc. | Write data processing circuits and methods associated with computational memory cells |
US10854284B1 (en) | 2016-12-06 | 2020-12-01 | Gsi Technology, Inc. | Computational memory cell and processing array device with ratioless write port |
US10521229B2 (en) | 2016-12-06 | 2019-12-31 | Gsi Technology, Inc. | Computational memory cell and processing array device using memory cells |
US10777262B1 (en) | 2016-12-06 | 2020-09-15 | Gsi Technology, Inc. | Read data processing circuits and methods associated with memory cells |
US10847212B1 (en) | 2016-12-06 | 2020-11-24 | Gsi Technology, Inc. | Read and write data processing circuits and methods associated with computational memory cells using two read multiplexers |
US10891076B1 (en) | 2016-12-06 | 2021-01-12 | Gsi Technology, Inc. | Results processing circuits and methods associated with computational memory cells |
US10943648B1 (en) | 2016-12-06 | 2021-03-09 | Gsi Technology, Inc. | Ultra low VDD memory cell with ratioless write port |
US10860320B1 (en) | 2016-12-06 | 2020-12-08 | Gsi Technology, Inc. | Orthogonal data transposition system and method during data transfers to/from a processing array |
US10770133B1 (en) | 2016-12-06 | 2020-09-08 | Gsi Technology, Inc. | Read and write data processing circuits and methods associated with computational memory cells that provides write inhibits and read bit line pre-charge inhibits |
US10832735B2 (en) * | 2017-01-03 | 2020-11-10 | Ayre Acoustics, Inc. | System and method for improved transmission of digital data |
EP3438936B1 (en) * | 2017-08-04 | 2022-03-30 | NXP USA, Inc. | Method and apparatus for managing graphics layers within a data processing system |
CN107770472B (en) * | 2017-10-31 | 2020-07-28 | 中国电子科技集团公司第二十九研究所 | Digital demodulation method and digital signal image recovery method for SECAM analog television signal |
US10856040B2 (en) * | 2017-10-31 | 2020-12-01 | Avago Technologies International Sales Pte. Limited | Video rendering system |
JP2019109353A (en) * | 2017-12-18 | 2019-07-04 | シャープ株式会社 | Display control device and liquid crystal display device provided with the display control device |
CN108322722B (en) * | 2018-01-24 | 2020-01-21 | 阿里巴巴集团控股有限公司 | Image processing method and device based on augmented reality and electronic equipment |
EP3799042A4 (en) * | 2018-05-23 | 2021-07-14 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
EP3598315B1 (en) * | 2018-07-19 | 2022-12-28 | STMicroelectronics (Grenoble 2) SAS | Direct memory access |
CN109146814B (en) * | 2018-08-20 | 2021-02-23 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN109379628B (en) * | 2018-11-27 | 2021-02-02 | Oppo广东移动通信有限公司 | Video processing method and device, electronic equipment and computer readable medium |
US10902825B2 (en) * | 2018-12-21 | 2021-01-26 | Arris Enterprises Llc | System and method for pre-filtering crawling overlay elements for display with reduced real-time processing demands |
CN109756728B (en) * | 2019-01-02 | 2021-12-07 | 京东方科技集团股份有限公司 | Image display method and apparatus, electronic device, computer-readable storage medium |
CN113454712A (en) * | 2019-02-27 | 2021-09-28 | 索尼集团公司 | Transmission device, transmission method, reception device, and reception method |
CN109801586B (en) * | 2019-03-26 | 2021-01-26 | 京东方科技集团股份有限公司 | Display controller, display control method and system and display device |
CN111754387B (en) * | 2019-03-28 | 2023-08-04 | 杭州海康威视数字技术股份有限公司 | Image processing method and device |
US10877731B1 (en) | 2019-06-18 | 2020-12-29 | Gsi Technology, Inc. | Processing array device that performs one cycle full adder operation and bit line read/write logic features |
US10930341B1 (en) | 2019-06-18 | 2021-02-23 | Gsi Technology, Inc. | Processing array device that performs one cycle full adder operation and bit line read/write logic features |
US10958272B2 (en) | 2019-06-18 | 2021-03-23 | Gsi Technology, Inc. | Computational memory cell and processing array device using complementary exclusive or memory cells |
WO2020259507A1 (en) * | 2019-06-24 | 2020-12-30 | Huawei Technologies Co., Ltd. | Method for computing position of integer grid reference sample for block level boundary sample gradient computation in bi-predictive optical flow computation and bi-predictive correction |
US11488349B2 (en) | 2019-06-28 | 2022-11-01 | Ati Technologies Ulc | Method and apparatus for alpha blending images from different color formats |
US11080055B2 (en) | 2019-08-22 | 2021-08-03 | Apple Inc. | Register file arbitration |
WO2021039189A1 (en) * | 2019-08-30 | 2021-03-04 | ソニー株式会社 | Transmission device, transmission method, reception device, and reception method |
KR102354918B1 (en) * | 2019-09-05 | 2022-01-21 | 라인플러스 주식회사 | Method, user device, server, and recording medium for creating composite videos |
CN114600165A (en) | 2019-09-17 | 2022-06-07 | 波士顿偏振测定公司 | System and method for surface modeling using polarization cues |
US20210081353A1 (en) * | 2019-09-17 | 2021-03-18 | Micron Technology, Inc. | Accelerator chip connecting a system on a chip and a memory chip |
US11020661B2 (en) * | 2019-10-01 | 2021-06-01 | Sony Interactive Entertainment Inc. | Reducing latency in cloud gaming applications by overlapping reception and decoding of video frames and their display |
DE112020004813B4 (en) | 2019-10-07 | 2023-02-09 | Boston Polarimetrics, Inc. | System for expanding sensor systems and imaging systems with polarization |
US11145107B2 (en) | 2019-11-04 | 2021-10-12 | Facebook Technologies, Llc | Artificial reality system using superframes to communicate surface data |
US11430141B2 (en) | 2019-11-04 | 2022-08-30 | Facebook Technologies, Llc | Artificial reality system using a multisurface display protocol to communicate surface data |
JP7329143B2 (en) | 2019-11-30 | 2023-08-17 | ボストン ポーラリメトリックス,インコーポレイティド | Systems and methods for segmentation of transparent objects using polarization cues |
US11195303B2 (en) | 2020-01-29 | 2021-12-07 | Boston Polarimetrics, Inc. | Systems and methods for characterizing object pose detection and measurement systems |
US11797863B2 (en) | 2020-01-30 | 2023-10-24 | Intrinsic Innovation Llc | Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images |
US11953700B2 (en) | 2020-05-27 | 2024-04-09 | Intrinsic Innovation Llc | Multi-aperture polarization optical systems using beam splitters |
CN111669648B (en) * | 2020-06-19 | 2022-03-25 | 艾索信息股份有限公司 | Video frequency doubling method |
CN111710300B (en) * | 2020-06-30 | 2021-11-23 | 厦门天马微电子有限公司 | Display panel, driving method and display device |
WO2022014885A1 (en) * | 2020-07-17 | 2022-01-20 | Samsung Electronics Co., Ltd. | Method and electronic device for determining dynamic resolution for application of electronic device |
US11948000B2 (en) * | 2020-10-27 | 2024-04-02 | Advanced Micro Devices, Inc. | Gang scheduling for low-latency task synchronization |
KR20220056404A (en) * | 2020-10-28 | 2022-05-06 | 삼성전자주식회사 | Display apparatus and method of controlling the same |
CN112230776B (en) * | 2020-10-29 | 2024-07-02 | 北京京东方光电科技有限公司 | Virtual reality display method, device and storage medium |
US12069227B2 (en) | 2021-03-10 | 2024-08-20 | Intrinsic Innovation Llc | Multi-modal and multi-spectral stereo camera arrays |
US12020455B2 (en) | 2021-03-10 | 2024-06-25 | Intrinsic Innovation Llc | Systems and methods for high dynamic range image reconstruction |
US11954886B2 (en) | 2021-04-15 | 2024-04-09 | Intrinsic Innovation Llc | Systems and methods for six-degree of freedom pose estimation of deformable objects |
US11290658B1 (en) | 2021-04-15 | 2022-03-29 | Boston Polarimetrics, Inc. | Systems and methods for camera exposure control |
US12067746B2 (en) | 2021-05-07 | 2024-08-20 | Intrinsic Innovation Llc | Systems and methods for using computer vision to pick up small objects |
US11546612B2 (en) | 2021-06-02 | 2023-01-03 | Western Digital Technologies, Inc. | Data storage device and method for application-defined data retrieval in surveillance systems |
WO2022256553A1 (en) * | 2021-06-04 | 2022-12-08 | Tectus Corporation | Display pixels with integrated pipeline |
KR20240018582A (en) * | 2021-06-04 | 2024-02-13 | 텍투스 코포레이션 | Display pixels with integrated pipeline |
US11562460B1 (en) | 2021-06-25 | 2023-01-24 | Meta Platforms, Inc. | High performance hardware scaler |
US11689813B2 (en) | 2021-07-01 | 2023-06-27 | Intrinsic Innovation Llc | Systems and methods for high dynamic range imaging using crossed polarizers |
KR20230020211A (en) * | 2021-08-03 | 2023-02-10 | 삼성전자주식회사 | Apparatus and Method for Video Decoding |
US11769464B2 (en) * | 2021-09-02 | 2023-09-26 | Arm Limited | Image processing |
US11392163B1 (en) * | 2021-09-23 | 2022-07-19 | Apple Inc. | On-chip supply ripple tolerant clock distribution |
US20230325091A1 (en) * | 2022-02-04 | 2023-10-12 | Chronos Tech Llc | Devices and methods for synchronous and asynchronous interface using a circular fifo |
JP2023119512A (en) * | 2022-02-16 | 2023-08-28 | トヨタ自動車株式会社 | Control device, control method, control program, and vehicle |
CN115733940B (en) * | 2022-11-04 | 2024-10-22 | 中国船舶集团有限公司第七〇九研究所 | Multi-source heterogeneous video processing display device and method for ship system |
Family Cites Families (454)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4020332A (en) | 1975-09-24 | 1977-04-26 | Bell Telephone Laboratories, Incorporated | Interpolation-decimation circuit for increasing or decreasing digital sampling frequency |
US4205341A (en) * | 1978-01-24 | 1980-05-27 | Nippon Telegraph And Telephone Public Corporation | Picture signal coding apparatus |
US4264808A (en) * | 1978-10-06 | 1981-04-28 | Ncr Corporation | Method and apparatus for electronic image processing of documents for accounting purposes |
US4326258A (en) * | 1980-01-31 | 1982-04-20 | Ncr Canada Ltd - Ncr Canada Ltee | Method and apparatus for reducing the gray scale resolution of a digitized image |
JPS5711390A (en) | 1980-06-24 | 1982-01-21 | Nintendo Co Ltd | Scanning display indication controller |
US4412294A (en) * | 1981-02-23 | 1983-10-25 | Texas Instruments Incorporated | Display system with multiple scrolling regions |
US4481594A (en) | 1982-01-18 | 1984-11-06 | Honeywell Information Systems Inc. | Method and apparatus for filling polygons displayed by a raster graphic system |
US4532547A (en) * | 1982-03-31 | 1985-07-30 | Ampex Corporation | Video device synchronization system |
US4959718A (en) | 1982-03-31 | 1990-09-25 | Ampex Corporation | Video device synchronization system |
US4533910A (en) * | 1982-11-02 | 1985-08-06 | Cadtrak Corporation | Graphics display system with viewports of arbitrary location and content |
US4603418A (en) * | 1983-07-07 | 1986-07-29 | Motorola, Inc. | Multiple access data communications controller for a time-division multiplex bus |
IL72685A (en) | 1983-08-30 | 1988-08-31 | Gen Electric | Advanced video object generator |
DE3486494T2 (en) | 1983-12-26 | 2004-03-18 | Hitachi, Ltd. | Graphic pattern processing device |
US4679040A (en) * | 1984-04-30 | 1987-07-07 | The Singer Company | Computer-generated image system to display translucent features with anti-aliasing |
US4688033A (en) | 1984-10-25 | 1987-08-18 | International Business Machines Corporation | Merged data storage panel display |
US4710761A (en) * | 1985-07-09 | 1987-12-01 | American Telephone And Telegraph Company, At&T Bell Laboratories | Window border generation in a bitmapped graphics workstation |
US4682225A (en) * | 1985-09-13 | 1987-07-21 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Method and apparatus for telemetry adaptive bandwidth compression |
US4751446A (en) | 1985-12-06 | 1988-06-14 | Apollo Computer, Inc. | Lookup table initialization |
US4782462A (en) * | 1985-12-30 | 1988-11-01 | Signetics Corporation | Raster scan video controller with programmable prioritized sharing of display memory between update and display processes and programmable memory access termination |
US4799053A (en) | 1986-04-28 | 1989-01-17 | Texas Instruments Incorporated | Color palette having multiplexed color look up table loading |
US5043714A (en) | 1986-06-04 | 1991-08-27 | Apple Computer, Inc. | Video display apparatus |
JPH0731662B2 (en) * | 1986-07-15 | 1995-04-10 | 富士通株式会社 | Multiprocessor system |
JP2592810B2 (en) * | 1986-09-30 | 1997-03-19 | 株式会社東芝 | Sample rate conversion circuit |
US5146592A (en) | 1987-09-14 | 1992-09-08 | Visual Information Technologies, Inc. | High speed image processing computer with overlapping windows-div |
US4918523A (en) * | 1987-10-05 | 1990-04-17 | Intel Corporation | Digital video formatting and transmission system and method |
US4785349A (en) * | 1987-10-05 | 1988-11-15 | Technology Inc. 64 | Digital video decompression system |
US4811115A (en) * | 1987-10-16 | 1989-03-07 | Xerox Corporation | Image processing apparatus using approximate auto correlation function to detect the frequency of half-tone image data |
US5384912A (en) | 1987-10-30 | 1995-01-24 | New Microtime Inc. | Real time video image processing system |
KR900008518Y1 (en) | 1987-12-24 | 1990-09-22 | 주식회사 금성사 | Text mode color selecting device |
JPH0264875A (en) * | 1988-08-31 | 1990-03-05 | Toshiba Corp | High speed chroma converter for color picture |
US4954970A (en) | 1988-04-08 | 1990-09-04 | Walker James T | Video overlay image processing apparatus |
US5003299A (en) | 1988-05-17 | 1991-03-26 | Apple Computer, Inc. | Method for building a color look-up table |
US4967392A (en) | 1988-07-27 | 1990-10-30 | Alliant Computer Systems Corporation | Drawing processor for computer graphic system using a plurality of parallel processors which each handle a group of display screen scanlines |
US5065231A (en) | 1988-09-26 | 1991-11-12 | Apple Computer, Inc. | Apparatus and method for merging input RGB and composite video signals to provide both RGB and composite merged video outputs |
US4908780A (en) | 1988-10-14 | 1990-03-13 | Sun Microsystems, Inc. | Anti-aliasing raster operations utilizing sub-pixel crossing information to control pixel shading |
US5155816A (en) * | 1989-02-10 | 1992-10-13 | Intel Corporation | Pipelined apparatus and method for controlled loading of floating point data in a microprocessor |
US6299550B1 (en) | 1989-03-10 | 2001-10-09 | Spalding Sports Worldwide, Inc. | Golf ball with multiple shell layers |
US5060144A (en) * | 1989-03-16 | 1991-10-22 | Unisys Corporation | Locking control with validity status indication for a multi-host processor system that utilizes a record lock processor and a cache memory for each host processor |
US5038138A (en) * | 1989-04-17 | 1991-08-06 | International Business Machines Corporation | Display with enhanced scrolling capabilities |
JPH033165A (en) | 1989-05-31 | 1991-01-09 | Sony Corp | Optical disk reproducing device |
US5254981A (en) | 1989-09-15 | 1993-10-19 | Copytele, Inc. | Electrophoretic display employing gray scale capability utilizing area modulation |
US5598545A (en) | 1989-10-13 | 1997-01-28 | Texas Instruments Incorporated | Circuitry and method for performing two operating instructions during a single clock in a processing device |
US5765010A (en) | 1989-10-13 | 1998-06-09 | Texas Instruments Incorporated | Timing and control circuit and method for a synchronous vector processor |
US5539891A (en) | 1989-10-13 | 1996-07-23 | Texas Instruments Incorporated | Data transfer control circuit with a sequencer circuit and control subcircuits and data control method for successively entering data into a memory |
JPH03219291A (en) * | 1989-11-09 | 1991-09-26 | Matsushita Electric Ind Co Ltd | Large-screen image display method |
US5594467A (en) * | 1989-12-06 | 1997-01-14 | Video Logic Ltd. | Computer based display system allowing mixing and windowing of graphics and video |
US5097257A (en) * | 1989-12-26 | 1992-03-17 | Apple Computer, Inc. | Apparatus for providing output filtering from a frame buffer storing both video and graphics signals |
JP3056514B2 (en) | 1990-08-27 | 2000-06-26 | 任天堂株式会社 | Image display device and external storage device used therefor |
US5142273A (en) | 1990-09-20 | 1992-08-25 | Ampex Corporation | System for generating color blended video signal |
JPH04152717A (en) | 1990-10-17 | 1992-05-26 | Hitachi Ltd | A/d converter |
US5396567A (en) | 1990-11-16 | 1995-03-07 | Siemens Aktiengesellschaft | Process for adaptive quantization for the purpose of data reduction in the transmission of digital images |
JPH04185172A (en) | 1990-11-20 | 1992-07-02 | Matsushita Electric Ind Co Ltd | High-efficiency coding device for digital image signal |
US5402181A (en) | 1991-04-01 | 1995-03-28 | Jenison; Timothy P. | Method and apparatus utilizing look-up tables for color graphics in the digital composite video domain |
JP2707864B2 (en) | 1991-04-18 | 1998-02-04 | 松下電器産業株式会社 | Recording and playback device |
GB9108389D0 (en) * | 1991-04-19 | 1991-06-05 | 3 Space Software Ltd | Treatment of video images |
JPH07101916B2 (en) | 1991-05-14 | 1995-11-01 | 富士ゼロックス株式会社 | Color image editing device |
US5344048A (en) | 1991-05-24 | 1994-09-06 | Bonerb Timothy C | Flexible bulk container apparatus and discharge method |
US6088045A (en) | 1991-07-22 | 2000-07-11 | International Business Machines Corporation | High definition multimedia display |
JPH06509893A (en) * | 1991-08-13 | 1994-11-02 | ボード オブ リージェンツ オブ ザ ユニバーシティ オブ ワシントン | Image processing and graphics processing system |
WO1993004457A2 (en) * | 1991-08-21 | 1993-03-04 | Digital Equipment Corporation | Address method for computer graphics system |
US5315698A (en) * | 1991-08-21 | 1994-05-24 | Digital Equipment Corporation | Method and apparatus for varying command length in a computer graphics system |
US6124865A (en) | 1991-08-21 | 2000-09-26 | Digital Equipment Corporation | Duplicate cache tag store for computer graphics system |
EP0541220B1 (en) | 1991-09-09 | 1997-12-10 | Sun Microsystems, Inc. | Apparatus and method for managing the assignment of display attribute identification values and multiple hardware color lookup tables |
US5258747A (en) | 1991-09-30 | 1993-11-02 | Hitachi, Ltd. | Color image displaying system and method thereof |
JPH05108043A (en) | 1991-10-16 | 1993-04-30 | Pioneer Video Corp | Graphic decoder |
US5742779A (en) | 1991-11-14 | 1998-04-21 | Tolfa Corporation | Method of communication using sized icons, text, and audio |
US5706415A (en) | 1991-12-20 | 1998-01-06 | Apple Computer, Inc. | Method and apparatus for distributed interpolation of pixel shading parameter values |
US5345541A (en) * | 1991-12-20 | 1994-09-06 | Apple Computer, Inc. | Method and apparatus for approximating a value between two endpoint values in a three-dimensional image rendering device |
GB2263038B (en) | 1991-12-30 | 1996-01-31 | Apple Computer | Apparatus for manipulating streams of data |
US5371877A (en) | 1991-12-31 | 1994-12-06 | Apple Computer, Inc. | Apparatus for alternatively accessing single port random access memories to implement dual port first-in first-out memory |
GB9200281D0 (en) | 1992-01-08 | 1992-02-26 | Thomson Consumer Electronics | A pip horizontal panning circuit for wide screen television |
JPH07120434B2 (en) * | 1992-01-29 | 1995-12-20 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Method and apparatus for volume rendering |
US5262854A (en) | 1992-02-21 | 1993-11-16 | Rca Thomson Licensing Corporation | Lower resolution HDTV receivers |
US5345313A (en) | 1992-02-25 | 1994-09-06 | Imageware Software, Inc | Image editing system for taking a background and inserting part of an image therein |
JPH05242232A (en) | 1992-02-28 | 1993-09-21 | Hitachi Ltd | Information processor and video display device |
US5526024A (en) | 1992-03-12 | 1996-06-11 | At&T Corp. | Apparatus for synchronization and display of plurality of digital video data streams |
JP2892898B2 (en) * | 1992-04-17 | 1999-05-17 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Window management method and raster display window management system |
US5963201A (en) | 1992-05-11 | 1999-10-05 | Apple Computer, Inc. | Color processing system |
US5253059A (en) * | 1992-05-15 | 1993-10-12 | Bell Communications Research, Inc. | Method and circuit for adjusting the size of a video frame |
JPH05336441A (en) | 1992-06-03 | 1993-12-17 | Pioneer Electron Corp | Video synthesis effect device |
DE4218615C1 (en) | 1992-06-05 | 1993-07-15 | Nukem Gmbh, 8755 Alzenau, De | |
US5289276A (en) | 1992-06-19 | 1994-02-22 | General Electric Company | Method and apparatus for conveying compressed video data over a noisy communication channel |
US5640543A (en) | 1992-06-19 | 1997-06-17 | Intel Corporation | Scalable multimedia platform architecture |
US5243447A (en) | 1992-06-19 | 1993-09-07 | Intel Corporation | Enhanced single frame buffer display system |
US5432900A (en) | 1992-06-19 | 1995-07-11 | Intel Corporation | Integrated graphics and video computer display system |
US5287178A (en) | 1992-07-06 | 1994-02-15 | General Electric Company | Reset control network for a video signal encoder |
JP3032382B2 (en) * | 1992-07-13 | 2000-04-17 | シャープ株式会社 | Digital signal sampling frequency converter |
JP2582999B2 (en) | 1992-07-22 | 1997-02-19 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Color palette generation method, apparatus, data processing system, and lookup table input generation method |
US5319742A (en) | 1992-08-04 | 1994-06-07 | International Business Machines Corporation | Image enhancement with mask having fuzzy edges |
JP2583003B2 (en) * | 1992-09-11 | 1997-02-19 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Image display method, frame buffer, and graphics display system |
US5475628A (en) | 1992-09-30 | 1995-12-12 | Analog Devices, Inc. | Asynchronous digital sample rate converter |
TW397958B (en) | 1992-10-09 | 2000-07-11 | Hudson Soft Co Ltd | Image processing system |
US5404538A (en) * | 1992-10-28 | 1995-04-04 | International Business Machines Corporation | Method and apparatus for multilevel bus arbitration |
US5838389A (en) | 1992-11-02 | 1998-11-17 | The 3Do Company | Apparatus and method for updating a CLUT during horizontal blanking |
WO1994010641A1 (en) | 1992-11-02 | 1994-05-11 | The 3Do Company | Audio/video computer architecture |
IT1258631B (en) | 1992-11-02 | 1996-02-27 | Taplast Srl | PERFECTED DOSER-DISPENSER DEVICE FOR GRANULAR OR POWDER PRODUCTS |
US5600364A (en) | 1992-12-09 | 1997-02-04 | Discovery Communications, Inc. | Network controller for cable television delivery systems |
JP3939366B2 (en) | 1992-12-09 | 2007-07-04 | 松下電器産業株式会社 | Keyboard input device |
EP0605945B1 (en) | 1992-12-15 | 1997-12-29 | Sun Microsystems, Inc. | Method and apparatus for presenting information in a display system using transparent windows |
US5533182A (en) | 1992-12-22 | 1996-07-02 | International Business Machines Corporation | Aural position indicating mechanism for viewable objects |
US5301332A (en) | 1992-12-23 | 1994-04-05 | Ncr Corporation | Method and apparatus for a dynamic, timed-loop arbitration |
DE69423662T2 (en) | 1993-01-06 | 2000-07-20 | Sony Corp., Tokio/Tokyo | Recording device and recording method for a recording medium |
US5614952A (en) * | 1994-10-11 | 1997-03-25 | Hitachi America, Ltd. | Digital video decoder for decoding digital high definition and/or digital standard definition television signals |
EP0607988B1 (en) * | 1993-01-22 | 1999-10-13 | Matsushita Electric Industrial Co., Ltd. | Program controlled processor |
US5335074A (en) | 1993-02-08 | 1994-08-02 | Panasonic Technologies, Inc. | Phase locked loop synchronizer for a resampling system having incompatible input and output sample rates |
CA2109681C (en) | 1993-03-10 | 1998-08-25 | Donald Edgar Blahut | Method and apparatus for the coding and display of overlapping windows with transparency |
US5625764A (en) | 1993-03-16 | 1997-04-29 | Matsushita Electric Industrial Co., Ltd. | Weighted average circuit using digit shifting |
US5638501A (en) | 1993-05-10 | 1997-06-10 | Apple Computer, Inc. | Method and apparatus for displaying an overlay image |
US5754186A (en) | 1993-05-10 | 1998-05-19 | Apple Computer, Inc. | Method and apparatus for blending images |
US5877754A (en) | 1993-06-16 | 1999-03-02 | Intel Corporation | Process, apparatus, and system for color conversion of image signals |
JP4018159B2 (en) | 1993-06-28 | 2007-12-05 | 株式会社ルネサステクノロジ | Semiconductor integrated circuit |
US5784046A (en) * | 1993-07-01 | 1998-07-21 | Intel Corporation | Horizontally scaling image signals using digital differential accumulator processing |
US5583575A (en) | 1993-07-08 | 1996-12-10 | Mitsubishi Denki Kabushiki Kaisha | Image reproduction apparatus performing interfield or interframe interpolation |
US5479606A (en) | 1993-07-21 | 1995-12-26 | Pgm Systems, Inc. | Data display apparatus for displaying patterns using samples of signal data |
US5550594A (en) | 1993-07-26 | 1996-08-27 | Pixel Instruments Corp. | Apparatus and method for synchronizing asynchronous signals |
US5821918A (en) | 1993-07-29 | 1998-10-13 | S3 Incorporated | Video processing apparatus, systems and methods |
US5546103A (en) | 1993-08-06 | 1996-08-13 | Intel Corporation | Method and apparatus for displaying an image in a windowed environment |
US5486929A (en) * | 1993-09-03 | 1996-01-23 | Apple Computer, Inc. | Time division multiplexed video recording and playback system |
US5764238A (en) | 1993-09-10 | 1998-06-09 | Ati Technologies Inc. | Method and apparatus for scaling and blending an image to be displayed |
JP3237975B2 (en) * | 1993-09-20 | 2001-12-10 | 富士通株式会社 | Image processing device |
CA2113600C (en) | 1993-09-30 | 1999-09-14 | Sanford S. Lum | Video processing unit |
US5669013A (en) | 1993-10-05 | 1997-09-16 | Fujitsu Limited | System for transferring M elements X times and transferring N elements one time for an array that is X*M+N long responsive to vector type instructions |
KR0126658B1 (en) | 1993-10-05 | 1997-12-29 | 구자홍 | Sample rate conversion device for non-standard TV signal processing |
US5469223A (en) | 1993-10-13 | 1995-11-21 | Auravision Corporation | Shared line buffer architecture for a video processing circuit |
US5594487A (en) * | 1993-10-13 | 1997-01-14 | Kabushiki Kaisha Tec | Thermal head supporting device |
US5398211A (en) | 1993-10-14 | 1995-03-14 | Integrated Device Technology, Inc. | Structure and method for providing prioritized arbitration in a dual port memory |
US5418535A (en) | 1993-10-25 | 1995-05-23 | Cardion, Inc. | Mixed radar/graphics indicator |
US6361526B1 (en) * | 1993-11-01 | 2002-03-26 | Medtronic Xomed, Inc. | Antimicrobial tympanostomy tube |
US5452235A (en) * | 1993-11-01 | 1995-09-19 | Intel Corp. | Planar/packed video data FIFO memory device |
US5475400A (en) | 1993-11-15 | 1995-12-12 | Media Vision Inc. | Graphic card with two color look up tables |
EP0658871B1 (en) * | 1993-12-09 | 2002-07-17 | Sun Microsystems, Inc. | Interleaving pixel data for a memory display interface |
US5487170A (en) * | 1993-12-16 | 1996-01-23 | International Business Machines Corporation | Data processing system having dynamic priority task scheduling capabilities |
US5604514A (en) * | 1994-01-03 | 1997-02-18 | International Business Machines Corporation | Personal computer with combined graphics/image display system having pixel mode frame buffer interpretation |
US5774110A (en) | 1994-01-04 | 1998-06-30 | Edelson; Steven D. | Filter RAMDAC with hardware 11/2-D zoom function |
US6434319B1 (en) | 1994-01-19 | 2002-08-13 | Thomson Licensing S.A. | Digital video tape recorder for digital HDTV |
US5812210A (en) | 1994-02-01 | 1998-09-22 | Hitachi, Ltd. | Display apparatus |
TW268980B (en) | 1994-02-02 | 1996-01-21 | Novo Nordisk As | |
GB2287627B (en) * | 1994-03-01 | 1998-07-15 | Vtech Electronics Ltd | Graphic video display system including graphic layers with sizable,positionable windows and programmable priority |
US5488385A (en) | 1994-03-03 | 1996-01-30 | Trident Microsystems, Inc. | Multiple concurrent display system |
US5570296A (en) | 1994-03-30 | 1996-10-29 | Apple Computer, Inc. | System and method for synchronized presentation of video and audio signals |
AU692544B2 (en) * | 1994-04-08 | 1998-06-11 | Rovi Guides, Inc. | Interactive scroll program guide |
US5808627A (en) | 1994-04-22 | 1998-09-15 | Apple Computer, Inc. | Method and apparatus for increasing the speed of rendering of objects in a display system |
EP1174792A3 (en) * | 1994-05-16 | 2007-07-25 | Apple Computer, Inc. | A graphical user interface and method |
DE69525338T2 (en) * | 1994-05-16 | 2002-10-24 | Apple Computer, Inc. | ABSTRACTING PATTERNS AND COLORS IN A GRAPHIC USER INTERFACE |
US5577187A (en) | 1994-05-20 | 1996-11-19 | Microsoft Corporation | Method and system for tiling windows based on previous position and size |
US5664162A (en) | 1994-05-23 | 1997-09-02 | Cirrus Logic, Inc. | Graphics accelerator with dual memory controllers |
US5706478A (en) | 1994-05-23 | 1998-01-06 | Cirrus Logic, Inc. | Display list processor for operating in processor and coprocessor modes |
US5638499A (en) | 1994-05-27 | 1997-06-10 | O'connor; Michael | Image composition method and apparatus for developing, storing and reproducing image data using absorption, reflection and transmission properties of images to be combined |
US5694143A (en) * | 1994-06-02 | 1997-12-02 | Accelerix Limited | Single chip frame buffer and graphics accelerator |
US5621869A (en) | 1994-06-29 | 1997-04-15 | Drews; Michael D. | Multiple level computer graphics system with display level blending |
DE4423214C2 (en) | 1994-07-01 | 1998-02-12 | Harris Corp | Multinorm decoder for video signals and method for decoding video signals |
DE4423224C2 (en) | 1994-07-01 | 1998-02-26 | Harris Corp | Video signal decoder and method for decoding video signals |
JP3076201B2 (en) | 1994-07-28 | 2000-08-14 | 日本電気株式会社 | Image data expansion method |
US5615376A (en) | 1994-08-03 | 1997-03-25 | Neomagic Corp. | Clock management for power reduction in a video display sub-system |
US5574572A (en) | 1994-09-07 | 1996-11-12 | Harris Corporation | Video scaling method and device |
US5768607A (en) | 1994-09-30 | 1998-06-16 | Intel Corporation | Method and apparatus for freehand annotation and drawings incorporating sound and for compressing and synchronizing sound |
US5610983A (en) | 1994-09-30 | 1997-03-11 | Thomson Consumer Electronics, Inc. | Apparatus for detecting a synchronization component in a satellite transmission system receiver |
US5920842A (en) | 1994-10-12 | 1999-07-06 | Pixel Instruments | Signal synchronization |
US5600379A (en) | 1994-10-13 | 1997-02-04 | Yves C. Faroudia | Television digital signal processing apparatus employing time-base correction |
US5815137A (en) * | 1994-10-19 | 1998-09-29 | Sun Microsystems, Inc. | High speed display system having cursor multiplexing scheme |
US6301299B1 (en) | 1994-10-28 | 2001-10-09 | Matsushita Electric Industrial Co., Ltd. | Memory controller for an ATSC video decoder |
US5623311A (en) | 1994-10-28 | 1997-04-22 | Matsushita Electric Corporation Of America | MPEG video decoder having a high bandwidth memory |
US5838334A (en) | 1994-11-16 | 1998-11-17 | Dye; Thomas A. | Memory and graphics controller which performs pointer-based display list video refresh operations |
US6002411A (en) | 1994-11-16 | 1999-12-14 | Interactive Silicon, Inc. | Integrated video and memory controller with data processing and graphical processing capabilities |
US6067098A (en) * | 1994-11-16 | 2000-05-23 | Interactive Silicon, Inc. | Video/graphics controller which performs pointer-based display list video refresh operation |
US5696527A (en) | 1994-12-12 | 1997-12-09 | Aurvision Corporation | Multimedia overlay system for graphics and video |
US5737455A (en) * | 1994-12-12 | 1998-04-07 | Xerox Corporation | Antialiasing with grey masking techniques |
EP0746840B1 (en) | 1994-12-23 | 2008-01-23 | Nxp B.V. | Single frame buffer image processing system |
US5598525A (en) | 1995-01-23 | 1997-01-28 | Cirrus Logic, Inc. | Apparatus, systems and methods for controlling graphics and video data in multimedia data processing and display systems |
US5619337A (en) | 1995-01-27 | 1997-04-08 | Matsushita Electric Corporation Of America | MPEG transport encoding/decoding system for recording transport streams |
JP3716441B2 (en) | 1995-02-09 | 2005-11-16 | ヤマハ株式会社 | Image decoder |
US5621906A (en) | 1995-02-13 | 1997-04-15 | The Trustees Of Columbia University In The City Of New York | Perspective-based interface using an extended masthead |
US5659631A (en) * | 1995-02-21 | 1997-08-19 | Ricoh Company, Ltd. | Data compression for indexed color image data |
US5649173A (en) | 1995-03-06 | 1997-07-15 | Seiko Epson Corporation | Hardware architecture for image generation and manipulation |
US5610942A (en) * | 1995-03-07 | 1997-03-11 | Chen; Keping | Digital signal transcoder and method of transcoding a digital signal |
US5727192A (en) | 1995-03-24 | 1998-03-10 | 3Dlabs Inc. Ltd. | Serial rendering system with auto-synchronization on frame blanking |
US5777629A (en) | 1995-03-24 | 1998-07-07 | 3Dlabs Inc. Ltd. | Graphics subsystem with smart direct-memory-access operation |
US5742796A (en) | 1995-03-24 | 1998-04-21 | 3Dlabs Inc. Ltd. | Graphics system with color space double buffering |
US5764243A (en) | 1995-03-24 | 1998-06-09 | 3Dlabs Inc. Ltd. | Rendering architecture with selectable processing of multi-pixel spans |
JPH08287288A (en) | 1995-03-24 | 1996-11-01 | Internatl Business Mach Corp <Ibm> | Interactive three-dimensional graphics with a plurality of side annotations and hot links |
US5594854A (en) * | 1995-03-24 | 1997-01-14 | 3Dlabs Inc. Ltd. | Graphics subsystem with coarse subpixel correction |
US5526054A (en) | 1995-03-27 | 1996-06-11 | International Business Machines Corporation | Apparatus for header generation |
US6266072B1 (en) | 1995-04-05 | 2001-07-24 | Hitachi, Ltd | Graphics system |
US5825360A (en) * | 1995-04-07 | 1998-10-20 | Apple Computer, Inc. | Method for arranging windows in a computer workspace |
US5831637A (en) | 1995-05-01 | 1998-11-03 | Intergraph Corporation | Video stream data mixing for 3D graphics systems |
US5787264A (en) * | 1995-05-08 | 1998-07-28 | Apple Computer, Inc. | Method and apparatus for arbitrating access to a shared bus |
JPH08331472A (en) | 1995-05-24 | 1996-12-13 | Internatl Business Mach Corp <Ibm> | Method and apparatus for synchronizing video data with graphics data in a multimedia display device having a shared frame buffer |
US5982459A (en) | 1995-05-31 | 1999-11-09 | 8×8, Inc. | Integrated multimedia communications processor and codec |
US5751979A (en) | 1995-05-31 | 1998-05-12 | Unisys Corporation | Video hardware for protected, multiprocessing systems |
JPH08328941A (en) | 1995-05-31 | 1996-12-13 | Nec Corp | Memory access control circuit |
JPH08328599A (en) | 1995-06-01 | 1996-12-13 | Mitsubishi Electric Corp | Mpeg audio decoder |
JP3355596B2 (en) | 1995-06-06 | 2002-12-09 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Graphics device and display method |
US5870622A (en) | 1995-06-07 | 1999-02-09 | Advanced Micro Devices, Inc. | Computer system and method for transferring commands and data to a dedicated multimedia engine |
US5557759A (en) | 1995-06-07 | 1996-09-17 | International Business Machines Corporation | Video processor with non-stalling interrupt service |
US5748983A (en) | 1995-06-07 | 1998-05-05 | Advanced Micro Devices, Inc. | Computer system having a dedicated multimedia engine and multimedia memory having arbitration logic which grants main memory access to either the CPU or multimedia engine |
FR2735253B1 (en) | 1995-06-08 | 1999-10-22 | Hewlett Packard Co | SYNCHRONIZATION OF DATA BETWEEN SEVERAL ASYNCHRONOUS DATA RETURN DEVICES |
US5757377A (en) | 1995-06-16 | 1998-05-26 | Hewlett-Packard Company | Expediting blending and interpolation via multiplication |
JP2914226B2 (en) * | 1995-06-16 | 1999-06-28 | 日本電気株式会社 | Transformation encoding of digital signal enabling reversible transformation |
US5959637A (en) * | 1995-06-23 | 1999-09-28 | Cirrus Logic, Inc. | Method and apparatus for executing a raster operation in a graphics controller circuit |
US5828383A (en) | 1995-06-23 | 1998-10-27 | S3 Incorporated | Controller for processing different pixel data types stored in the same display memory by use of tag bits |
US5727084A (en) | 1995-06-27 | 1998-03-10 | Motorola, Inc. | Method and system for compressing a pixel map signal using block overlap |
US5673321A (en) | 1995-06-29 | 1997-09-30 | Hewlett-Packard Company | Efficient selection and mixing of multiple sub-word items packed into two or more computer words |
US5920572A (en) | 1995-06-30 | 1999-07-06 | Divicom Inc. | Transport stream decoder/demultiplexer for hierarchically organized audio-video streams |
US5896140A (en) | 1995-07-05 | 1999-04-20 | Sun Microsystems, Inc. | Method and apparatus for simultaneously displaying graphics and video data on a computer display |
US5748178A (en) | 1995-07-18 | 1998-05-05 | Sybase, Inc. | Digital video system and methods for efficient rendering of superimposed vector graphics |
DE69610548T2 (en) * | 1995-07-21 | 2001-06-07 | Koninklijke Philips Electronics N.V., Eindhoven | MULTI-MEDIA PROCESSOR ARCHITECTURE WITH HIGH PERFORMANCE |
US6702736B2 (en) * | 1995-07-24 | 2004-03-09 | David T. Chen | Anatomical visualization system |
US5673401A (en) | 1995-07-31 | 1997-09-30 | Microsoft Corporation | Systems and methods for a customizable sprite-based graphical user interface |
US5870097A (en) | 1995-08-04 | 1999-02-09 | Microsoft Corporation | Method and system for improving shadowing in a graphics rendering system |
US6005582A (en) * | 1995-08-04 | 1999-12-21 | Microsoft Corporation | Method and system for texture mapping images with anisotropic filtering |
US5867166A (en) | 1995-08-04 | 1999-02-02 | Microsoft Corporation | Method and system for generating images using Gsprites |
DE19530483A1 (en) | 1995-08-18 | 1997-02-20 | Siemens Ag | Device and method for real-time processing of a plurality of tasks |
US5838296A (en) | 1995-08-31 | 1998-11-17 | General Instrument Corporation | Apparatus for changing the magnification of video graphics prior to display therefor on a TV screen |
US5812144A (en) * | 1995-09-08 | 1998-09-22 | International Business Machines Corporation | System for performing real-time video resizing in a data processing system having multimedia capability |
US5758177A (en) * | 1995-09-11 | 1998-05-26 | Advanced Microsystems, Inc. | Computer system having separate digital and analog system chips for improved performance |
US5692211A (en) | 1995-09-11 | 1997-11-25 | Advanced Micro Devices, Inc. | Computer system and method having a dedicated multimedia engine and including separate command and data paths |
JP2770801B2 (en) * | 1995-09-27 | 1998-07-02 | 日本電気株式会社 | Video display system |
JP2861890B2 (en) | 1995-09-28 | 1999-02-24 | 日本電気株式会社 | Color image display |
WO1997013362A1 (en) * | 1995-09-29 | 1997-04-10 | Matsushita Electric Industrial Co., Ltd. | Method and device for encoding seamless-connection of telecine-converted video data |
US5638533A (en) * | 1995-10-12 | 1997-06-10 | Lsi Logic Corporation | Method and apparatus for providing data to a parallel processing array |
US5940089A (en) | 1995-11-13 | 1999-08-17 | Ati Technologies | Method and apparatus for displaying multiple windows on a display monitor |
US5835941A (en) | 1995-11-17 | 1998-11-10 | Micron Technology Inc. | Internally cached static random access memory architecture |
US5682484A (en) | 1995-11-20 | 1997-10-28 | Advanced Micro Devices, Inc. | System and method for transferring data streams simultaneously on multiple buses in a computer system |
US5754807A (en) * | 1995-11-20 | 1998-05-19 | Advanced Micro Devices, Inc. | Computer system including a multimedia bus which utilizes a separate local expansion bus for addressing and control cycles |
US6331856B1 (en) | 1995-11-22 | 2001-12-18 | Nintendo Co., Ltd. | Video game system with coprocessor providing high speed efficient 3D graphics and digital audio signal processing |
US6169843B1 (en) | 1995-12-01 | 2001-01-02 | Harmonic, Inc. | Recording and playback of audio-video transport streams |
WO1997021192A1 (en) | 1995-12-06 | 1997-06-12 | Intergraph Corporation | Peer-to-peer parallel processing graphics accelerator |
US5745095A (en) | 1995-12-13 | 1998-04-28 | Microsoft Corporation | Compositing digital information on a display screen based on screen descriptor |
KR100194802B1 (en) | 1995-12-19 | 1999-06-15 | 이계철 | MPEG-2 Encoder Preprocessor for Split Screen Image Processing of Digital TV and High-Definition TV |
US5977933A (en) | 1996-01-11 | 1999-11-02 | S3, Incorporated | Dual image computer display controller |
JPH09212146A (en) | 1996-02-06 | 1997-08-15 | Sony Computer Entertainment:Kk | Address generation device and picture display device |
US5754185A (en) | 1996-02-08 | 1998-05-19 | Industrial Technology Research Institute | Apparatus for blending pixels of a source object and destination plane |
JPH09224265A (en) * | 1996-02-14 | 1997-08-26 | Shinmei Denki Kk | Method and device for recording stereoscopic image |
US5914725A (en) | 1996-03-07 | 1999-06-22 | Powertv, Inc. | Interpolation of pixel values and alpha values in a computer graphics display device |
US6023302A (en) * | 1996-03-07 | 2000-02-08 | Powertv, Inc. | Blending of video images in a home communications terminal |
US5903281A (en) | 1996-03-07 | 1999-05-11 | Powertv, Inc. | List controlled video operations |
JP3183155B2 (en) | 1996-03-18 | 2001-07-03 | 株式会社日立製作所 | Image decoding apparatus and image decoding method |
US6005546A (en) * | 1996-03-21 | 1999-12-21 | S3 Incorporated | Hardware assist for YUV data format conversion to software MPEG decoder |
US5727455A (en) * | 1996-04-01 | 1998-03-17 | Yerman; Arthur J. | Automatic syringe destruction system and process |
CA2173651A1 (en) * | 1996-04-09 | 1997-10-10 | Stephen B. Sutherland | Method of rendering an image |
US5961603A (en) * | 1996-04-10 | 1999-10-05 | Worldgate Communications, Inc. | Access system and method for providing interactive access to an information source through a networked distribution system |
US5726415A (en) * | 1996-04-16 | 1998-03-10 | The Lincoln Electric Company | Gas cooled plasma torch |
US5850232A (en) * | 1996-04-25 | 1998-12-15 | Microsoft Corporation | Method and system for flipping images in a window using overlays |
JP3876392B2 (en) | 1996-04-26 | 2007-01-31 | 富士通株式会社 | Motion vector search method |
US6006286A (en) | 1996-04-26 | 1999-12-21 | Texas Instruments Incorporated | System for controlling data packet transfers by associating plurality of data packet transfer control instructions in packet control list including plurality of related logical functions |
US5802330A (en) * | 1996-05-01 | 1998-09-01 | Advanced Micro Devices, Inc. | Computer system including a plurality of real time peripheral devices having arbitration control feedback mechanisms |
US5761516A (en) | 1996-05-03 | 1998-06-02 | Lsi Logic Corporation | Single chip multiprocessor architecture with internal task switching synchronization bus |
US5802579A (en) | 1996-05-16 | 1998-09-01 | Hughes Electronics Corporation | System and method for simultaneously reading and writing data in a random access memory |
US5864345A (en) * | 1996-05-28 | 1999-01-26 | Intel Corporation | Table-based color conversion to different RGB16 formats |
KR970078629A (en) | 1996-05-28 | 1997-12-12 | 이형도 | Digital Satellite Video Receiver with Multi-Channel Simultaneous Search |
US5793385A (en) * | 1996-06-12 | 1998-08-11 | Chips And Technologies, Inc. | Address translator for a shared memory computing system |
US5903261A (en) | 1996-06-20 | 1999-05-11 | Data Translation, Inc. | Computer based video system |
US5701365A (en) | 1996-06-21 | 1997-12-23 | Xerox Corporation | Subpixel character positioning with antialiasing with grey masking techniques |
US5790795A (en) | 1996-07-01 | 1998-08-04 | Sun Microsystems, Inc. | Media server system which employs a SCSI bus and which utilizes SCSI logical units to differentiate between transfer modes |
US5821949A (en) * | 1996-07-01 | 1998-10-13 | Sun Microsystems, Inc. | Three-dimensional graphics accelerator with direct data channels for improved performance |
JPH1040063A (en) | 1996-07-26 | 1998-02-13 | Canon Inc | Method and device for processing image information |
US5883670A (en) | 1996-08-02 | 1999-03-16 | Avid Technology, Inc. | Motion video processing circuit for capture playback and manipulation of digital motion video information on a computer |
US5818533A (en) | 1996-08-08 | 1998-10-06 | Lsi Logic Corporation | Method and apparatus for decoding B frames in video codecs with minimal memory |
US5949439A (en) * | 1996-08-15 | 1999-09-07 | Chromatic Research, Inc. | Computing apparatus and operating method using software queues to improve graphics performance |
US6233634B1 (en) | 1996-08-17 | 2001-05-15 | Compaq Computer Corporation | Server controller configured to snoop and receive a duplicative copy of display data presented to a video controller |
KR100280285B1 (en) * | 1996-08-19 | 2001-02-01 | 윤종용 | Multimedia processor suitable for multimedia signals |
US5960464A (en) | 1996-08-23 | 1999-09-28 | Stmicroelectronics, Inc. | Memory sharing architecture for a decoding in a computer system |
JP3472667B2 (en) * | 1996-08-30 | 2003-12-02 | 株式会社日立製作所 | Video data processing device and video data display device |
US6256348B1 (en) | 1996-08-30 | 2001-07-03 | Texas Instruments Incorporated | Reduced memory MPEG video decoder circuits and methods |
JPH1174868A (en) * | 1996-09-02 | 1999-03-16 | Toshiba Corp | Information transmission method, coder/decoder in information transmission system adopting the method, coding multiplexer/decoding inverse multiplexer |
JP3268980B2 (en) | 1996-09-02 | 2002-03-25 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Data buffering system |
GB2317292B (en) * | 1996-09-12 | 2000-04-19 | Discreet Logic Inc | Processing image data |
US5940080A (en) * | 1996-09-12 | 1999-08-17 | Macromedia, Inc. | Method and apparatus for displaying anti-aliased text |
KR100483370B1 (en) | 1996-09-17 | 2005-04-15 | 세드나 페이턴트 서비시즈, 엘엘씨 | Set top terminal for an interactive information distribution system |
US5986718A (en) * | 1996-09-19 | 1999-11-16 | Video Magic, Inc. | Photographic method using chroma-key and a photobooth employing the same |
US5920682A (en) * | 1996-09-20 | 1999-07-06 | Seiko Epson Corporation | Multiple layer cluster dither matrix for reducing artifacts in printed images |
KR100218318B1 (en) * | 1996-10-01 | 1999-09-01 | 문정환 | Frequency converting apparatus |
US5953691A (en) * | 1996-10-11 | 1999-09-14 | Divicom, Inc. | Processing system with graphics data prescaling |
US5926647A (en) | 1996-10-11 | 1999-07-20 | Divicom Inc. | Processing system with dynamic alteration of a color look-up table |
US6311204B1 (en) | 1996-10-11 | 2001-10-30 | C-Cube Semiconductor Ii Inc. | Processing system with register-based process sharing |
US5790842A (en) | 1996-10-11 | 1998-08-04 | Divicom, Inc. | Processing system with simultaneous utilization of multiple clock signals |
US5923385A (en) | 1996-10-11 | 1999-07-13 | C-Cube Microsystems Inc. | Processing system with single-buffered display capture |
US5889949A (en) * | 1996-10-11 | 1999-03-30 | C-Cube Microsystems | Processing system with memory arbitrating between memory access requests in a set top box |
US6088355A (en) | 1996-10-11 | 2000-07-11 | C-Cube Microsystems, Inc. | Processing system with pointer-based ATM segmentation and reassembly |
US5923316A (en) * | 1996-10-15 | 1999-07-13 | Ati Technologies Incorporated | Optimized color space conversion |
US5978509A (en) | 1996-10-23 | 1999-11-02 | Texas Instruments Incorporated | Low power video decoder system with block-based motion compensation |
US5896136A (en) | 1996-10-30 | 1999-04-20 | Hewlett Packard Company | Computer graphics system with improved blending |
US6369855B1 (en) | 1996-11-01 | 2002-04-09 | Texas Instruments Incorporated | Audio and video decoder circuit and system |
KR19980042024A (en) | 1996-11-01 | 1998-08-17 | 윌리엄비.켐플러 | A system for multiplexing and blending graphics OSD and motion video pictures for digital television |
KR19980042031A (en) | 1996-11-01 | 1998-08-17 | 윌리엄 비. 켐플러 | Variable resolution screen display system |
KR19980042025A (en) | 1996-11-01 | 1998-08-17 | 윌리엄비.켐플러 | On-Screen Display System Using Real-Time Window Address Calculation |
JP3037161B2 (en) * | 1996-11-08 | 2000-04-24 | 日本電気アイシーマイコンシステム株式会社 | Graphic image display device and graphic image display method |
US6141373A (en) * | 1996-11-15 | 2000-10-31 | Omnipoint Corporation | Preamble code structure and detection method and apparatus |
KR100517189B1 (en) | 1996-12-06 | 2005-12-06 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Graphical and video signal mixing devices and their mixing methods |
US5844608A (en) | 1996-12-12 | 1998-12-01 | Thomson Consumer Electronics, Inc. | Picture element processor for a memory management system |
US5887166A (en) * | 1996-12-16 | 1999-03-23 | International Business Machines Corporation | Method and system for constructing a program including a navigation instruction |
US6018803A (en) | 1996-12-17 | 2000-01-25 | Intel Corporation | Method and apparatus for detecting bus utilization in a computer system based on a number of bus events per sample period |
JP3742167B2 (en) * | 1996-12-18 | 2006-02-01 | 株式会社東芝 | Image display control device |
US6373497B1 (en) | 1999-05-14 | 2002-04-16 | Zight Corporation | Time sequential lookup table arrangement for a display |
US6124878A (en) | 1996-12-20 | 2000-09-26 | Time Warner Cable, A Division Of Time Warner Entertainment Company, L.P. | Optimum bandwidth utilization in a shared cable system data channel |
US5982425A (en) | 1996-12-23 | 1999-11-09 | Intel Corporation | Method and apparatus for draining video data from a planarized video buffer |
US5951644A (en) | 1996-12-24 | 1999-09-14 | Apple Computer, Inc. | System for predicting and managing network performance by managing and monitoring resourse utilization and connection of network |
WO1998029834A1 (en) | 1996-12-30 | 1998-07-09 | Sharp Kabushiki Kaisha | Sprite-based video coding system |
JPH10207446A (en) * | 1997-01-23 | 1998-08-07 | Sharp Corp | Programmable display device |
US5961628A (en) * | 1997-01-28 | 1999-10-05 | Samsung Electronics Co., Ltd. | Load and store unit for a vector processor |
KR100232164B1 (en) | 1997-02-05 | 1999-12-01 | 구자홍 | Trnsport stream demultiplexer |
US6046740A (en) | 1997-02-07 | 2000-04-04 | Seque Software, Inc. | Application testing with virtual object recognition |
US6029197A (en) | 1997-02-14 | 2000-02-22 | Advanced Micro Devices, Inc. | Management information base (MIB) report interface for abbreviated MIB data |
US6208693B1 (en) * | 1997-02-14 | 2001-03-27 | At&T Corp | Chroma-key for efficient and low complexity shape representation of coded arbitrary video objects |
US5790199A (en) * | 1997-03-06 | 1998-08-04 | International Business Machines Corporation | Method and apparatus to accommodate partial picture input to an MPEG-compliant encoder |
US5929872A (en) | 1997-03-21 | 1999-07-27 | Alliance Semiconductor Corporation | Method and apparatus for multiple compositing of source data in a graphics display processor |
JPH10269706A (en) * | 1997-03-27 | 1998-10-09 | Sony Corp | Information reproducing apparatus and information reproducing method |
US5982436A (en) | 1997-03-28 | 1999-11-09 | Philips Electronics North America Corp. | Method for seamless splicing in a video encoder |
US6357045B1 (en) | 1997-03-31 | 2002-03-12 | Matsushita Electric Industrial Co., Ltd. | Apparatus and method for generating a time-multiplexed channel surfing signal at television head-end sites |
US6077084A (en) | 1997-04-01 | 2000-06-20 | Daiichi Kosho, Co., Ltd. | Karaoke system and contents storage medium therefor |
EP0869680A3 (en) | 1997-04-04 | 2000-07-26 | Samsung Electronics Co., Ltd. | Symbol decoding method and apparatus |
US5909559A (en) | 1997-04-04 | 1999-06-01 | Texas Instruments Incorporated | Bus bridge device including data bus of first width for a first processor, memory controller, arbiter circuit and second processor having a different second data width |
US6134378A (en) | 1997-04-06 | 2000-10-17 | Sony Corporation | Video signal processing device that facilitates editing by producing control information from detected video signal information |
US5996051A (en) | 1997-04-14 | 1999-11-30 | Advanced Micro Devices, Inc. | Communication system which in a first mode supports concurrent memory acceses of a partitioned memory array and in a second mode supports non-concurrent memory accesses to the entire memory array |
US5941968A (en) | 1997-04-14 | 1999-08-24 | Advanced Micro Devices, Inc. | Computer system for concurrent data transferring between graphic controller and unified system memory and between CPU and expansion bus device |
US5920495A (en) * | 1997-05-14 | 1999-07-06 | Cirrus Logic, Inc. | Programmable four-tap texture filter |
US5959626A (en) | 1997-05-22 | 1999-09-28 | International Business Machines Corporation | Method and apparatus for manipulating very long lists of data displayed in a graphical user interface using a layered list mechanism |
US6032232A (en) * | 1997-05-29 | 2000-02-29 | 3Com Corporation | Multiported memory access system with arbitration and a source burst limiter for blocking a memory access request |
US6496228B1 (en) | 1997-06-02 | 2002-12-17 | Koninklijke Philips Electronics N.V. | Significant scene detection and frame filtering for a visual indexing system using dynamic thresholds |
US5937199A (en) * | 1997-06-03 | 1999-08-10 | International Business Machines Corporation | User programmable interrupt mask with timeout for enhanced resource locking efficiency |
US5875342A (en) * | 1997-06-03 | 1999-02-23 | International Business Machines Corporation | User programmable interrupt mask with timeout |
US6067322A (en) | 1997-06-04 | 2000-05-23 | Microsoft Corporation | Half pixel motion estimation in motion video signal encoding |
US6236727B1 (en) | 1997-06-24 | 2001-05-22 | International Business Machines Corporation | Apparatus, method and computer program product for protecting copyright data within a computer system |
US5854761A (en) | 1997-06-26 | 1998-12-29 | Sun Microsystems, Inc. | Cache memory array which stores two-way set associative data |
US5963262A (en) | 1997-06-30 | 1999-10-05 | Cirrus Logic, Inc. | System and method for scaling images and reducing flicker in interlaced television images converted from non-interlaced computer graphics data |
US5982381A (en) * | 1997-07-03 | 1999-11-09 | Microsoft Corporation | Method and apparatus for modifying a cutout image for compositing |
JP3607463B2 (en) * | 1997-07-04 | 2005-01-05 | 株式会社リコー | Output circuit |
US6266753B1 (en) | 1997-07-10 | 2001-07-24 | Cirrus Logic, Inc. | Memory manager for multi-media apparatus and method therefor |
US6057850A (en) | 1997-07-15 | 2000-05-02 | Silicon Graphics, Inc. | Blended texture illumination mapping |
US6038031A (en) | 1997-07-28 | 2000-03-14 | 3Dlabs, Ltd | 3D graphics object copying with reduced edge artifacts |
DE19733527A1 (en) * | 1997-08-02 | 1999-02-04 | Philips Patentverwaltung | Communication system with a DMA unit |
US5907295A (en) * | 1997-08-04 | 1999-05-25 | Neomagic Corp. | Audio sample-rate conversion using a linear-interpolation stage with a multi-tap low-pass filter requiring reduced coefficient storage |
KR100249229B1 (en) | 1997-08-13 | 2000-03-15 | 구자홍 | Down Conversion Decoding Apparatus of High Definition TV |
JPH1165989A (en) * | 1997-08-22 | 1999-03-09 | Sony Computer Entertainment:Kk | Information processor |
US6006303A (en) | 1997-08-28 | 1999-12-21 | Oki Electric Industry Co., Inc. | Priority encoding and decoding for memory architecture |
US5936677A (en) * | 1997-09-12 | 1999-08-10 | Microsoft Corporation | Microbuffer used in synchronization of image data |
US5982305A (en) | 1997-09-17 | 1999-11-09 | Microsoft Corporation | Sample rate converter |
US6275507B1 (en) | 1997-09-26 | 2001-08-14 | International Business Machines Corporation | Transport demultiplexor for an MPEG-2 compliant data stream |
US6115422A (en) | 1997-09-26 | 2000-09-05 | International Business Machines Corporation | Protocol and procedure for time base change in an MPEG-2 compliant datastream |
US6549577B2 (en) | 1997-09-26 | 2003-04-15 | Sarnoff Corporation | Computational resource allocation in an information stream decoder |
US6151074A (en) * | 1997-09-30 | 2000-11-21 | Texas Instruments Incorporated | Integrated MPEG decoder and image resizer for SLM-based digital display system |
US6353460B1 (en) | 1997-09-30 | 2002-03-05 | Matsushita Electric Industrial Co., Ltd. | Television receiver, video signal processing device, image processing device and image processing method |
GB2329984B (en) | 1997-10-01 | 2002-07-17 | Thomson Training & Simulation | A Multi-Processor Computer System |
US6167498A (en) | 1997-10-02 | 2000-12-26 | Cirrus Logic, Inc. | Circuits systems and methods for managing data requests between memory subsystems operating in response to multiple address formats |
US6088046A (en) | 1997-10-02 | 2000-07-11 | Cirrus Logic, Inc. | Host DMA through subsystem XY processing |
RU2000111530A (en) | 1997-10-02 | 2002-05-27 | Каналь+Сосьетэ Аноним | METHOD AND DEVICE FOR ENCRYPTED DATA STREAM TRANSLATION |
US6100899A (en) | 1997-10-02 | 2000-08-08 | Silicon Graphics, Inc. | System and method for performing high-precision, multi-channel blending using multiple blending passes |
US6057084A (en) * | 1997-10-03 | 2000-05-02 | Fusion Systems Corporation | Controlled amine poisoning for reduced shrinkage of features formed in photoresist |
US6281873B1 (en) * | 1997-10-09 | 2001-08-28 | Fairchild Semiconductor Corporation | Video line rate vertical scaler |
US6204859B1 (en) | 1997-10-15 | 2001-03-20 | Digital Equipment Corporation | Method and apparatus for compositing colors of images with memory constraints for storing pixel data |
KR100222994B1 (en) * | 1997-10-23 | 1999-10-01 | 윤종용 | Apparatus and method of receiving analog television of digital television receiver |
US5963222A (en) | 1997-10-27 | 1999-10-05 | International Business Machines Corporation | Multi-format reduced memory MPEG decoder with hybrid memory address generation |
US6108047A (en) * | 1997-10-28 | 2000-08-22 | Stream Machine Company | Variable-size spatial and temporal video scaler |
US6002882A (en) | 1997-11-03 | 1999-12-14 | Analog Devices, Inc. | Bidirectional communication port for digital signal processor |
US6208350B1 (en) * | 1997-11-04 | 2001-03-27 | Philips Electronics North America Corporation | Methods and apparatus for processing DVD video |
US6061094A (en) | 1997-11-12 | 2000-05-09 | U.S. Philips Corporation | Method and apparatus for scaling and reducing flicker with dynamic coefficient weighting |
US6046676A (en) | 1997-11-14 | 2000-04-04 | International Business Machines Corporation | Self powered electronic memory identification tag with dual communication ports |
US5943064A (en) | 1997-11-15 | 1999-08-24 | Trident Microsystems, Inc. | Apparatus for processing multiple types of graphics data for display |
US6148033A (en) * | 1997-11-20 | 2000-11-14 | Hitachi America, Ltd. | Methods and apparatus for improving picture quality in reduced resolution video decoders |
US6339434B1 (en) | 1997-11-24 | 2002-01-15 | Pixelworks | Image scaling circuit for fixed pixed resolution display |
KR100232144B1 (en) | 1997-11-28 | 1999-12-01 | 구자홍 | Digital television lookup table processing unit and its method |
US6070231A (en) * | 1997-12-02 | 2000-05-30 | Intel Corporation | Method and apparatus for processing memory requests that require coherency transactions |
DE19753952C2 (en) * | 1997-12-05 | 2003-06-26 | Stahlwerk Ergste Westig Gmbh | Saw band or blade |
US6320619B1 (en) | 1997-12-11 | 2001-11-20 | Intel Corporation | Flicker filter circuit |
US6466210B1 (en) | 1997-12-22 | 2002-10-15 | Adobe Systems Incorporated | Blending image data using layers |
JP3681528B2 (en) | 1997-12-22 | 2005-08-10 | 株式会社ルネサステクノロジ | Graphic processor and data processing system |
US6212590B1 (en) | 1997-12-22 | 2001-04-03 | Compaq Computer Corporation | Computer system having integrated bus bridge design with delayed transaction arbitration mechanism employed within laptop computer docked to expansion base |
US5987555A (en) | 1997-12-22 | 1999-11-16 | Compaq Computer Corporation | Dynamic delayed transaction discard counter in a bus bridge of a computer system |
US6199131B1 (en) | 1997-12-22 | 2001-03-06 | Compaq Computer Corporation | Computer system employing optimized delayed transaction arbitration technique |
US6199127B1 (en) * | 1997-12-24 | 2001-03-06 | Intel Corporation | Method and apparatus for throttling high priority memory accesses |
US6356569B1 (en) * | 1997-12-31 | 2002-03-12 | At&T Corp | Digital channelizer with arbitrary output sampling frequency |
US6121978A (en) | 1998-01-07 | 2000-09-19 | Ati Technologies, Inc. | Method and apparatus for graphics scaling |
US6088027A (en) | 1998-01-08 | 2000-07-11 | Macromedia, Inc. | Method and apparatus for screen object manipulation |
US6351474B1 (en) | 1998-01-14 | 2002-02-26 | Skystream Networks Inc. | Network distributed remultiplexer for video program bearing transport streams |
US6351471B1 (en) | 1998-01-14 | 2002-02-26 | Skystream Networks Inc. | Brandwidth optimization of video program bearing transport streams |
US6064676A (en) | 1998-01-14 | 2000-05-16 | Skystream Corporation | Remultipelxer cache architecture and memory organization for storing video program bearing transport packets and descriptors |
US6111896A (en) | 1998-01-14 | 2000-08-29 | Skystream Corporation | Remultiplexer for video program bearing transport streams with program clock reference time stamp adjustment |
US6028583A (en) | 1998-01-16 | 2000-02-22 | Adobe Systems, Inc. | Compound layers for composited image manipulation |
US6208671B1 (en) | 1998-01-20 | 2001-03-27 | Cirrus Logic, Inc. | Asynchronous sample rate converter |
US6326963B1 (en) | 1998-01-22 | 2001-12-04 | Nintendo Co., Ltd. | Method and apparatus for efficient animation and collision detection using local coordinate systems |
JPH11215647A (en) * | 1998-01-28 | 1999-08-06 | Furukawa Electric Co Ltd:The | Electrical junction box |
US6199149B1 (en) * | 1998-01-30 | 2001-03-06 | Intel Corporation | Overlay counter for accelerated graphics port |
US5973955A (en) | 1998-02-02 | 1999-10-26 | Motorola, Inc. | Comparison circuit utilizing a differential amplifier |
JPH11225292A (en) | 1998-02-04 | 1999-08-17 | Sony Corp | Digital broadcast receiver and reception method |
EP1055201B1 (en) * | 1998-02-17 | 2003-07-09 | Sun Microsystems, Inc. | Graphics system with variable resolution super-sampling |
US6496186B1 (en) | 1998-02-17 | 2002-12-17 | Sun Microsystems, Inc. | Graphics system having a super-sampled sample buffer with generation of output pixels using selective adjustment of filtering for reduced artifacts |
JP3684525B2 (en) | 1998-02-19 | 2005-08-17 | 富士通株式会社 | Multi-screen composition method and multi-screen composition device |
US6178486B1 (en) | 1998-02-19 | 2001-01-23 | Quantum Corporation | Time allocation shared memory arbitration for disk drive controller |
US6081854A (en) | 1998-03-26 | 2000-06-27 | Nvidia Corporation | System for providing fast transfers to input/output device by assuring commands from only one application program reside in FIFO |
US6313822B1 (en) | 1998-03-27 | 2001-11-06 | Sony Corporation | Method and apparatus for modifying screen resolution based on available memory |
US6023738A (en) * | 1998-03-30 | 2000-02-08 | Nvidia Corporation | Method and apparatus for accelerating the transfer of graphical images |
US6133901A (en) | 1998-03-31 | 2000-10-17 | Silicon Graphics, Inc. | Method and system for width independent antialiasing |
US6374244B1 (en) | 1998-04-01 | 2002-04-16 | Matsushita Electric Industrial Co., Ltd. | Data transfer device |
US6092124A (en) * | 1998-04-17 | 2000-07-18 | Nvidia Corporation | Method and apparatus for accelerating the rendering of images |
JP3300280B2 (en) | 1998-04-23 | 2002-07-08 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Image synthesis processing apparatus and method |
WO1999056249A1 (en) | 1998-04-27 | 1999-11-04 | Interactive Silicon, Inc. | Graphics system and method for rendering independent 2d and 3d objects |
US6184908B1 (en) | 1998-04-27 | 2001-02-06 | Ati Technologies, Inc. | Method and apparatus for co-processing video graphics data |
US6510554B1 (en) | 1998-04-27 | 2003-01-21 | Diva Systems Corporation | Method for generating information sub-streams for FF/REW applications |
DE19919412B4 (en) * | 1998-04-29 | 2006-02-23 | Lg Electronics Inc. | Decoder for a digital television receiver |
US6144392A (en) | 1998-04-30 | 2000-11-07 | Ati Technologies, Inc. | Method and apparatus for formatting a texture in a frame buffer |
US6186064B1 (en) * | 1998-05-22 | 2001-02-13 | Heidelberger Druckmaschinen Ag | Web fed rotary printing press with movable printing units |
US6151030A (en) * | 1998-05-27 | 2000-11-21 | Intel Corporation | Method of creating transparent graphics |
US6542162B1 (en) * | 1998-06-15 | 2003-04-01 | International Business Machines Corporation | Color mapped and direct color OSD region processor with support for 4:2:2 profile decode function |
JP3057055B2 (en) | 1998-07-21 | 2000-06-26 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Computer, overlay processing apparatus, and overlay processing execution method |
US6466581B1 (en) | 1998-08-03 | 2002-10-15 | Ati Technologies, Inc. | Multistream data packet transfer apparatus and method |
US6529284B1 (en) * | 1998-08-07 | 2003-03-04 | Texas Instruments Incorporated | Efficient rendering of masks to a screened buffer using a lookup table |
US6342982B1 (en) | 1998-08-31 | 2002-01-29 | Matsushita Electric Industrial Co., Ltd. | Card reader |
US6266100B1 (en) * | 1998-09-04 | 2001-07-24 | Sportvision, Inc. | System for enhancing a video presentation of a live event |
US6229550B1 (en) | 1998-09-04 | 2001-05-08 | Sportvision, Inc. | Blending a graphic |
JP4399910B2 (en) | 1998-09-10 | 2010-01-20 | 株式会社セガ | Image processing apparatus and method including blending processing |
US6157978A (en) | 1998-09-16 | 2000-12-05 | Neomagic Corp. | Multimedia round-robin arbitration with phantom slots for super-priority real-time agent |
US6894706B1 (en) * | 1998-09-18 | 2005-05-17 | Hewlett-Packard Development Company, L.P. | Automatic resolution detection |
US6295048B1 (en) * | 1998-09-18 | 2001-09-25 | Compaq Computer Corporation | Low bandwidth display mode centering for flat panel display controller |
US6271847B1 (en) | 1998-09-25 | 2001-08-07 | Microsoft Corporation | Inverse texture mapping using weighted pyramid blending and view-dependent weight maps |
US6263019B1 (en) | 1998-10-09 | 2001-07-17 | Matsushita Electric Industrial Co., Ltd. | Variable rate MPEG-2 video syntax processor |
US6263023B1 (en) | 1998-10-15 | 2001-07-17 | International Business Machines Corporation | High definition television decoder |
KR100363159B1 (en) | 1998-10-17 | 2003-01-24 | 삼성전자 주식회사 | Digital receiver for simultaneously receiveing multiple channel, and display control method |
US6466624B1 (en) | 1998-10-28 | 2002-10-15 | Pixonics, Llc | Video decoder with bit stream based enhancements |
US6263369B1 (en) | 1998-10-30 | 2001-07-17 | Cisco Technology, Inc. | Distributed architecture allowing local user authentication and authorization |
US6327002B1 (en) | 1998-10-30 | 2001-12-04 | Ati International, Inc. | Method and apparatus for video signal processing in a video system |
US6326984B1 (en) | 1998-11-03 | 2001-12-04 | Ati International Srl | Method and apparatus for storing and displaying video image data in a video graphics system |
US6208354B1 (en) | 1998-11-03 | 2001-03-27 | Ati International Srl | Method and apparatus for displaying multiple graphics images in a mixed video graphics display |
AU1910800A (en) * | 1998-11-09 | 2000-05-29 | Broadcom Corporation | Graphics display system |
US6798420B1 (en) * | 1998-11-09 | 2004-09-28 | Broadcom Corporation | Video and graphics system with a single-port RAM |
US6853385B1 (en) | 1999-11-09 | 2005-02-08 | Broadcom Corporation | Video, audio and graphics decode, composite and display system |
US6636222B1 (en) * | 1999-11-09 | 2003-10-21 | Broadcom Corporation | Video and graphics system with an MPEG video decoder for concurrent multi-row decoding |
US6573905B1 (en) * | 1999-11-09 | 2003-06-03 | Broadcom Corporation | Video and graphics system with parallel processing of graphics windows |
US6661422B1 (en) * | 1998-11-09 | 2003-12-09 | Broadcom Corporation | Video and graphics system with MPEG specific data transfer commands |
US6570922B1 (en) | 1998-11-24 | 2003-05-27 | General Instrument Corporation | Rate control for an MPEG transcoder without a priori knowledge of picture type |
US6215703B1 (en) | 1998-12-04 | 2001-04-10 | Intel Corporation | In order queue inactivity timer to improve DRAM arbiter operation |
US6157415A (en) * | 1998-12-15 | 2000-12-05 | Ati International Srl | Method and apparatus for dynamically blending image input layers |
JP4391610B2 (en) | 1998-12-25 | 2009-12-24 | パナソニック株式会社 | Transport stream processing device |
US6747642B1 (en) * | 1999-01-29 | 2004-06-08 | Nintendo Co., Ltd. | Method and apparatus for providing non-photorealistic cartoon outlining within a 3D videographics system |
US6466220B1 (en) | 1999-03-05 | 2002-10-15 | Teralogic, Inc. | Graphics engine architecture |
US6408436B1 (en) | 1999-03-18 | 2002-06-18 | Next Level Communications | Method and apparatus for cross-connection of video signals |
US6411333B1 (en) | 1999-04-02 | 2002-06-25 | Teralogic, Inc. | Format conversion using patch-based filtering |
US6327000B1 (en) | 1999-04-02 | 2001-12-04 | Teralogic, Inc. | Efficient image scaling for scan rate conversion |
US6421460B1 (en) | 1999-05-06 | 2002-07-16 | Adobe Systems Incorporated | Blending colors in the presence of transparency |
US6924806B1 (en) * | 1999-08-06 | 2005-08-02 | Microsoft Corporation | Video card with interchangeable connector module |
US6538656B1 (en) * | 1999-11-09 | 2003-03-25 | Broadcom Corporation | Video and graphics system with a data transport processor |
JP3591635B2 (en) * | 1999-12-08 | 2004-11-24 | 富士電機機器制御株式会社 | DC-DC converter |
US6372002B1 (en) * | 2000-03-13 | 2002-04-16 | General Electric Company | Functionalized diamond, methods for producing same, abrasive composites and abrasive tools comprising functionalized diamonds |
EP1134698A1 (en) * | 2000-03-13 | 2001-09-19 | Koninklijke Philips Electronics N.V. | Video-apparatus with histogram modification means |
US6662329B1 (en) | 2000-03-23 | 2003-12-09 | International Business Machines Corporation | Processing errors in MPEG data as it is sent to a fixed storage device |
US6426755B1 (en) | 2000-05-16 | 2002-07-30 | Sun Microsystems, Inc. | Graphics system using sample tags for blur |
US6956617B2 (en) * | 2000-11-17 | 2005-10-18 | Texas Instruments Incorporated | Image scaling and sample rate conversion by interpolation with non-linear positioning vector |
US6390502B1 (en) | 2001-03-26 | 2002-05-21 | Delphi Technologies, Inc. | Supplemental restraint assembly for an automotive vehicle |
US6525608B2 (en) * | 2001-03-27 | 2003-02-25 | Intel Corporation | High gain, high bandwidth, fully differential amplifier |
US6519965B1 (en) * | 2002-02-01 | 2003-02-18 | Daniel R. Blanchard, Sr. | Externally illuminated cooler box |
US6944746B2 (en) * | 2002-04-01 | 2005-09-13 | Broadcom Corporation | RISC processor supporting one or more uninterruptible co-processors |
US20050288114A1 (en) | 2002-05-07 | 2005-12-29 | Meadows Joseph S | System and apparatus for propelling and carrying a user within a confined interior |
US20040234340A1 (en) | 2003-05-20 | 2004-11-25 | Cho Yong Min | Mobile levee system |
US8907987B2 (en) * | 2010-10-20 | 2014-12-09 | Ncomputing Inc. | System and method for downsizing video data for memory bandwidth optimization |
- 1999
- 1999-11-09 AU AU19108/00A patent/AU1910800A/en not_active Abandoned
- 1999-11-09 EP EP99962726A patent/EP1145218B1/en not_active Expired - Lifetime
- 1999-11-09 US US09/437,326 patent/US6661427B1/en not_active Expired - Fee Related
- 1999-11-09 WO PCT/US1999/026484 patent/WO2000028518A2/en active IP Right Grant
- 1999-11-09 US US09/437,206 patent/US6380945B1/en not_active Expired - Lifetime
- 1999-11-09 US US09/437,581 patent/US6630945B1/en not_active Expired - Lifetime
- 1999-11-09 US US09/437,579 patent/US6501480B1/en not_active Expired - Lifetime
- 1999-11-09 US US09/437,580 patent/US7911483B1/en not_active Expired - Fee Related
- 1999-11-09 DE DE69917489T patent/DE69917489T2/en not_active Expired - Lifetime
- 1999-11-09 AT AT99962726T patent/ATE267439T1/en not_active IP Right Cessation
- 1999-11-09 US US09/437,325 patent/US6608630B1/en not_active Expired - Fee Related
- 1999-11-09 US US09/437,209 patent/US6189064B1/en not_active Expired - Lifetime
- 1999-11-09 US US09/437,205 patent/US6927783B1/en not_active Expired - Lifetime
- 1999-11-09 EP EP03017061A patent/EP1365385B1/en not_active Expired - Lifetime
- 1999-11-09 US US09/437,348 patent/US6700588B1/en not_active Expired - Lifetime
- 1999-11-09 US US09/437,207 patent/US6744472B1/en not_active Expired - Lifetime
- 1999-11-09 US US09/437,716 patent/US6731295B1/en not_active Expired - Lifetime
- 1999-11-09 US US09/437,208 patent/US6570579B1/en not_active Expired - Lifetime
- 1999-11-09 US US09/437,327 patent/US6738072B1/en not_active Expired - Lifetime
- 2000
- 2000-11-14 US US09/712,736 patent/US6529935B1/en not_active Expired - Lifetime
- 2001
- 2001-11-30 US US10/017,784 patent/US6819330B2/en not_active Expired - Fee Related
- 2002
- 2002-10-28 US US10/282,822 patent/US6762762B2/en not_active Expired - Lifetime
- 2002-12-17 US US10/322,059 patent/US6721837B2/en not_active Expired - Lifetime
- 2003
- 2003-04-25 US US10/423,364 patent/US7057622B2/en not_active Expired - Fee Related
- 2003-07-18 US US10/622,194 patent/US7530027B2/en not_active Expired - Lifetime
- 2003-09-25 US US10/670,627 patent/US7538783B2/en not_active Expired - Fee Related
- 2003-11-13 US US10/712,809 patent/US7002602B2/en not_active Expired - Lifetime
- 2004
- 2004-01-21 US US10/762,937 patent/US7598962B2/en not_active Expired - Fee Related
- 2004-01-22 US US10/762,975 patent/US7209992B2/en not_active Expired - Lifetime
- 2004-01-22 US US10/763,087 patent/US9077997B2/en not_active Expired - Fee Related
- 2004-02-03 US US10/770,851 patent/US7015928B2/en not_active Expired - Lifetime
- 2004-05-10 US US10/842,743 patent/US6879330B2/en not_active Expired - Lifetime
- 2004-05-17 US US10/847,122 patent/US7227582B2/en not_active Expired - Lifetime
- 2004-07-13 US US10/889,820 patent/US20040246257A1/en not_active Abandoned
- 2005
- 2005-04-01 US US11/097,028 patent/US7098930B2/en not_active Expired - Lifetime
- 2005-04-14 US US11/106,038 patent/US7184058B2/en not_active Expired - Fee Related
- 2006
- 2006-08-28 US US11/511,042 patent/US7310104B2/en not_active Expired - Lifetime
- 2006-12-28 US US11/617,468 patent/US7746354B2/en not_active Expired - Fee Related
- 2007
- 2007-04-23 US US11/738,870 patent/US7545438B2/en not_active Expired - Fee Related
- 2007-12-18 US US11/959,139 patent/US7554553B2/en not_active Expired - Fee Related
- 2007-12-18 US US11/959,315 patent/US7554562B2/en not_active Expired - Fee Related
- 2008
- 2008-11-10 US US12/268,036 patent/US8078981B2/en not_active Expired - Fee Related
- 2009
- 2009-05-26 US US12/472,235 patent/US7920151B2/en not_active Expired - Fee Related
- 2009-06-30 US US12/494,864 patent/US20100171761A1/en not_active Abandoned
- 2009-06-30 US US12/494,909 patent/US20100171762A1/en not_active Abandoned
- 2009-08-13 US US12/540,633 patent/US20090295815A1/en not_active Abandoned
- 2010
- 2010-10-15 US US12/905,617 patent/US8390635B2/en not_active Expired - Fee Related
- 2010-11-23 US US12/953,168 patent/US8164601B2/en not_active Expired - Fee Related
- 2010-11-24 US US12/953,774 patent/US20110292082A1/en not_active Abandoned
- 2011
- 2011-04-05 US US13/079,845 patent/US8493415B2/en not_active Expired - Lifetime
- 2011-08-01 US US13/195,115 patent/US8848792B2/en not_active Expired - Fee Related
- 2013
- 2013-03-01 US US13/782,081 patent/US9111369B2/en not_active Expired - Fee Related
- 2015
- 2015-07-02 US US14/790,923 patent/US9575665B2/en not_active Expired - Fee Related
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090168885A1 (en) * | 2007-12-29 | 2009-07-02 | Yong Peng | Two-dimensional interpolation architecture for motion compensation in multiple video standards |
US8588305B2 (en) * | 2007-12-29 | 2013-11-19 | Nvidia Corporation | Two-dimensional interpolation architecture for motion compensation in multiple video standards |
US11445227B2 (en) | 2018-06-12 | 2022-09-13 | Ela KLIOTS SHAPIRA | Method and system for automatic real-time frame segmentation of high resolution video streams into constituent features and modifications of features in each frame to simultaneously create multiple different linear views from same video source |
US11943489B2 (en) | 2018-06-12 | 2024-03-26 | Snakeview Data Science, Ltd. | Method and system for automatic real-time frame segmentation of high resolution video streams into constituent features and modifications of features in each frame to simultaneously create multiple different linear views from same video source |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9575665B2 (en) | Graphics display system with unified memory architecture | |
US7667710B2 (en) | Graphics display system with line buffer control scheme | |
US20120268655A1 (en) | Graphics Display System with Anti-Flutter Filtering and Vertical Scaling Feature |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
AS | Assignment | | Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA. Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001. Effective date: 20160201 |
AS | Assignment | | Owner name: BROADCOM CORPORATION, CALIFORNIA. Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001. Effective date: 20170119 |