Copyright © 2013 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.
This module describes, in general terms, the basic structure and syntax of CSS stylesheets. It defines, in detail, the syntax and parsing of CSS - how to turn a stream of bytes into a meaningful stylesheet. CSS is a language for describing the rendering of structured documents (such as HTML and XML) on screen, on paper, in speech, etc.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
The (archived) public mailing list www-style@w3.org (see instructions) is preferred for discussion of this specification. When sending e-mail, please put the text “css-syntax” in the subject, preferably like this: “[css-syntax] …summary of comment…”
This document was produced by the CSS Working Group (part of the Style Activity).
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This section is not normative.
This module defines the abstract syntax and parsing of CSS stylesheets
and other things which use CSS syntax
(such as the HTML style
attribute).
It defines algorithms for converting a stream of Unicode code points (in other words, text) into a stream of CSS tokens, and then further into CSS objects such as stylesheets, rules, and declarations.
This module defines the syntax and parsing of CSS stylesheets. It supersedes the lexical scanner and grammar defined in CSS 2.1.
This section is not normative.
A CSS document is a series of qualified rules, which are usually style rules that apply CSS properties to elements, and at-rules, which define special processing rules or values for the CSS document.
A qualified rule starts with a prelude then has a {}-wrapped block containing a sequence of declarations. The meaning of the prelude varies based on the context that the rule appears in - for style rules, it’s a selector which specifies what elements the declarations will apply to. Each declaration has a name, followed by a colon and the declaration value. Declarations are separated by semicolons.
A typical rule might look something like this:
p > a { color: blue; text-decoration: underline; }
In the above rule, "p > a" is the selector, which, if the source document is HTML, selects any <a> elements that are children of a <p> element.
"color: blue;" is a declaration specifying that, for the elements that match the selector, their color property should have the value blue. Similarly, their text-decoration property should have the value underline.
At-rules are all different, but they have a basic structure in common. They start with an "@" code point followed by their name. Some at-rules are simple statements, with their name followed by more CSS values to specify their behavior, and finally ended by a semicolon. Others are blocks; they can have CSS values following their name, but they end with a {}-wrapped block, similar to a qualified rule. Even the contents of these blocks are specific to the given at-rule: sometimes they contain a sequence of declarations, like a qualified rule; other times, they may contain additional blocks, or at-rules, or other structures altogether.
Here are several examples of at-rules that illustrate the varied syntax they may contain.
@import "my-styles.css";
The @import at-rule is a simple statement. After its name, it takes a single string or url() function to indicate the stylesheet that it should import.
@page :left { margin-left: 4cm; margin-right: 3cm; }
The @page at-rule consists of an optional page selector (the :left pseudoclass), followed by a block of properties that apply to the page when printed. In this way, it’s very similar to a normal style rule, except that its properties don’t apply to any "element", but rather the page itself.
@media print { body { font-size: 10pt } }
The @media at-rule begins with a media type and a list of optional media queries. Its block contains entire rules, which are only applied when the @media rule’s conditions are fulfilled.
Property names and at-rule names are always identifiers, which have to start with a letter or a hyphen followed by a letter, and then can contain letters, numbers, hyphens, or underscores. You can include any code point at all, even ones that CSS uses in its syntax, by escaping it.
The syntax of selectors is defined in the Selectors spec. Similarly, the syntax of the wide variety of CSS values is defined in the Values & Units spec. The special syntaxes of individual at-rules can be found in the specs that define them.
This section is not normative.
Any Unicode code point can be included in an identifier or quoted string by escaping it. CSS escape sequences start with a backslash (\), and continue with:
An identifier with the value "&B" could be written as \26 B or \000026B.
A "real" space after the escape sequence must be doubled.
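As a non-normative illustration, the hex-escape behavior described above can be sketched as follows. This is a simplified sketch, not the normative algorithm: it handles only hex escapes, and treats only space, tab, and newline as the optional trailing whitespace.

```python
# Simplified sketch (not the normative algorithm): interpret one CSS hex
# escape at the start of `s` and return (code point, rest of string).
def consume_hex_escape(s):
    assert s.startswith("\\")
    i, hex_digits = 1, ""
    # Up to six hex digits follow the backslash.
    while i < len(s) and len(hex_digits) < 6 and s[i] in "0123456789abcdefABCDEF":
        hex_digits += s[i]
        i += 1
    # A single whitespace character after the digits is consumed as part
    # of the escape, which is why a "real" space must be doubled.
    if i < len(s) and s[i] in " \t\n":
        i += 1
    return chr(int(hex_digits, 16)), s[i:]

# Both spellings from the text decode to the identifier "&B":
assert consume_hex_escape("\\26 B") == ("&", "B")
assert consume_hex_escape("\\000026B") == ("&", "B")
```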
This section is not normative.
When errors occur in CSS, the parser attempts to recover gracefully, throwing away only the minimum amount of content before returning to parsing as normal. This is because errors aren’t always mistakes - new syntax looks like an error to an old parser, and it’s useful to be able to add new syntax to the language without worrying about stylesheets that include it being completely broken in older UAs.
The precise error-recovery behavior is detailed in the parser itself, but it’s simple enough that a short description is fairly accurate:
User agents must use the parsing rules described in this specification to generate the CSSOM trees from text/css resources. Together, these rules define what is referred to as the CSS parser.
This specification defines the parsing rules for CSS documents, whether they are syntactically correct or not. Certain points in the parsing algorithm are said to be parse errors. The error handling for parse errors is well-defined: user agents must either act as described below when encountering such problems, or must abort processing at the first error that they encounter for which they do not wish to apply the rules described below.
Conformance checkers must report at least one parse error condition to the user if one or more parse error conditions exist in the document and must not report parse error conditions if none exist in the document. Conformance checkers may report more than one parse error condition if more than one parse error condition exists in the document. Conformance checkers are not required to recover from parse errors, but if they do, they must recover in the same way as user agents.
The input to the CSS parsing process consists of a stream of Unicode code points, which is passed through a tokenization stage followed by a tree construction stage. The output is a CSSStyleSheet object.
Note: Implementations that do not support scripting do not have to actually create a CSSOM CSSStyleSheet object, but the CSSOM tree in such cases is still used as the model for the rest of the specification.
When parsing a stylesheet, the stream of Unicode code points that comprises the input to the tokenization stage may be initially seen by the user agent as a stream of bytes (typically coming over the network or from the local file system). The bytes encode the code points according to a particular character encoding, which the user agent must use to decode the bytes into code points.
To decode the stream of bytes into a stream of code points, UAs must follow these steps.
The algorithms to get an encoding and decode are defined in the Encoding Standard.
First, determine the fallback encoding:
If the stream begins with the byte sequence 40 63 68 61 72 73 65 74 20 22 (not 22)* 22 3B, then get an encoding for the sequence of (not 22)* bytes, decoded per windows-1252.
Note: Anything ASCII-compatible will do, so using windows-1252 is fine.
Note: The byte sequence above, when decoded as ASCII, is the string "@charset "…";", where the "…" is the sequence of bytes corresponding to the encoding’s name.
If the return value was utf-16 or utf-16be, use utf-8 as the fallback encoding; if it was anything else except failure, use the return value as the fallback encoding.
Note: This mimics HTML <meta> behavior.
Otherwise, get an encoding for the value of the charset attribute on the <link> element or <?xml-stylesheet?> processing instruction that caused the style sheet to be included, if any. If that does not return failure, use the return value as the fallback encoding.
Otherwise, use utf-8 as the fallback encoding.
Then, decode the byte stream using the fallback encoding.
Note: the decode algorithm lets the byte order mark (BOM) take precedence, hence the usage of the term "fallback" above.
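As a non-normative illustration, the fallback-encoding determination above might be sketched as follows. The real algorithm defers to the Encoding Standard’s "get an encoding"; this sketch approximates it by lowercasing the label, and the `environment_encoding` parameter is a stand-in for the charset supplied by the referring document.

```python
# Hedged sketch of the fallback-encoding choice described above; not the
# normative algorithm (which uses the Encoding Standard's "get an encoding").
def determine_fallback_encoding(byte_stream, environment_encoding=None):
    # Step 1: look for the literal bytes of `@charset "…";` at the start.
    prefix = b'@charset "'
    if byte_stream.startswith(prefix):
        end = byte_stream.find(b'";', len(prefix))
        if end != -1:
            label = byte_stream[len(prefix):end].decode("windows-1252").strip().lower()
            # utf-16 labels are replaced with utf-8, mimicking HTML <meta>.
            if label in ("utf-16", "utf-16be"):
                return "utf-8"
            return label
    # Steps 2-3: charset from the referring document or link, if any.
    if environment_encoding:
        return environment_encoding
    # Step 4: default to utf-8.
    return "utf-8"

assert determine_fallback_encoding(b'@charset "windows-1252";\nbody{}') == "windows-1252"
assert determine_fallback_encoding(b'@charset "utf-16be";') == "utf-8"
assert determine_fallback_encoding(b'body { color: red }') == "utf-8"
```

The decode step that follows would then let a BOM override whatever this returns, per the note above.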
Anne says that steps 3/4 should be an input to this algorithm from the specs that define importing stylesheet, to make the algorithm as a whole cleaner. Perhaps abstract it into the concept of an "environment charset" or something?
Should we only take the charset from the referring document if it’s same-origin?
The input stream consists of the code points pushed into it as the input byte stream is decoded.
Before sending the input stream to the tokenizer, implementations must make the following code point substitutions:
Implementations must act as if they used the following algorithms to tokenize CSS. To transform a stream of code points into a stream of tokens, repeatedly consume a token until an <EOF-token> is reached, collecting the returned tokens into a stream. Each call to the consume a token algorithm returns a single token, so it can also be used "on-demand" to tokenize a stream of code points during parsing, if so desired.
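The top-level loop described above can be sketched as follows. Note that `consume_token` here is a toy stand-in handling only identifiers, whitespace, and EOF; the real algorithm is defined later in this section.

```python
def consume_token(stream):
    # Toy stand-in for the spec's "consume a token": handles only
    # whitespace, identifiers, and EOF, enough to drive the loop.
    if not stream:
        return ("EOF",)
    if stream[0].isspace():
        # Consecutive whitespace collapses into one <whitespace-token>.
        while stream and stream[0].isspace():
            stream.pop(0)
        return ("whitespace",)
    name = ""
    while stream and (stream[0].isalnum() or stream[0] in "-_"):
        name += stream.pop(0)
    if name:
        return ("ident", name)
    return ("delim", stream.pop(0))

def tokenize(code_points):
    # Repeatedly consume a token until an <EOF-token> is reached.
    stream, tokens = list(code_points), []
    while True:
        token = consume_token(stream)
        tokens.append(token)
        if token[0] == "EOF":
            return tokens

assert tokenize("color red") == [("ident", "color"), ("whitespace",), ("ident", "red"), ("EOF",)]
```

Because each call to `consume_token` returns exactly one token, the same function also supports the "on-demand" use mentioned above.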
The output of the tokenization step is a stream of zero or more of the following tokens: <ident-token>, <function-token>, <at-keyword-token>, <hash-token>, <string-token>, <bad-string-token>, <url-token>, <bad-url-token>, <delim-token>, <number-token>, <percentage-token>, <dimension-token>, <unicode-range-token>, <include-match-token>, <dash-match-token>, <prefix-match-token>, <suffix-match-token>, <substring-match-token>, <column-token>, <whitespace-token>, <CDO-token>, <CDC-token>, <colon-token>, <semicolon-token>, <comma-token>, <[-token>, <]-token>, <(-token>, <)-token>, <{-token>, and <}-token>.
Note: The type flag of hash tokens is used in the Selectors syntax [SELECT]. Only hash tokens with the "id" type are valid ID selectors.
Note: As a technical note, the tokenizer defined here requires only three code points of look-ahead. The tokens it produces are designed to allow Selectors to be parsed with one token of look-ahead, and additional tokens may be added in the future to maintain this invariant.
This section is non-normative.
This section presents an informative view of the tokenizer, in the form of railroad diagrams. Railroad diagrams are more compact than an explicit parser, but often easier to read than a regular expression.
These diagrams are informative and incomplete; they describe the grammar of "correct" tokens, but do not describe error-handling at all. They are provided solely to make it easier to get an intuitive grasp of the syntax of each token.
Diagrams with names such as <foo-token> represent tokens. The rest are productions referred to by other diagrams.
This section defines several terms used during the tokenization phase.
The algorithms defined in this section transform a stream of code points into a stream of tokens.
This section describes how to consume a token from a stream of code points. It will return a single token of any type.
Consume the next input code point.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, if the input stream starts with an identifier, reconsume the current input code point, consume an ident-like token, and return it.
Otherwise, if the next 2 input code points are U+002D HYPHEN-MINUS U+003E GREATER-THAN SIGN (->), consume them and return a <CDC-token>.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, this is a parse error. Return a <delim-token> with its value set to the current input code point.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, reconsume the current input code point, consume an ident-like token, and return it.
Otherwise, if the next input code point is U+007C VERTICAL LINE (|), consume it and return a <column-token>.
Otherwise, return a <delim-token> with its value set to the current input code point.
Otherwise, return a <delim-token> with its value set to the current input code point.
This section describes how to consume a numeric token from a stream of code points. It returns either a <number-token>, <percentage-token>, or <dimension-token>.
If the next 3 input code points would start an identifier, then:
Otherwise, if the next input code point is U+0025 PERCENTAGE SIGN (%), consume it. Create a <percentage-token> with the same representation and value as the returned number, and return it.
Otherwise, create a <number-token> with the same representation, value, and type flag as the returned number, and return it.
This section describes how to consume an ident-like token from a stream of code points. It returns an <ident-token>, <function-token>, <url-token>, or <bad-url-token>.
If the returned string’s value is an ASCII case-insensitive match for "url", and the next input code point is U+0028 LEFT PARENTHESIS ((), consume it. Consume a url token, and return it.
Otherwise, if the next input code point is U+0028 LEFT PARENTHESIS ((), consume it. Create a <function-token> with its value set to the returned string and return it.
Otherwise, create an <ident-token> with its value set to the returned string and return it.
This section describes how to consume a string token from a stream of code points. It returns either a <string-token> or <bad-string-token>.
This algorithm must be called with an ending code point, which denotes the code point that ends the string.
Initially create a <string-token> with its value set to the empty string.
Repeatedly consume the next input code point from the stream:
Otherwise, if the next input code point is a newline, consume it.
Otherwise, if the stream starts with a valid escape, consume an escaped code point and append the returned code point to the <string-token>’s value.
This section describes how to consume a url token from a stream of code points. It returns either a <url-token> or a <bad-url-token>.
Note: This algorithm assumes that the initial "url(" has already been consumed.
Execute the following steps in order:
Otherwise, this is a parse error. Consume the remnants of a bad url, create a <bad-url-token>, and return it.
This section describes how to consume a unicode-range token. It returns a <unicode-range-token>.
Note: This algorithm assumes that the initial "u+" has been consumed, and the next code point verified to be a hex digit or a "?".
Execute the following steps in order:
If any U+003F QUESTION MARK (?) code points were consumed, then:
Otherwise, interpret the digits as a hexadecimal number. This is the start of the range.
This section describes how to consume an escaped code point. It assumes that the U+005C REVERSE SOLIDUS (\) has already been consumed and that the next input code point has already been verified to not be a newline or EOF. It will return a code point.
Consume the next input code point.
This section describes how to check if two code points are a valid escape. The algorithm described here can be called explicitly with two code points, or can be called with the input stream itself. In the latter case, the two code points in question are the current input code point and the next input code point, in that order.
Note: This algorithm will not consume any additional code point.
If the first code point is not U+005C REVERSE SOLIDUS (\), return false.
Otherwise, if the second code point is a newline, return false.
Otherwise, return true.
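The check above is short enough to transcribe directly. This non-normative sketch takes the two code points as arguments, as in the explicit-call form:

```python
# Direct transcription of the two-code-point "valid escape" check above.
def is_valid_escape(first, second):
    if first != "\\":      # U+005C REVERSE SOLIDUS
        return False
    if second == "\n":     # a newline cannot follow the backslash
        return False
    return True

assert is_valid_escape("\\", "2") is True
assert is_valid_escape("\\", "\n") is False
assert is_valid_escape("a", "2") is False
```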
This section describes how to check if three code points would start an identifier. The algorithm described here can be called explicitly with three code points, or can be called with the input stream itself. In the latter case, the three code points in question are the current input code point and the next two input code points, in that order.
Note: This algorithm will not consume any additional code point.
Look at the first code point:
This section describes how to check if three code points would start a number. The algorithm described here can be called explicitly with three code points, or can be called with the input stream itself. In the latter case, the three code points in question are the current input code point and the next two input code points, in that order.
Note: This algorithm will not consume any additional code points.
Look at the first code point:
Otherwise, if the second code point is a U+002E FULL STOP (.) and the third code point is a digit, return true.
Otherwise, return false.
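The case analysis above can be sketched as follows. This is a non-normative sketch: `None` stands in for EOF, and Python's `isdigit` is used as an approximation of the spec's digit definition.

```python
# Sketch of the three-code-point "would start a number" check: a sign
# followed by a digit or ".digit", a bare "." followed by a digit, or a digit.
def starts_number(c1, c2, c3):
    if c1 in "+-":
        if c2 is not None and c2.isdigit():
            return True
        return c2 == "." and c3 is not None and c3.isdigit()
    if c1 == ".":
        return c2 is not None and c2.isdigit()
    return c1 is not None and c1.isdigit()

assert starts_number("+", ".", "5") is True
assert starts_number(".", "5", "x") is True
assert starts_number("-", "a", "1") is False
```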
This section describes how to consume a name from a stream of code points. It returns a string containing the largest name that can be formed from adjacent code points in the stream, starting from the first.
Note: This algorithm does not do the verification of the first few code points that are necessary to ensure the returned code points would constitute an <ident-token>. If that is the intended use, ensure that the stream starts with an identifier before calling this algorithm.
Let result initially be an empty string.
Repeatedly consume the next input code point from the stream:
This section describes how to consume a number from a stream of code points. It returns a 3-tuple of a string representation, a numeric value, and a type flag which is either "integer" or "number".
Note: This algorithm does not do the verification of the first few code points that are necessary to ensure a number can be obtained from the stream. Ensure that the stream starts with a number before calling this algorithm.
Execute the following steps in order:
This section describes how to convert a string to a number. It returns a number.
Note: This algorithm does not do any verification to ensure that the string contains only a number. Ensure that the string contains only a valid CSS number before calling this algorithm.
Divide the string into seven components, in order from left to right:
Return the number s·(i + f·10^(-d))·10^(t·e).
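As a non-normative illustration, the seven components and the final formula might be implemented as follows. The regular expression here is a simplification standing in for the spec's component division; the component definitions in the prose are what is normative.

```python
import re

# Sketch of "convert a string to a number" via the seven components:
# sign s, integer part i, decimal point, fractional part f (d digits),
# exponent indicator, exponent sign t, and exponent e.
def string_to_number(string):
    m = re.fullmatch(r"([+-]?)(\d*)(\.?)(\d*)([eE]?)([+-]?)(\d*)", string)
    sign, integer, _, fraction, _, exp_sign, exponent = m.groups()
    s = -1 if sign == "-" else 1
    i = int(integer) if integer else 0
    f = int(fraction) if fraction else 0
    d = len(fraction)
    t = -1 if exp_sign == "-" else 1
    e = int(exponent) if exponent else 0
    # s * (i + f*10^-d) * 10^(t*e), as in the formula above.
    return s * (i + f * 10**-d) * 10**(t * e)

assert string_to_number("-3.5e2") == -350.0
assert string_to_number("12") == 12
```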
This section describes how to consume the remnants of a bad url from a stream of code points, "cleaning up" after the tokenizer realizes that it’s in the middle of a <bad-url-token> rather than a <url-token>. It returns nothing; its sole use is to consume enough of the input stream to reach a recovery point where normal tokenizing can resume.
Repeatedly consume the next input code point from the stream:
The input to the parsing stage is a stream or list of tokens from the tokenization stage. The output depends on how the parser is invoked, as defined by the entry points listed later in this section. The parser output can consist of at-rules, qualified rules, and/or declarations.
The parser’s output is constructed according to the fundamental syntax of CSS, without regards for the validity of any specific item. Implementations may check the validity of items as they are returned by the various parser algorithms and treat the algorithm as returning nothing if the item was invalid according to the implementation’s own grammar knowledge, or may construct a full tree as specified and "clean up" afterwards by removing any invalid items.
The items that can appear in the tree are:
Note: This specification places no limits on what an at-rule’s block may contain. Individual at-rules must define whether they accept a block, and if so, how to parse it (preferably using one of the parser algorithms or entry points defined in this specification).
Note: Most qualified rules will be style rules, where the prelude is a selector [SELECT] and the block a list of declarations.
Should we go ahead and generalize the important flag to be a list of bang values? Suggested by Zack Weinberg.
Declarations are further categorized as "properties" or "descriptors", with the former typically appearing in qualified rules and the latter appearing in at-rules. (This categorization does not occur at the Syntax level; instead, it is a product of where the declaration appears, and is defined by the respective specifications defining the given rule.)
Note: The non-preserved tokens listed above are always consumed into higher-level objects, either functions or simple blocks, and so never appear in any parser output themselves.
Note: The tokens <<}-token>>, <<)-token>>, <<]-token>>, <bad-string-token>, and <bad-url-token> are always parse errors, but they are preserved in the token stream by this specification to allow other specs, such as Media Queries, to define more fine-grained error-handling than just dropping an entire declaration or block.
This section is non-normative.
This section presents an informative view of the parser, in the form of railroad diagrams. Railroad diagrams are more compact than a state-machine, but often easier to read than a regular expression.
These diagrams are informative and incomplete; they describe the grammar of "correct" stylesheets, but do not describe error-handling at all. They are provided solely to make it easier to get an intuitive grasp of the syntax.
The algorithms defined in this section produce high-level CSS objects from lower-level objects. They assume that they are invoked on a token stream, but they may also be invoked on a string; if so, first perform input preprocessing to produce a code point stream, then perform tokenization to produce a token stream.
"Parse a stylesheet" can also be invoked on a byte stream, in which case the input byte stream section above defines how to decode it into Unicode.
Note: This specification does not define how a byte stream is decoded for other entry points.
Note: Other specs can define additional entry points for their own purposes.
The "parse a rule" entry point is intended for use by the CSSStyleSheet#insertRule method, and similar functions which might exist, which parse text into a single rule.
The "parse a list of declarations" entry point is for the contents of a style attribute, which parses text into the contents of a single style rule.
The "parse a list of component values" entry point serves uses such as the media HTML attribute.
All of the algorithms defined in this spec may be called with either a list of tokens or of component values. Either way produces an identical result.
To parse a stylesheet from a stream of tokens:
To parse a list of rules from a stream of tokens:
To parse a rule from a stream of tokens:
Otherwise, if the current input token is an <at-keyword-token>, consume an at-rule and let rule be the return value.
Otherwise, consume a qualified rule and let rule be the return value. If nothing was returned, return a syntax error.
Note: Unlike "Parse a list of declarations", this parses only a declaration and not an at-rule.
Note: Despite the name, this actually parses a mixed list of declarations and at-rules, as CSS 2.1 does for @page. Unexpected at-rules (which could be all of them, in a given context) are invalid and should be ignored by the consumer.
To parse a list of declarations:
To parse a list of component values:
The following algorithms comprise the parser. They are called by the parser entry points above.
These algorithms may be called with a list of either tokens or of component values. (The difference being that some tokens are replaced by functions and simple blocks in a list of component values.) Similar to how the input stream returned EOF code points to represent when it was empty during the tokenization stage, the lists in this stage must return an <EOF-token> when the next token is requested but they are empty.
An algorithm may be invoked with a specific list, in which case it consumes only that list (and when that list is exhausted, it begins returning <EOF-token>s). Otherwise, it is implicitly invoked with the same list as the invoking algorithm.
Create an initially empty list of rules.
Repeatedly consume the next input token:
Otherwise, reconsume the current input token. Consume a qualified rule. If anything is returned, append it to the list of rules.
Create a new at-rule with its name set to the value of the current input token, its prelude initially set to an empty list, and its value initially set to nothing.
Repeatedly consume the next input token:
Create a new qualified rule with its prelude initially set to an empty list, and its value initially set to nothing.
Repeatedly consume the next input token:
To consume a list of declarations:
Create an initially empty list of declarations.
Repeatedly consume the next input token:
Create a new declaration with its name set to the value of the current input token and its value initially set to the empty list.
Otherwise, consume the next input token.
If the current input token is a <<{-token>>, <<[-token>>, or <<(-token>>, consume a simple block and return it.
Otherwise, if the current input token is a <function-token>, consume a function and return it.
Otherwise, return the current input token.
The ending token is the mirror variant of the current input token. (E.g. if it was called with <<[-token>>, the ending token is <<]-token>>.)
Create a simple block with its associated token set to the current input token and with a value which is initially an empty list.
Repeatedly consume the next input token and process it as follows:
Create a function with a name equal to the value of the current input token, and with a value which is initially an empty list.
Repeatedly consume the next input token and process it as follows:
Several things in CSS, such as the :nth-child() pseudoclass, need to indicate indexes in a list. The An+B microsyntax is useful for this, allowing an author to easily indicate single elements or all elements at regularly-spaced intervals in a list.
The An+B notation defines an integer step (A) and offset (B), and represents the An+Bth elements in a list, for every positive integer or zero value of n, with the first element in the list having index 1 (not 0).
For values of A and B greater than 0, this effectively divides the list into groups of A elements (the last group taking the remainder), and selecting the Bth element of each group.
The An+B notation also accepts the even and odd keywords, which have the same meaning as 2n and 2n+1, respectively.
Examples:
2n+0 /* represents all of the even elements in the list */
even /* same */
4n+1 /* represents the 1st, 5th, 9th, 13th, etc. elements in the list */
The values of A and B can be negative, but only the positive results of An+B, for n ≥ 0, are used.
Example:
-n+6 /* represents the first 6 elements of the list */
If both A and B are 0, the pseudo-class represents no element in the list.
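The semantics above can be sketched as a membership test: a 1-based index is represented iff it equals An+B for some non-negative integer n. This is a non-normative sketch.

```python
# An element at 1-based position `index` matches An+B iff
# index == a*n + b for some integer n >= 0.
def matches_an_plus_b(index, a, b):
    if a == 0:
        return index == b       # only the Bth element (or none, if B is 0)
    n, remainder = divmod(index - b, a)
    return remainder == 0 and n >= 0

# 2n+1 (odd) picks 1, 3, 5, …; -n+6 picks the first six elements.
assert [i for i in range(1, 10) if matches_an_plus_b(i, 2, 1)] == [1, 3, 5, 7, 9]
assert [i for i in range(1, 10) if matches_an_plus_b(i, -1, 6)] == [1, 2, 3, 4, 5, 6]
```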
This section is non-normative.
When A is 0, the An part may be omitted (unless the B part is already omitted). When An is not included and B is non-negative, the + sign before B (when allowed) may also be omitted. In this case the syntax simplifies to just B.
Examples:
0n+5 /* represents the 5th element in the list */
5    /* same */
When A is 1 or -1, the 1 may be omitted from the rule.
Examples:
The following notations are therefore equivalent:
1n+0 /* represents all elements in the list */
n+0  /* same */
n    /* same */
If B is 0, then every Ath element is picked. In such a case, the +B (or -B) part may be omitted unless the A part is already omitted.
Examples:
2n+0 /* represents every even element in the list */
2n   /* same */
Whitespace is permitted on either side of the + or - that separates the An and B parts when both are present.
Valid Examples with white space:
3n + 1
+3n - 2
-n+ 6
+6
Invalid Examples with white space:
3 n
+ 2n
+ 2
The <an+b> type
The An+B notation was originally defined using a slightly different tokenizer than the rest of CSS, resulting in a somewhat odd definition when expressed in terms of CSS tokens. This section describes how to recognize the An+B notation in terms of CSS tokens (thus defining the <an+b> type for CSS grammar purposes), and how to interpret the CSS tokens to obtain values for A and B.
The <an+b> type is defined (using the Value Definition Syntax in the Values & Units spec) as:
<an+b> =
  odd | even |
  <integer> |
  <n-dimension> | '+'?† n | -n |
  <ndashdigit-dimension> | '+'?† <ndashdigit-ident> | <dashndashdigit-ident> |
  <n-dimension> <signed-integer> | '+'?† n <signed-integer> | -n <signed-integer> |
  <ndash-dimension> <signless-integer> | '+'?† n- <signless-integer> | -n- <signless-integer> |
  <n-dimension> ['+' | '-'] <signless-integer> |
  '+'?† n ['+' | '-'] <signless-integer> |
  -n ['+' | '-'] <signless-integer>
where:
<n-dimension>
is a <dimension-token> with its type flag set to "integer", and a unit that is an ASCII case-insensitive match for "n"
<ndash-dimension>
is a <dimension-token> with its type flag set to "integer", and a unit that is an ASCII case-insensitive match for "n-"
<ndashdigit-dimension>
is a <dimension-token> with its type flag set to "integer", and a unit that is an ASCII case-insensitive match for "n-*", where "*" is a series of one or more digits
<ndashdigit-ident>
is an <ident-token> whose value is an ASCII case-insensitive match for "n-*", where "*" is a series of one or more digits
<dashndashdigit-ident>
is an <ident-token> whose value is an ASCII case-insensitive match for "-n-*", where "*" is a series of one or more digits
<integer>
is a <number-token> with its type flag set to "integer"
<signed-integer>
is a <number-token> with its type flag set to "integer", and whose representation starts with "+" or "-"
<signless-integer>
is a <number-token> with its type flag set to "integer", and whose representation starts with a digit
†: When a plus sign (+) precedes an ident starting with "n", as in the cases marked above, there must be no whitespace between the two tokens, or else the tokens do not match the above grammar.
The clauses of the production are interpreted as follows:
<integer>
<n-dimension>
'+'? n
-n
<ndashdigit-dimension>
'+'? <ndashdigit-ident>
<dashndashdigit-ident>
<n-dimension> <signed-integer>
'+'? n <signed-integer>
-n <signed-integer>
<ndash-dimension> <signless-integer>
'+'? n- <signless-integer>
-n- <signless-integer>
<n-dimension> ['+' | '-'] <signless-integer>
'+'? n ['+' | '-'] <signless-integer>
-n ['+' | '-'] <signless-integer>
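As a non-normative, string-level approximation of the grammar above (the real definition operates on CSS tokens, not characters), the following sketch maps common An+B spellings to (A, B) pairs:

```python
import re

# Hedged, string-level approximation of <an+b>; returns (A, B) or None.
# The normative grammar is token-based, so this only handles the common
# textual spellings and the whitespace rules around the + or - sign.
def parse_an_plus_b(text):
    text = text.strip().lower()
    if text == "even":
        return (2, 0)
    if text == "odd":
        return (2, 1)
    m = re.fullmatch(r"([+-]?\d*)n\s*(?:([+-])\s*(\d+))?|([+-]?\d+)", text)
    if not m:
        return None
    coeff, sign, offset, lone = m.groups()
    if lone is not None:                    # bare <integer>: B only
        return (0, int(lone))
    if coeff in ("", "+"):
        a = 1                               # "n" or "+n" means A = 1
    elif coeff == "-":
        a = -1                              # "-n" means A = -1
    else:
        a = int(coeff)
    b = int(sign + offset) if sign else 0
    return (a, b)

assert parse_an_plus_b("2n+1") == (2, 1)
assert parse_an_plus_b("-n+ 6") == (-1, 6)
assert parse_an_plus_b("5") == (0, 5)
assert parse_an_plus_b("even") == (2, 0)
assert parse_an_plus_b("3 n") is None   # whitespace before "n" is invalid
```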
The Values spec defines how to specify a grammar for properties. This section does the same, but for rules.
Just like in property grammars,
the notation <foo>
refers to the "foo" grammar term,
assumed to be defined elsewhere.
Substituting the <foo>
for its definition results in a semantically identical grammar.
Several types of tokens are written literally, without quotes: <colon-token> (written as :), <comma-token> (written as ,), <semicolon-token> (written as ;), <<(-token>>, <<)-token>>, <<{-token>>, and <<}-token>>.
Tokens match if their value is an ASCII case-insensitive match for the value defined in the grammar.
Although it is possible, with escaping, to construct an <ident-token> whose value starts with @ or ends with (, such a token is not an <at-keyword-token> or a <function-token>, and does not match the corresponding grammar definitions.
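For instance (a non-normative sketch), an escape can produce an ident whose value begins with "@", but the result is still an ordinary <ident-token>:

```css
/* "\@media" tokenizes as an <ident-token> with the value "@media";
   it is not an <at-keyword-token>, so it does not start an at-rule. */
.\@media { color: blue; }
```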
<delim-token>s are written with their value enclosed in single quotes. For example, a <delim-token> containing the "+" code point is written as '+'. Similarly, the <[-token> and <]-token>s must be written in single quotes, as they’re used by the syntax of the grammar itself to group clauses.
<whitespace-token> is never indicated in the grammar;
<whitespace-token>s are allowed before, after, and between any two tokens,
unless explicitly specified otherwise in prose definitions.
(For example, if the prelude of a rule is a selector,
whitespace is significant.)
When defining a function or a block, the ending token must be specified in the grammar, but if it’s not present in the eventual token stream, it still matches.
For example, consider the following grammar for a translation function:
translateX( <translation-value> )
However, the stylesheet may end with the function unclosed, like:
.foo { transform: translate(50px
The CSS parser parses this as a style rule containing one declaration, whose value is a function named "translate". This matches the above grammar, even though the ending token didn’t appear in the token stream, because by the time the parser is finished, the presence of the ending token is no longer possible to determine; all you have is the fact that there’s a block and a function.
The CSS parser is agnostic as to the contents of blocks, such as those that come at the end of some at-rules. Defining the generic grammar of the blocks in terms of tokens is non-trivial, but there are dedicated and unambiguous algorithms defined for parsing this.
The <declaration-list> production represents a list of declarations. It may only be used in grammars as the sole value in a block, and represents that the contents of the block must be parsed using the consume a list of declarations algorithm.
Similarly, the <rule-list> production represents a list of rules, and may only be used in grammars as the sole value in a block. It represents that the contents of the block must be parsed using the consume a list of rules algorithm.
Finally, the <stylesheet> production represents a list of rules. It is identical to <rule-list>, except that blocks using it default to accepting all rules that aren’t otherwise limited to a particular context.
For example, the grammar of the @font-face rule is simply:
@font-face { <declaration-list> }
This is a complete and sufficient definition of the rule’s grammar.
For another example, @keyframes rules are more complex, interpreting their prelude as a name and containing keyframe rules in their block. Their grammar is:
@keyframes <keyframes-name> { <rule-list> }
For rules that use <declaration-list>, the spec for the rule must define which properties, descriptors, and/or at-rules are valid inside the rule; this may be as simple as saying "The @foo rule accepts the properties/descriptors defined in this specification/section.", and extension specs may simply say "The @foo rule additionally accepts the following properties/descriptors.". Any declarations or at-rules found inside the block that are not defined as valid must be removed from the rule’s value.
Within a <declaration-list>, !important is automatically invalid on any descriptors. If the rule accepts properties, the spec for the rule must define whether the properties interact with the cascade, and with what specificity. If they don’t interact with the cascade, properties containing !important are automatically invalid; otherwise, using !important is valid and has its usual effect on the cascade origin of the property.
For rules that use <rule-list>, the spec for the rule must define what types of rules are valid inside the rule, same as <declaration-list>, and unrecognized rules must similarly be removed from the rule’s value.
For example, the keyframe rules contained in an @keyframes rule’s block have the grammar:
<keyframe-rule> = <keyframe-selector> { <declaration-list> }
Keyframe rules, then, must further define that they accept as declarations all animatable CSS properties, plus the animation-timing-function property, but that they do not interact with the cascade.
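A non-normative sketch of such a rule in practice (the property values are arbitrary):

```css
@keyframes slide {
  /* each keyframe rule matches <keyframe-selector> { <declaration-list> } */
  from { transform: translateX(0); }
  50%  { transform: translateX(30px); animation-timing-function: ease-in; }
  to   { transform: translateX(60px); }
}
```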
For rules that use <stylesheet>, all rules are allowed by default, but the spec for the rule may define what types of rules are invalid inside the rule.
For example, the grammar of the @media rule is:
@media <media-query-list> { <stylesheet> }
It additionally defines a restriction that the <stylesheet> cannot contain @media rules, which causes them to be dropped from the outer rule’s value if they appear.
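For instance (a non-normative sketch), the inner @media rule below is dropped from the outer rule’s value, while its sibling style rule is kept:

```css
@media print {
  body { font-size: 10pt; }   /* kept */
  @media screen {             /* an @media rule is invalid here: dropped */
    body { font-size: 13px; }
  }
}
```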
To parse a CSS stylesheet, first parse a stylesheet. Interpret all of the resulting top-level qualified rules as style rules, defined below.
If any style rule is invalid, or any at-rule is not recognized or is invalid according to its grammar or context, it’s a parse error. Discard that rule.
A style rule is a qualified rule that associates a selector list [SELECT] with a list of property declarations. They are also called rule sets in [CSS21]. CSS Cascading and Inheritance [CSS3CASCADE] defines how the declarations inside of style rules participate in the cascade.
The prelude of the qualified rule is parsed as a selector list. If this results in an invalid selector list, the entire style rule is invalid.
The content of the qualified rule’s block is parsed as a list of declarations. Unless defined otherwise by another specification or a future level of this specification, at-rules in that list are invalid and must be ignored. Declarations for an unknown CSS property, or whose value does not match the syntax defined by the property, are invalid and must be ignored. The validity of the style rule’s contents has no effect on the validity of the style rule itself. Unless otherwise specified, property names are ASCII case-insensitive.
Note: The names of Custom Properties [CSS-VARIABLES] are case-sensitive.
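For instance (a non-normative sketch; "colr" is a deliberate typo), invalid declarations are dropped without affecting the rest of the rule:

```css
p {
  color: green;
  colr: red;          /* unknown property: this declaration is ignored */
  margin-top: "2em";  /* value doesn't match margin-top's grammar: ignored */
}
/* the style rule itself remains valid, so paragraphs are still green */
```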
Qualified rules at the top-level of a CSS stylesheet are style rules. Qualified rules in other contexts may or may not be style rules, as defined by the context.
For example, qualified rules inside @media rules [CSS3-CONDITIONAL] are style rules, but qualified rules inside @keyframes rules are not [CSS3-ANIMATIONS].
The @charset rule is a very special at-rule associated with determining the character encoding of the stylesheet. In general, its grammar is:
<at-charset-rule> = @charset <string>;
Additionally, an @charset rule is invalid if it is not at the top-level of a stylesheet, or if it is not the very first rule of a stylesheet.
@charset rules have an encoding, given by the value of the <string>.
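For instance, when the following appears as the very first rule of a stylesheet, it is a valid @charset rule whose encoding is "utf-8":

```css
@charset "utf-8";
```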
The @charset rule has no effect on a stylesheet.
Note: However, the algorithm to determine the fallback encoding looks at the first several bytes of the stylesheet to see if they’re a match for the ASCII characters @charset "XXX";, where "XXX" is a sequence of bytes other than 22 hex (ASCII for "). While this resembles an @charset rule, it’s not actually the same thing. For example, the necessary sequence of bytes will spell out something entirely different if the stylesheet is in an encoding that’s not ASCII-compatible, such as UTF-16.
This specification does not define how to serialize CSS in general, leaving that task to the CSSOM and individual feature specifications. However, there is one important facet that must be specified here regarding comments, to ensure accurate "round-tripping" of data from text to CSS objects and back.
The tokenizer described in this specification does not produce tokens for comments, or otherwise preserve them in any way. Implementations may preserve the contents of comments and their location in the token stream. If they do, this preserved information must have no effect on the parsing step, but must be serialized in its position as "/*" followed by its contents followed by "*/".
Unless the implementation has a preserved comment at that position, it must insert the text "/**/" between the serialization of adjacent tokens when the two tokens are of the following pairs:
Note: The preceding pairs of tokens can only be adjacent due to comments in the original text, so the above rule reinserts the minimum number of comments into the serialized text to ensure an accurate round-trip. (Roughly. The <delim-token> rules are slightly too powerful, for simplicity.)
Note: No comment is inserted between consecutive <whitespace-token>s. As a consequence, such token sequences will not "round-trip" exactly. This shouldn’t be an issue, as CSS grammars always interpret any amount of whitespace as identical to a single space.
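For instance (a non-normative sketch), the declaration below produces a <dimension-token> immediately followed by an <ident-token>; an implementation that did not preserve the comment would have to re-insert "/**/" at that position when serializing, since the naive concatenation would re-tokenize differently:

```css
.foo { font: 12px/**/serif; }
/* tokenizes as <dimension 12px> then <ident serif>; serialized
   without the empty comment, "12pxserif" would re-tokenize as a
   single <dimension-token> with the unit "pxserif" */
```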
To serialize an <an+b> value, let s initially be the empty string:
Return s.
This section is non-normative.
Note: The point of this spec is to match reality; changes from CSS2.1 are nearly always because CSS 2.1 specified something that doesn’t match actual browser behavior, or left something unspecified. If some detail doesn’t match browsers, please let me know as it’s almost certainly unintentional.
Changes in decoding from a byte stream:
Tokenization changes:
Parsing changes:
An+B changes from Selectors Level 3 [SELECT]:
Thanks for feedback and contributions from David Baron, 呂康豪 (Kang-Hao Lu), Marc O’Morain, and Zack Weinberg.
Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.
All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]
Examples in this specification are introduced with the words "for example" or are set apart from the normative text with class="example", like this:
This is an example of an informative example.
Informative notes begin with the word "Note" and are set apart from the normative text with class="note", like this:
Note, this is an informative note.
Conformance to this specification is defined for three conformance classes:
A style sheet is conformant to this specification if all of its statements that use syntax defined in this module are valid according to the generic CSS grammar and the individual grammars of each feature defined in this module.
A renderer is conformant to this specification if, in addition to interpreting the style sheet as defined by the appropriate specifications, it supports all the features defined by this specification by parsing them correctly and rendering the document accordingly. However, the inability of a UA to correctly render a document due to limitations of the device does not make the UA non-conformant. (For example, a UA is not required to render color on a monochrome monitor.)
An authoring tool is conformant to this specification if it writes style sheets that are syntactically correct according to the generic CSS grammar and the individual grammars of each feature in this module, and meet all other conformance requirements of style sheets as described in this module.
So that authors can exploit the forward-compatible parsing rules to assign fallback values, CSS renderers must treat as invalid (and ignore as appropriate) any at-rules, properties, property values, keywords, and other syntactic constructs for which they have no usable level of support. In particular, user agents must not selectively ignore unsupported component values and honor supported values in a single multi-value property declaration: if any value is considered invalid (as unsupported values must be), CSS requires that the entire declaration be ignored.
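For instance (a non-normative sketch; "novel-keyword" is hypothetical), this behavior is what makes fallback declarations reliable:

```css
p {
  margin: 1em;                /* understood everywhere; the fallback */
  margin: 1em novel-keyword;  /* a UA that doesn't support the keyword
                                 must ignore this entire declaration */
}
/* in such a UA, the margin remains 1em */
```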
To avoid clashes with future CSS features, the CSS2.1 specification reserves a prefixed syntax for proprietary and experimental extensions to CSS.
Prior to a specification reaching the Candidate Recommendation stage in the W3C process, all implementations of a CSS feature are considered experimental. The CSS Working Group recommends that implementations use a vendor-prefixed syntax for such features, including those in W3C Working Drafts. This avoids incompatibilities with future changes in the draft.
Once a specification reaches the Candidate Recommendation stage, non-experimental implementations are possible, and implementors should release an unprefixed implementation of any CR-level feature they can demonstrate to be correctly implemented according to spec.
To establish and maintain the interoperability of CSS across implementations, the CSS Working Group requests that non-experimental CSS renderers submit an implementation report (and, if necessary, the testcases used for that implementation report) to the W3C before releasing an unprefixed implementation of any CSS features. Testcases submitted to W3C are subject to review and correction by the CSS Working Group.
Further information on submitting testcases and implementation reports can be found on the CSS Working Group’s website at http://www.w3.org/Style/CSS/Test/. Questions should be directed to the public-css-testsuite@w3.org mailing list.
No properties defined.
Anne says that steps 3/4 should be an input to this algorithm from the specs that define importing a stylesheet, to make the algorithm as a whole cleaner. Perhaps abstract it into the concept of an "environment charset" or something?
Should we only take the charset from the referring document if it’s same-origin?
Should we go ahead and generalize the important flag to be a list of bang values? Suggested by Zack Weinberg.
The CSSStyleSheet#insertRule method, and similar functions which might exist, which parse text into a single rule.
The style attribute, which parses text into the contents of a single style rule.
The media HTML attribute.