Polyfilling CSS with CSS Parser Extensions
In April I attended #BlinkOn, the conference for web platform contributors in the Chromium open source project. At the conference I gave a presentation about “CSS Parser Extensions”, a wild idea I have to fix CSS polyfilling once and for all.
If you didn't know, polyfilling CSS features is extremely hard, mainly because the CSS Parser discards what it does not understand. So what if, instead of having authors write their own parser and cascade to polyfill a CSS feature, they could teach the parser some new tricks?
~
⚠️ This is a personal idea. There is nothing official about this … yet.
The goal of the talk I gave (slides, recording) was to nerd snipe some of the engineers present and get their input on this wild idea I have been sitting on for the past two years. Next steps will be to whip up a proper explainer and then take this to the CSS WG to seek broader interest. It will take years to get this done, if it ever gets done.
~
Intro
When it comes to the adoption of new CSS features, web developers often tell me that they won't use a feature yet because said feature does not have cross-browser support. Within their organization there is often still the expectation that websites need to look exactly the same in every browser. Or they simply want to write code only once, code that they know works fine across various browsers (including some older versions of those browsers).
As a result, the adoption of new CSS features, including features that are a perfect Progressive Enhancement, is blocked until the feature is Baseline Widely available. Assuming an average time-to-interop of ±1.5 years, this means a CSS feature only gets wider adoption about 4 years after it first shipped in a browser.
(There are some exceptions of course, and there are many other factors contributing to the (non-)adoption of a feature, but very often that's how it goes.)

To speed up the adoption of new CSS features, polyfills can be created. For example, the polyfill for container queries has proven its worth. However, this polyfill, like any other CSS polyfill, is not perfect and comes with some limitations. Furthermore, ±65% of the code of that polyfill is dedicated to parsing CSS and extracting the necessary information, such as property values and container at-rules, from the CSS, which is a bit ridiculous.
CSS Parser Extensions aims to remove these limitations and to ease this information gathering by allowing authors to extend the CSS Parser with new syntaxes, properties, keywords, etc. for it to support. By tapping directly into the CSS parser, CSS polyfills become easier to author, have a reduced size & performance footprint, and become more robust.
~
How to (try to) polyfill CSS today
The problem is clearly stated in the talk The Dark Side of Polyfilling CSS by Philip Walton. It is recommended to watch this presentation to get a good understanding of the problem. Below is an abbreviated and less-detailed version of the problem statement.
When authors create a polyfill for a CSS feature, they can't rely on the CSS parser giving them the information about, for example, the declarations they want to polyfill. This is because the CSS Parser throws away rules and declarations it couldn't successfully parse. Therefore, polyfills need to gather and re-process the stylesheets themselves in order to get the tokens for the feature that they want to polyfill.
While this sounds as simple as performing these three steps, it's more complicated than it looks.
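To make that concrete, here is a minimal snippet, using the standard constructable stylesheet API, with the size property standing in for any not-yet-supported declaration. The unknown declaration simply never shows up in the CSSOM:

// A declaration the parser does not understand gets dropped at parse time.
const sheet = new CSSStyleSheet();
sheet.replaceSync('.box { width: 10px; size: 10px 20px; }');

const style = sheet.cssRules[0].style;
console.log(style.getPropertyValue('width')); // '10px'
console.log(style.getPropertyValue('size'));  // '' (the unknown declaration is gone)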
- Gather all styles
- Parse the styles
- Apply the styles
Each step has its own challenges and limitations, detailed below, and nicely summarized by this quote by Philip Walton from 2016 (!):
If you've never tried writing a CSS polyfill yourself, then you've probably never experienced the pain.
1. Gather all styles
Collecting all styles is in itself already challenging, as authors need to gather these from various sources:
- document.styleSheets
- document.adoptedStyleSheets
- Element-attached styles
Even after collecting references to all these stylesheets, the work isn't done, as authors also need to keep an eye out for mutations in any of those sources.
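A rough sketch of what that gathering-and-watching step can look like; the selector for element-attached styles and the observer options are just one possible approach:

// Collect the current set of style sources.
const styleSources = [
  ...document.styleSheets,                 // <style> and <link rel="stylesheet">
  ...document.adoptedStyleSheets,          // constructable stylesheets
  ...document.querySelectorAll('[style]'), // element-attached styles
];

// Watch for added/removed stylesheets and changed style attributes.
// Note: changes to adoptedStyleSheets are not observable this way, which is part of the pain.
const observer = new MutationObserver(() => {
  // re-collect and re-process everything…
});
observer.observe(document.documentElement, {
  subtree: true,
  childList: true,
  attributes: true,
  attributeFilter: ['style', 'media', 'href'],
});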
2. Parse the styles
With all style sheets in hand, authors can then continue to parse the contents of the style sheets. This sounds like a no-brainer, but it already comes with challenges, as in many cases they can't access the contents of stylesheets served from a CORS-protected origin.
In case they do have access to the style sheet's contents, authors need to manually tokenize and parse the contents, duplicating work that was already done by the UA.
The custom CSS parser they let loose on the source code must also work with the entire CSS Syntax. For example, when a UA ships a feature like CSS nesting, the polyfillβs CSS parser also needs to support it. As a result, CSS parsers used in CSS polyfills constantly need to play catch-up to support the latest syntax.
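A sketch of getting at the raw CSS text, assuming a helper along these lines. The CSSOM is of no help here, since the declarations you care about were already dropped:

// Hypothetical helper a polyfill could use to get the raw text of a stylesheet.
async function getStylesheetText(sheet) {
  if (sheet.ownerNode?.tagName === 'STYLE') {
    return sheet.ownerNode.textContent;    // inline <style> element
  }
  // External sheet: re-fetch it. This fails for CORS-protected origins.
  const response = await fetch(sheet.href);
  return response.text();
}
// The returned text still needs to go through the polyfill's own tokenizer/parser,
// which must understand all of CSS (nesting, custom properties, …), not just the
// feature being polyfilled.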
3. Apply the styles
With the styles parsed, authors must then figure out which elements they need to apply things to. For declarations, for example, this basically means that they need to write their own cascade. They also need to implement CSS features such as Media Queries and take those into account. And oh, there's also Shadow DOM, which complicates things.
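A naive sketch of that apply step, just to illustrate how much of the engine's work ends up being re-implemented. The rule shape here is whatever the polyfill's own parser produced; specificity, !important, and shadow trees are conveniently ignored:

function applyPolyfilledDeclarations(parsedRules) {
  for (const rule of parsedRules) {
    // Re-implement conditional rules such as Media Queries.
    if (rule.media && !matchMedia(rule.media).matches) continue;

    // Find the matching elements (light DOM only; shadow roots need their own pass).
    for (const element of document.querySelectorAll(rule.selector)) {
      // Write the result somewhere the engine will pick it up, typically an inline
      // style or a custom property. This already deviates from a real cascade:
      // specificity and !important are not respected.
      element.style.setProperty(rule.property, rule.value);
    }
  }
}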
~
Proposed Solution
What if, instead of having polyfill authors write their own CSS parser and cascade, they could teach the parser some new tricks?
As in: give authors access to the CSS Parser using JavaScript (through CSS.parser), so that they can extend it with new syntaxes, properties, keywords, and functions to support.
- CSS Keywords: CSS.parser.registerKeyword(…)
- CSS Functions: CSS.parser.registerFunction(…)
- CSS Syntaxes: CSS.parser.registerSyntax(…)
- CSS Declarations: CSS.parser.registerProperty(…)
After registering one of these features with the CSS Parser, the parser won't discard the tokens associated with it, and authors can use the feature as if the parser never discarded them.
For example, when registering an unsupported CSS property + syntax, the parser will keep the declaration, and the property will show up in things like window.getComputedStyle(). Feature checks using CSS.supports() / @supports will then also pass.
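As a purely illustrative example, in a browser without native support for the size shorthand discussed further below; the CSS.parser call is of course hypothetical:

// Before registration: the engine knows nothing about `size`.
CSS.supports('size', '10px 20px');   // → false

// After a hypothetical registration (see the examples below)…
CSS.parser.registerProperty('size', { syntax: '[<length-percentage [0,∞]> | auto]{1,2}' /* … */ });

// …the parser keeps the declaration and the checks start passing.
CSS.supports('size', '10px 20px');                          // → true
getComputedStyle(document.body).getPropertyValue('size');   // → no longer the empty string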
In addition to these registrations, some utility functions should be made available to authors as well. For example: a way to get the specified style of an element, a way to compute lengths to the pixel values they represent, a way to figure out which registrations have already been done, etc.
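What those utilities would look like is completely open. Purely as an illustration, with every name below made up except getSpecifiedStyle, which also appears in the examples further down:

const element = document.querySelector('.box');

// All hypothetical:
const specified = CSS.parser.getSpecifiedStyle(element);    // the specified style of an element
const px = CSS.parser.computeLength(element, CSS.rem(2));   // e.g. 2rem → 32 (px)
const registered = CSS.parser.getRegisteredProperties();    // what has been registered so far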
~
Examples
⚠️ These examples should give you an idea of what should be possible with CSS Parser Extensions. The syntax here is not set in stone at all. It is something I came up with while exploring the possibilities.
Register a keyword: random
In the following example the nonexistent random keyword gets registered. Whenever the CSS engine parses that keyword, it will return a random value.
CSS.parser
  .registerKeyword('random:<number>', {
    caching_mode: CSS.parser.caching_modes.PER_MATCH,
    invalidation: CSS.parser.invalidation.NONE,
  })
  .computeTo((match) => {
    return Math.random();
  })
;
The replacement is meant to happen only once per occurrence in the style sheet, which is controlled by the caching_mode and invalidation options.
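Once registered, the keyword could then simply be used in author CSS. A hypothetical usage follows; whether the keyword could be used anywhere a <number> is accepted, including inside calc(), is an assumption on my part:

const sheet = new CSSStyleSheet();
sheet.replaceSync(`
  .confetti {
    opacity: random;                 /* parsed instead of discarded */
    rotate: calc(random * 360deg);   /* assuming it substitutes inside calc() */
  }
`);
document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];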
Register a function: light-dark()
The following snippet polyfills the wonderful light-dark(). It's a function that returns one of two passed-in colors depending on the used color-scheme for the element. When the color-scheme is light, the first value gets used, and when it's something else, the second value gets returned.
CSS.parser
  .registerFunction(
    'light-dark(light:<color>, dark:<color>):<color>',
    { invalidation: ['color-scheme'] }
  )
  .computeTo((match, args) => {
    const { element, property, propertyValue } = match;
    const colorScheme = CSS.parser.getSpecifiedStyle(element)
      .getPropertyValue('color-scheme');
    if (colorScheme == 'light') return args.light;
    return args.dark;
  })
;
Because the returned value depends on the color-scheme value, the color-scheme property is listed as a property that causes an invalidation.
Register a function: at-rule()
The following code snippet polyfills the wonderful at-rule() function that allows you to feature-detect at-rules. It returns a <boolean> based on a check.
CSS.parser
  .registerFunction('at-rule(keyword:<string>):<boolean>', {
    caching_mode: CSS.parser.caching_modes.GLOBAL,
  })
  .computeTo((match, args) => {
    switch (args.keyword) {
      case '@view-transition':
        return ("CSSViewTransitionRule" in window);
      case '@starting-style':
        return ("CSSStartingStyleRule" in window);
      // …
      default:
        return false;
    }
  })
;
Because the detection should only be done once, the result of the check can be cached globally.
Custom functions are excluded here. Maybe these should be added, or maybe not.
Register a property: size
The CSS size property is a brand new property that was only resolved on recently. It still needs to be specced and implemented, and will act as a shorthand for setting the width and height in one go.
The property gets registered with the standard traits a property has. In addition to its computeTo method, which determines its computed value, the onMatch method returns a block of declarations to be used as a replacement whenever a declaration using the property is detected.
CSS.parser
  .registerProperty('size', {
    syntax: '[<length-percentage [0,∞]> | auto]{1,2}',
    initialValue: 'auto',
    inherits: false,
    percentages: 'inline-size',
    animatable: CSS.parser.animation_types.BY_COMPUTED_VALUE,
  })
  .computeTo(…)
  .onMatch((match, computedValue) => {
    const { element, specifiedValue } = match;
    return {
      'width': computedValue[0],
      'height': computedValue[1] ?? computedValue[0],
    };
  })
;
Register a property: scroll-timeline
Here's another example of registering a property, namely the scroll-timeline property. The registration and matching can be done separately, and the example also shows that some data on a match can be stored for later use. Here it's a ResizeObserver that gets added to, and later removed from, the matched element.
CSS.parser.registerProperty('scroll-timeline', { … });
CSS.parser
  .matchProperty('scroll-timeline')
  // No .computeTo … so it would just return the declared value
  .onMatch(parserMatch => {
    const resizeObserver = new ResizeObserver((entries) => {
      // …
    });
    resizeObserver.observe(parserMatch.element);
    parserMatch.data.set('ro', resizeObserver);
  })
  .onElementUnmatch(parserMatch => {
    const resizeObserver = parserMatch.data.get('ro');
    resizeObserver.disconnect();
  })
;
Register a syntax
It's also possible to register a syntax for later use.
CSS.parser
  .registerSyntax(
    '<single-animation-timeline>',
    'auto | none | <dashed-ident> | <scroll()> | <view()>'
  )
;
CSS.parser
  .registerProperty('animation-timeline', {
    syntax: '<single-animation-timeline>#',
    initialValue: 'auto',
    inherits: false,
    animatable: CSS.parser.ANIMATABLE_NO,
  })
  .onMatch(…);
Fully fledged example: position: fixed / visual
In w3c/csswg-drafts#7475 I suggested an extension to position: fixed that allows you to indicate which thing the element should be fixed to.
- position: fixed / layout = current behavior, would be the same as position: fixed
- position: fixed / visual = fixed against the visual viewport, also when zoomed in
- position: fixed / fixed (lacking a better name) = positioned against the unzoomed visual viewport
The code to polyfill that could look something like this:
// Register syntaxes used by the polyfill.
CSS.parser.registerSyntax('<position>', 'static | relative | absolute | sticky | fixed');
CSS.parser.registerSyntax('<position-arg>', 'layout | visual | visual-unzoomed');

// Extend the existing `position` property registration, only overriding certain parts.
// The non-overridden parts remain untouched.
const positionWithArgRegistration = CSS.parser
  .registerProperty('position', {
    extends: 'position',
    syntax: '<position> [/ arg:<position-arg>]?',
  })
  // No .computeTo … so the syntax will compute individually
;

const cssPositionFixed = positionWithArgRegistration
  .with('position', 'fixed') // Only `position: fixed`
  .with('arg')               // Any arg value
  .onMatch((match) => {
    const { element, specifiedValue } = match;
    const { position, arg } = specifiedValue;
    const styles = CSS.parser.getSpecifiedStyle(element);
    const visualViewport = determineVisualViewport();

    switch (arg) {
      case 'layout':
        return {
          position: 'fixed',
        };
      case 'visual':
        return {
          position: 'fixed',
          bottom: (() => {
            if (styles.bottom.toString() != 'auto') {
              return styles.bottom.add(CSS.px(visualViewport.height));
            }
          })(),
        };
      case 'visual-unzoomed':
        return {
          position: 'fixed',
          // @TODO: change all other properties
        };
    }
  })
;

window.visualViewport.addEventListener('resize', () => {
  cssPositionFixed.triggerMatch();
});
~
Outcome and considerations
Benefits
By allowing polyfill authors to extend the CSS Parser that ships with the UA, they no longer need to gather all styles, parse stylesheets themselves, or figure out when to apply styles to an element. The resulting polyfills will be easier to author, smaller in size, perform faster, and be more robust and efficient.
With robust CSS polyfills powered by CSS Parser Extensions available, the adoption of CSS features is no longer blocked on Baseline widely available cross-browser support, leading to an increased adoption rate.
Furthermore, this would also allow browser vendors to more easily prototype a feature, as it would require less upfront investment.
Risks / Caveats
For this to work, the timing of when things get executed is of the utmost importance. You don't want to run blocking JavaScript in between the Style, Layout, and Paint steps of the pixel pipeline. This is something that needs to be carefully thought about. Maybe this should be modeled as an Observer?
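Purely to illustrate what an observer-shaped model could look like; nothing below exists, and the class name and options are made up:

// Hypothetical: the engine batches parser matches and hands them to JavaScript
// outside of the Style, Layout, and Paint steps.
const parserObserver = new CSSParserObserver((matches) => {
  for (const match of matches) {
    // compute replacement declarations here; they get applied on the next style recalc
  }
});
parserObserver.observe({ properties: ['size'] });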
Something that is currently not included is polyfilling selectors. I have not given this any thought yet, so this could be added once it has been properly looked into. My initial guess is that polyfilling selectors like :has-interest could easily be done, but that polyfilling pseudo-elements would be a bit more difficult, as you'd also need to modify the DOM for those to work.
Additionally, not every CSS feature can be polyfilled. Things like View Transitions come to mind.
And finally, this whole idea stands or falls with buy-in from all browser vendors. If one of the (major) browser vendors is not on board with this, then this project will fail.
~
So, whatβs next?
It's been 12 years since The Extensible Web Manifesto launched and 9 years since Philip Walton shared how hard it is to polyfill CSS, yet somehow not much has changed since then.
To try and move the needle here, the next step for me is to whip up a proper explainer and to take this to the CSS WG to seek broader interest. Some of my colleagues at Google have expressed interest in this and have offered their help, and I know that Brian is interested in this as well … so maybe more people (from other browser vendors) will be too.
To set expectations here, though: don't expect this to land any time soon. This will take years to build, if it gets built, which I hope it will.
~