
Polyfilling CSS with CSS Parser Extensions

By: Bramus!

In April I attended #BlinkOn, the conference for web platform contributors in the Chromium open source project. At the conference I gave a presentation about “CSS Parser Extensions”, a wild idea I have to fix CSS polyfilling once and for all.

If you didn’t know, polyfilling CSS features is extremely hard, mainly because the CSS Parser discards what it does not understand. So what if, instead of having authors write their own parser and cascade to polyfill a CSS feature, they could teach the parser some new tricks?

~

⚠️ This is a personal idea. There is nothing official about this … yet.

The goal of the talk I gave (slides, recording) was to nerd snipe some of the engineers present and get their input on this wild idea I have been sitting on for the past two years. Next steps will be to whip up a proper explainer and then take this to the CSS WG to seek broader interest. It will take years to get this done, if it ever gets done.

~

Intro

When it comes to the adoption of new CSS features, web developers often tell me that they won’t use a feature yet because it does not have cross-browser support. Within their organization there is often still the expectation that websites need to look exactly the same in every browser. Or they simply want to write code only once – code they know works fine across various browsers (including some older versions of those browsers).

As a result, the adoption of new CSS features – including features that are a perfect Progressive Enhancement – is blocked until the feature is Baseline Widely available. Assuming an average time-to-interop of ±1.5 years, this means a CSS feature only gets wider adoption about 4 years after it first shipped in a browser.

(There are some exceptions of course, and there are many other factors contributing to the (not-)adoption of a feature, but very often that’s how it goes)

Timeline of a typical feature release. Between the feature shipping in the first browser and the feature becoming Baseline Widely Available, there is a minimum of 4 years.

To speed up the adoption of new CSS features, polyfills can be created. For example, the polyfill for container queries has proven its worth. However, this polyfill – like any other CSS polyfill – is not perfect and comes with some limitations. Furthermore, ±65% of the code of that polyfill is dedicated to parsing CSS and extracting the necessary information such as property values and container at-rules from the CSS – which is a bit ridiculous.

CSS Parser Extensions aims to remove these limitations and to ease this information gathering by allowing authors to extend the CSS Parser with new syntaxes, properties, keywords, etc. for it to support. By tapping directly into the CSS parser, CSS polyfills become easier to author, have a reduced size & performance footprint, and become more robust.

~

How to (try to) polyfill CSS today

The problem is clearly stated in the talk The Dark Side of Polyfilling CSS by Philip Walton. It is recommended to watch this presentation to get a good understanding of the problem. Below is an abbreviated and less-detailed version of the problem statement.

When authors create a polyfill for a CSS feature, they can’t rely on the CSS parser giving them the information about, for example, the declarations they want to polyfill. This is because the CSS Parser throws away rules and declarations it couldn’t successfully parse. Therefore, polyfills need to gather and re-process the stylesheets themselves in order to get the tokens for the feature that they want to polyfill.

While this boils down to performing these 3 steps, each of them is more complicated than it looks:

  1. Gather all styles
  2. Parse the styles
  3. Apply the styles

Each step has its own challenges and limitations, detailed below, and nicely summarized by this quote by Philip Walton from 2016 (!):

If you’ve never tried writing a CSS polyfill yourself, then you’ve probably never experienced the pain.

– Philip Walton, The Dark Side of Polyfilling CSS, Dec 2016

1. Gather all styles

Collecting all styles in itself is already challenging, as authors need to gather these from various sources:

  1. document.styleSheets
  2. document.adoptedStyleSheets
  3. Element attached styles

After collecting all references to these stylesheets, the work still isn’t done, as authors also need to keep an eye out for mutations in any of those sources.
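
To make this concrete, here is a minimal sketch of what that gathering step can look like today, using only standard DOM APIs. It deliberately leaves out details such as shadow roots and stylesheets that finish loading later:

function collectStyleSources(root = document) {
  return {
    sheets: [
      ...Array.from(root.styleSheets),     // <style> and <link rel="stylesheet">
      ...root.adoptedStyleSheets,          // constructed stylesheets
    ],
    inlineStyled: root.querySelectorAll('[style]'), // element-attached styles
  };
}

// Gathering once is not enough: the polyfill also has to watch for changes.
const styleObserver = new MutationObserver(() => {
  const { sheets, inlineStyled } = collectStyleSources();
  // re-run the polyfill's parse/apply steps here
});
styleObserver.observe(document.documentElement, {
  childList: true,
  subtree: true,
  attributes: true,
  attributeFilter: ['style'],
});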

2. Parse the styles

With all style sheets in hand, authors can then continue to parse their contents. This sounds like a no-brainer, but it already comes with challenges, as in many cases they can’t access the contents of stylesheets served from a CORS-protected origin.

In case they do have access to the style sheet’s contents, authors need to manually tokenize and parse the contents, duplicating work that was already done by the UA.

The custom CSS parser they let loose on the source code must also work with the entire CSS Syntax. For example, when a UA ships a feature like CSS nesting, the polyfill’s CSS parser also needs to support it. As a result, CSS parsers used in CSS polyfills constantly need to play catch-up to support the latest syntax.
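
As a rough illustration of why this is painful: even reading the rules the UA already parsed can fail for cross-origin sheets, forcing a refetch – after which the polyfill still has to run its own parser over the returned text.

async function getStylesheetText(sheet) {
  try {
    // Throws a SecurityError for CORS-protected stylesheets.
    return Array.from(sheet.cssRules).map((rule) => rule.cssText).join('\n');
  } catch {
    if (!sheet.href) throw new Error('Stylesheet contents are inaccessible');
    // Refetch the stylesheet (which may itself fail without CORS).
    return (await fetch(sheet.href)).text();
  }
}

// Either way, the polyfill then has to tokenize and parse that text itself,
// duplicating work the UA already did.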

3. Apply the styles

With the styles parsed, authors must then figure out which elements to apply them to. For declarations, for example, this basically means that they need to write their own cascade. They also need to implement CSS features such as Media Queries and take those into account. And oh, there’s also the Shadow DOM which complicates things.
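
Below is a grossly simplified sketch of that “apply” step. The function and its inputs are made up for illustration; a real polyfill additionally needs specificity and source-order resolution (its own cascade) and shadow DOM handling.

function applyPolyfilledRule(selectorText, declarations, mediaText) {
  // Media queries have to be re-evaluated by the polyfill itself…
  if (mediaText && !window.matchMedia(mediaText).matches) return;

  // …and "the cascade" degenerates here to last-one-wins inline styles.
  for (const element of document.querySelectorAll(selectorText)) {
    for (const [property, value] of Object.entries(declarations)) {
      element.style.setProperty(property, value);
    }
  }
}

// e.g. applyPolyfilledRule('.sidebar', { width: '300px' }, '(min-width: 600px)');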

~

Proposed Solution

What if, instead of having polyfill authors write their own CSS parser and cascade, they could teach the parser some new tricks?

As in: give authors access to the CSS Parser using JavaScript – through CSS.parser – so that they can extend it with new syntaxes, properties, keywords, and functions to support.

  • CSS Keywords: CSS.parser.registerKeyword(…)
  • CSS Functions: CSS.parser.registerFunction(…)
  • CSS Syntaxes: CSS.parser.registerSyntax(…)
  • CSS Declarations: CSS.parser.registerProperty(…)

After registering one of these features with the CSS Parser, the parser won’t discard the tokens associated with it and authors can use the feature as if the parser never discarded them.

For example, when registering an unsupported CSS Property + Syntax, the parser will keep the declaration, and the property will show up in things like window.getComputedStyle(). Feature checks using CSS.supports() / @supports() will then also pass.
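
As a sketch of what that would mean, combining today’s real feature-detection APIs with the proposed (not-yet-existing) registration call, using the size property from one of the examples further down:

CSS.supports('size', '100px');            // false today in engines without `size`

// Hypothetical registration via the proposed API:
CSS.parser.registerProperty('size', { /* … see the example below … */ });

CSS.supports('size', '100px');            // would now report true
getComputedStyle(document.body).getPropertyValue('size'); // would no longer be empty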

In addition to these registrations, some utility functions should be made available to authors as well. For example, ways to get the specified style of an element, a way to compute lengths to the pixel value they represent, a way to figure out which registrations have already been done, etc.
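
For illustration only – none of these utility names are specced. getSpecifiedStyle is the one used in the examples below; the other two are made-up placeholders for the kinds of utilities meant here:

const element = document.querySelector('.target');

const specified = CSS.parser.getSpecifiedStyle(element);   // specified style of an element
const pixels = CSS.parser.computeLength(element, '2em');   // resolve a length to a pixel value
const registrations = CSS.parser.getRegistrations();       // inspect what has already been registered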

~

Examples

⚠️ These examples should give you an idea of what should be possible with CSS Parser Extensions. The syntax here is not set in stone at all. It is something I came up with while exploring the possibilities.

Register a keyword: random

In the following example the nonexistent random keyword gets registered. Whenever the CSS engine parses that keyword, it will return a random value.

CSS.parser
  .registerKeyword('random:<number>', {
    caching_mode: CSS.parser.caching_modes.PER_MATCH,
    invalidation: CSS.parser.invalidation.NONE,
  })
  .computeTo((match) => {
    return Math.random();
  });

The replacement is meant only to happen once per occurrence in the style sheet, which is controlled by the caching_mode and invalidation options.

Register a function: light-dark()

The following snippet polyfills the wonderful light-dark(). It’s a function that returns one of two passed-in colors depending on the color-scheme used for the element: when the color-scheme is light the first value gets used, otherwise the second value gets returned.

CSS.parser
  .registerFunction(
    'light-dark(light:<color>, dark:<color>):<color>',
    { invalidation: ['color-scheme'] }
  )
  .computeTo((match, args) => {
    const { element, property, propertyValue } = match;
    const colorScheme = CSS.parser
      .getSpecifiedStyle(element)
      .getPropertyValue('color-scheme');

    if (colorScheme == 'light') return args.light;
    return args.dark;
  })
;

Because the returned value depends on the color-scheme value, the color-scheme property is listed as a property that causes an invalidation.

Register a function: at-rule()

The following code snippet polyfills the wonderful at-rule() function that allows you to feature detect at-rules. It returns a <boolean> based on a check.

CSS.parser
  .registerFunction('at-rule(keyword:<string>):<boolean>', {
    caching_mode: CSS.parser.caching_modes.GLOBAL,
  })
  .computeTo((match, args) => {
    switch (args.keyword) {
      case '@view-transition':
        return ("CSSViewTransitionRule" in window);
      case '@starting-style':
        return ("CSSStartingStyleRule" in window);
      // …
      default:
        return false;
    }
  })
;

Because the detection should only be done once, the result of the check can be cached globally.

Custom functions are excluded here. Maybe these should be added, or maybe not.

Register a property: size

The CSS size property is a brand new property that the CSS WG only recently resolved on. It still needs to be specced and implemented, and will act as a shorthand for setting the width and height in one go.

The property gets registered with the standard traits a property has. In addition to its computeTo method that determines its computed value, the onMatch method returns a block of declarations to be used as a replacement whenever a declaration using the property is detected.

CSS.parser
  .registerProperty('size', {
    syntax: '[<length-percentage [0,∞]> | auto]{1,2}',
    initialValue: 'auto',
    inherits: false,
    percentages: 'inline-size',
    animatable: CSS.parser.animation_types.BY_COMPUTED_VALUE,
  })
  .computeTo(…)
  .onMatch((match, computedValue) => {
    const { element, specifiedValue } = match;
    return {
      'width': computedValue[0],
      'height': computedValue[1] ?? computedValue[0],
    };
  });

Register a property: scroll-timeline

Here’s another example of registering a property, namely the scroll-timeline property. The registration and matching can be done separately, and it also shows that some data on a match can be stored for later use. Here it’s a ResizeObserver that gets added to – and later removed from – the matched element.

CSS.parser.registerProperty('scroll-timeline', { … });

CSS.parser
  .matchProperty('scroll-timeline')
  // No .computeTo … so it would just return the declared value
  .onMatch(parserMatch => {
    const resizeObserver = new ResizeObserver((entries) => {
        // …
    });
    resizeObserver.observe(parserMatch.element);
    parserMatch.data.set('ro', resizeObserver);
  })
  .onElementUnmatch(parserMatch => {
    const resizeObserver = parserMatch.data.get('ro');
    resizeObserver.disconnect();
  })
;

Register a syntax

It’s also possible to register a syntax for later use.

CSS.parser
  .registerSyntax(
    '<single-animation-timeline>',
    'auto | none | <dashed-ident> | <scroll()> | <view()>'
  )
;

CSS.parser
  .registerProperty('animation-timeline', {
    syntax: '<single-animation-timeline>#',
    initialValue: 'auto',
    inherits: false,
    animatable: CSS.parser.ANIMATABLE_NO,
  })
  .onMatch(…);

Fully fledged example: position: fixed / visual

In w3c/csswg-drafts#7475 I suggested an extension to position: fixed that allows you to indicate which thing the element should be fixed to.

  1. position: fixed / layout = current behavior (would be the same as position: fixed)
  2. position: fixed / visual = fixed against the visual viewport, also when zoomed in
  3. position: fixed / fixed (lacking a better name) = positioned against the unzoomed visual viewport

The code to polyfill that could look something like this:

// Register syntaxes used by the polyfill.
CSS.parser.registerSyntax('<position>', 'static | relative | absolute | sticky | fixed');
CSS.parser.registerSyntax('<position-arg>', 'layout | visual | visual-unzoomed');

// Extend the existing `position` property registration, only overriding certain parts.
// The non-overridden parts remain untouched.
const positionWithArgRegistration = CSS.parser
  .registerProperty('position', {
    extends: 'position',
    syntax: '<position> [/ arg:<position-arg>]?',
  })
  // No .computeTo … so the syntax will compute individually
;

const cssPositionFixed =
    positionWithArgRegistration
      .with('position', 'fixed') // Only `position: fixed`
      .with('arg') // Any arg value
    .onMatch((match) => {
        const { element, specifiedValue } = match;
        const { position, arg } = specifiedValue;

        const styles = CSS.parser.getSpecifiedStyle(element);
        const visualViewport = determineVisualViewport();

        switch (arg) {
            case 'layout':
                return {
                    position: 'fixed',
                };

            case 'visual':
                return {
                    position: 'fixed',
                    bottom: (() => {
                        if (styles.bottom.toString() != 'auto') {
                            return styles.bottom.add(CSS.px(visualViewport.height));
                        }
                    })(),
                };

            case 'visual-unzoomed':
                return {
                    position: 'fixed',
                    // @TODO: change all other properties
                };
        }
    })
;

window.visualViewport.addEventListener('resize', () => {
    cssPositionFixed.triggerMatch();
});

~

Outcome and considerations

Benefits

By allowing polyfill authors to extend the CSS Parser that ships with the UA, they no longer need to gather all styles, parse stylesheets themselves, or figure out when to apply styles to an element. The resulting polyfills will be easier to author, smaller in size, faster, and more robust.

With robust CSS polyfills powered by CSS Parser Extensions available, the adoption of CSS features is no longer blocked on Baseline widely available cross-browser support, leading to an increased adoption rate.

Furthermore, this would also allow browser vendors to more easily prototype a feature, as it would require less upfront investment.

Risks / Caveats

For this to work, the timing of when things get executed is of the utmost importance. You don’t want to run blocking JavaScript in between the Style-Layout-Paint steps of the pixel pipeline. This is something that needs to be carefully thought about. Maybe this should be modeled as an Observer?
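
Purely speculative, but if the callbacks were modeled after existing observers – with matches batched and delivered outside the rendering-critical path – it could look something like this (CSSParserObserver is an invented name):

const parserObserver = new CSSParserObserver((matches) => {
  for (const match of matches) {
    // React to newly matched / unmatched declarations here,
    // asynchronously, outside of style/layout/paint.
  }
});
parserObserver.observe({ property: 'scroll-timeline' });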

Something that is currently not included is polyfilling selectors. I have not given this any thought yet, so this could be added once it has been properly looked into. My initial guess is that polyfilling selectors like :has-interest could easily be done, but that polyfilling pseudo-elements would be a bit more difficult, as you’d also need to modify the DOM for those to work.

Additionally not every CSS feature can be polyfilled. Things like View Transitions come to mind.

And finally, this whole idea stands or falls with buy-in from all browser vendors. If one of the (major) browser vendors is not on board with this, then this project will fail.

~

So, what’s next?

It’s been 12 years since The Extensible Web Manifesto launched and 9 years since Philip Walton shared how hard it is to polyfill CSS, yet somehow not much has changed since then.

To try and move the needle here, the next step for me is to whip up a proper explainer and to take this to the CSS WG to seek broader interest. Some of my colleagues at Google have expressed interest in this and have offered their help, and I know that Brian is interested in this as well … so maybe more people (from other browser vendors) will be too.

To set expectations here, though: don’t expect this to land any time soon. This will take years to build, if it gets built, which I hope it will.

~


A decade of employment

May 4 is a special day. Not only because it’s Star Wars day, but because it was on that day in 2015 that I was hired for my first full-time job. Today marks one decade of being employed.

I don’t suppose ten years of being employed is a milestone most people think about. And why should they? Most people work for many years—decades. One decade isn’t so special.

But it’s a sentimental milestone for me, a man with spinal muscular atrophy who, back in 2014—six years unemployed out of grad school—was close to giving up on ever being part of the workforce. After being passed over time and again, I began to believe that my disability was too great an obstacle.

No one will ever hire me, I thought. They won’t take a chance on me when there are plenty of people my age with more experience who don’t have the severe physical limitations I do. They won’t hire me because I make them feel uncomfortable and they don’t know how to act around me. I know that feeling because, inexplicably, I’m the same way. If I meet someone with a severe disability, I too feel uncomfortable and don’t know how to act. So how can I blame them?

But all along there was another voice. Quiet and sometimes drowned out, but never fully silenced, it reminded me that I had a knack for making websites. I needed to improve, yes, but I knew my website-making abilities were good and I knew they would get better. More importantly I loved it. I had the bug. It was the perfect medium to work in, melding visual creativity, writing, systems, and problem solving all into one. There was no way I was not going to be making websites.

So make websites I did, any chance I got, for anyone who would take them and (sometimes) pay for them. Eventually I had evidence I could point to and say, “I can make websites like this.” And long though it took, someone finally noticed.

What to do when you can’t do anything

I went through the same phases many kids went through. When I pondered what I wanted to be when I grew up, I thought of the usual suspects—police, firefighter, doctor. I naively assumed I could do anything, and since no one told me otherwise, I went on happily through my childhood assuming I would do what anyone else would.

By high school, reality began setting in. Okay, clearly I’m not going to be a construction worker. What am I good at? What do I like? Can I get paid to play video games? What even is that—video game tester? Alas, these were the olden days before streaming was a way people could get paid to play video games.

I knew even back then in the late 90s that my future was intertwined with the computer—that humble, dorky machine with its quaint desktop metaphors laid atop a promising blue sky. I remembered that first fascination of using Windows 95 and playing solitaire and pinball. I remembered being amazed by the Weezer video on the Windows 95 installation disc. As a kid in the 90s, I immediately and intuitively realized that the computer was a creation machine. I began using it for art and writing. I didn’t know then just how important the computer was going to be in my life, but I knew I would be using one in whatever job I got.

The world wide web

As a kid in the 90s, my first experience with the internet was using AOL via a 14kbps dial-up connection. It was an embarrassingly long time before I knew there was a web outside of AOL. In high school, a friend of mine was into sports writing and he published his articles on a website. That was my first taste of web publishing and I thought it was so cool. I started helping him with his website and I began publishing my own website about my favorite freeware games.

A screenshot of my game page in 2005, shortly after learning a bit of HTML and CSS. It has a simple, old-school feel to it with some image buttons and a sidebar. It has a list of featured games, along with a little thumbnail image in the main content area. The featured pick is a game called Gene Rally that looks to be a racing game. And among the other games is Space Invaders, Tetris, and a game called Inn, where your character is a ninja.
A screenshot of my game page in 2005, shortly after learning a bit of HTML and CSS

I started with WYSIWYG builders and sketchy free hosting. But in the fall of 2005—the start of my junior year of college, majoring in business information systems—I took a course that in hindsight I can say was life-changing. It was Advanced Languages I with Dr. Rodney Pearson. In that course he taught JavaScript and HTML[1] and I couldn’t get enough. I eagerly looked forward to homework assignments (for the first time ever) and I would complete them as quickly as I could without procrastination (which I rarely did for other courses). Even the interactive coding exams were fun.

That course sealed the deal—I was going to learn HTML, CSS, and JavaScript, and I was going to be a ✨ Web Designer ✨. I took to voraciously converting my fledgling web empire from WYSIWYG slop to handcrafted HTML and CSS. I felt the power coursing through my veins with every click of the refresh button. And in August 2005, using a friend’s credit card, I purchased this very domain, blakewatson.com.

I spent the remainder of my college years (including grad school) learning everything I could about web design, which is what I called it back then. I read formative books like Designing for the Web by Mark Boulton. I followed revered professionals like Jeffrey Zeldman on Twitter and read industry publications like A List Apart. I slurped up everything I could about making websites and I continued to build my own. I got started with a new, up-and-coming CMS called WordPress.

Six long years

After grad school I began to apply for jobs. It was tough because Mississippi isn’t exactly a tech hub. We are usually behind in just about any kind of metric, especially ones that are based on technology. But disability programs are notoriously difficult to get on, and each state does them differently. So moving to another state and potentially starting over was not an option I wanted to tackle. I was on a good program that paid for caregivers, and I would need those caregivers if I planned to work. So I was stuck in Mississippi, for better or worse. One promising interview was with my alma mater, Mississippi State University.

At least I thought it was promising. After doing pretty well with the technical interview, save for one slightly embarrassing moment, I met with a department head whose expression as I entered his office told me exactly what was about to happen—I wasn’t getting hired.

No matter, there will be other interviews. And there were. But each one was similar—employers were interested in my work, but upon learning of my disability, they’d pass. At least that’s what it felt like. Maybe that wasn’t it at all. But I didn’t know, and as the years went on that thought was eating at me.

It wasn’t so bad at first—not having to go to school, not having to go to work; spending my time making websites for fun or just goofing off playing video games. I received a modest SSI check and lived at home with my mother, so there wasn’t a dire need for me to work. And that seemed to fit most people’s expectation of what I should have been doing. Nobody would blame me for sitting this one out.

Nobody but me. My own self-worth seemed inextricably linked to having a job and contributing something to society. Why? Was it part of an American ethos? Was it a biblical imperative? Was it a way to make up for my lack of masculine physicality?[2] No matter how you look at it, society normally takes a dim view of people who don’t work even though they could. I assumed society figured I couldn’t work but I knew I could and it was killing me. Some nights I’d cry myself to sleep, wallowing in despair, and praying that somehow I’d find a job and be more than a burden to those around me.

In hindsight I can see that I was far too hard on myself. I didn’t give myself enough credit for the things I did manage to accomplish—picking up the odd freelance job making a website, volunteering with a local nonprofit to produce nearly all of its media. Making connections, learning.

We still have our dreams

In one of the most defining moments of my unemployed years, I was ghosted by a company after interviewing in person for three hours and being told that the decision was between me and one other candidate. I took it pretty hard. That year in particular had been rough because I was nearly hired by Automattic, the company that runs WordPress.com (and is currently mired in a controversial legal battle).

In the aftermath of those two events I was inspired by an episode of The Big Web Show and ended up penning a blog post, called We still have our dreams, in which I strengthened my resolve to be a website maker, whether anyone would pay me to do it or not.

That emotional roller coaster defined my six years of unemployment as I careened between the pit of despair and the seed of hope. But as the years passed, despair was winning. Then suddenly…

A mad genius approaches

There weren’t a ton of web development jobs available in Mississippi. Well, maybe there were more than I thought, but I didn’t know where to look. I did find a handful of marketing agencies and one in particular caught my eye. Mad Genius seemed like an awesome place to work. It looked like a breeding ground for creativity and I experienced FOMO immediately.

I contacted them and interviewed even though they didn’t have an open position. It was a casual interview at a coffee shop with then-interactive-lead Ryan Farmer. It was such a refreshing interview because we mostly talked shop. Ryan complimented my work, I complimented Mad G’s work, and we shared our mutual nerd crush on Chris Coyier and CSS-Tricks.

A year later, they had an opening for an Interactive Designer role and I immediately applied. I got a trial offer and I was ecstatic. They threw me right into the fray and I immediately needed to learn new skills like SVG animation. It was hard, but it was fun.

Several weeks later, on May 4, 2015, I got the full-time offer.

After several years at Mad Genius, I got the opportunity to join MRI Technologies working on hardware management apps for NASA and Collins Aerospace. I’ve been working with the team at MRI since 2019. I’ve loved both of these jobs and learned a ton at each one.

Working with a disability

When you have a disability as severe as SMA, you’re going to run into a number of challenges in work, and in life in general. In the United States, there are disparate programs available to help with these challenges by providing education, career help, and personal care assistance. It’s hard to navigate these programs, and they tend to differ between states, which makes it even more difficult.

I know people with severe disabilities who would work if they had the opportunity, if they didn’t fear losing their benefits, and if someone would give them a chance. I would love to see the situation improve, and I have a little bit of a wish list (unfortunately, government moves at a snail’s pace and some of this stuff is either far away or never going to happen).

  • Home and community-based programs—the ones that supply caregivers to those of us who need them—need to be administered at the federal level, or otherwise in such a way that frees participants to move between states. At present, moving is difficult because of differing state rules and the need to reapply for services.
  • We need consistent Medicaid buy-in rules across all states that remove means testing for working people with disabilities who require personal care and/or other medical necessities in order to work. I know people who have multiple degrees paid for by state programs who can’t get a job when they graduate because the amount of money they would make would render them ineligible for services, which would then prevent them from working and making the money in the first place.
  • Finally, we need better and consistent documentation and application of these rules. Too often it’s confusing bordering on incomprehensible. And that doesn’t just confuse participants. It can confuse government workers who might apply the rules incorrectly.

Looking forward

If someone had told me in 2014 where I would be at in 2025, I would have been shocked. I’m excited to see what lies ahead. I have fears, of course. I have the fear of my condition worsening to the point where I can no longer work. I fear that AI might render my skill set meaningless eventually.

But I still enjoy making websites. And I think I’ll continue to enjoy making websites in the future. Things get bogged down from time to time with framework wars, technical debt, and questionable design trends. But the fundamental principles of building for the web don’t change so rapidly. When I published HTML for People last year, the primary inspiration was to tap into that feeling I had those 20 years ago now, when I was learning HTML and putting it on the web for the first time. That magic is just as relevant now as it was then, maybe even more so because it stubbornly eschews the walled gardens and exploitive practices of the big corporate platforms that thrive today.

We can always look for ways to improve our work for the people that use it. I’ll never tire of putting the humanity into my work. And I’ll never lose my wonder for the web.

I’m grateful to be working in this field, and to all the people along the way that helped me cross this ten-year milestone.

May the fourth be with you.


  1. I’m intentionally listing JavaScript first because that’s what the course felt like. We were primarily learning to program with JavaScript and learning enough HTML to give us a playground to work in. I know that sounds a bit backward, and I’m not sure he’d even teach it that way now, but it was effective. But just to make it clear that I’m not some JavaScript bro zealot, here’s proof of my adoration of HTML. ↩︎

  2. Spinal muscular atrophy causes just that: muscular atrophy. The muscles become extremely weak, resulting in a multitude of problems, including scoliosis that needs to be corrected with spinal fusion. SMA is progressive, so it continues to worsen as time goes on. ↩︎

My Dygma Raise 2 keyboard review

My previous post explained my journey with keyboards that led me to the Dygma Raise 2. It is now time to write my actual review of this awesome keyboard!

The purchased configuration

First, let’s talk about the configuration I chose for my Dygma Raise 2:

  • Color: white (to change my usual black keyboard)
  • Language: English UK (ISO keyboard layout)
  • Switches: Kailh Box White (clicky, the closest to cherry MX Blue)
  • Wireless: Yes (low latency radio frequency and bluetooth)
  • Tenting: Yes (ability to raise the keyboard vertically, see the screenshot)
  • RGBW Underglow: No
  • Extra keycaps: Yes: dash ISO
  • Extra switches: No

Mandatory picture (sorry, the quality isn’t great):

Figure 1: Image showing my Dygma Raise 2 keyboard with a mouse between the 2 split parts

My review

The one bad thing

Before talking about all the great things about this keyboard, let’s address the main issue I have with it. Well, not really with the keyboard itself, but with the extra keycaps I purchased. You may have noticed that I ordered some extra “dash” keycaps. “Dash” keycaps all have a dash on them, as shown here.

I purchased those because the layout I use is azerty, not qwerty, and Dygma does not offer an azerty layout. That’s fine because I know the layout well enough to not look at the keys. So I thought the dash ones would be perfect, avoiding people looking at a key and not understanding why the “q” was typing an “a” (dashes might be confusing too, but for other reasons :P).

So why am I not using them? Because these extra keycaps are the only thing in my order that wasn’t of good quality. What do I mean by that? The keycaps are not all the same size and height, and I mean within each row. Each row has a different profile to adapt to finger placement, but all keycaps in a row are normally identical. The extra keycaps I received were not: some were a bit wider than others, others were a bit taller… Not by much, but enough to be annoying (and the defect is clearly visible, not just a sensation). The good news is that the default qwerty keycaps don’t have this problem and are perfectly uniform.

I was greatly disappointed by that (especially given their price)… I couldn’t use them, so I decided to use the default qwerty ones. Again, that’s not really an issue for typing as I don’t look at the keys, but it’s still frustrating. I need to contact them about it, as I’m hoping it is just bad luck and not the case for every set of extra dash keycaps.

The good

Otherwise, everything is great. The Kailh Box White switches are indeed close to Cherry MX Blues, and while I can feel a small difference, I like them just as much. The clicky noise is lovely as well – for me at least, maybe not for people around me :P.

The keyboard can be connected with a cable or wirelessly (as I bought that option). For a wireless connection, you can either use the low-latency radio frequency, which requires connecting a small dongle via USB, or connect the keyboard via bluetooth. On my desktop, I use either the cable connection or the wireless radio frequency (mostly to use the batteries and not charge them 100% of the time). On my laptop, I use the bluetooth connection.

I’m also planning to connect it via bluetooth to my phone, as the keyboard can be paired with multiple devices and you can switch from one device to another with keyboard shortcuts. Switching quickly can be extremely useful at times, for example to type a long text message on my phone and then connect it back to my computer. The battery level of the 2 parts is also accessible via a simple keyboard shortcut so you can avoid bad surprises. A nice touch :).

Even though the keyboard is a split keyboard, you can still connect the 2 pieces together and use it as a one-piece 60% keyboard. Why would you do this? Well, in case of limited space while traveling, or when using it on your lap. It’s also a good way to start with a split keyboard: keep it in one piece to get familiar with it before separating the 2 pieces.

I started with a small gap between the 2 pieces and now I’m using it with the 2 pieces widely apart, each being in front of my shoulders, meaning my wrists are naturally aligned with my forearms. The gap is big enough to put my mouse between the 2 pieces. I find it relaxing that way, but you can use it the way you like :).

I also love the travel case, classy and robust, perfect for protecting the keyboard on the move. It is big, though, and thicker than my laptop, so you need space when bringing it with you.

Even though I purchased the tenting option, I’m not using it yet. I didn’t want to risk the change being too big on day one, so I first moved the 2 pieces apart little by little. Now that they are at the “right” position (for me), I may start testing the tenting, again little by little. Not sure if I will like it or not, but it’s supposed to be better for my wrists too, so at least I’m going to give it a try.

Finally, the configuration software (Bazecor) is simple to familiarize yourself with. There are lots of helpful indications, so you can avoid searching online and become autonomous quickly. It is also packaged as an AUR package, so installing it on Arch Linux took one command and a few seconds. The only disappointing part is that I cannot configure the keyboard via bluetooth even though it is supposed to be possible. It works while connected via RF or cable though.

My current configuration

You can configure multiple layers on the keyboard (not sure if 9 is the maximum or if you can add more) to manage different layouts and shortcuts depending on your needs. In my case, I’m using “only” 3 layers (at least for now).

layer 1 (default)

The default layer is my layer 1, which is mostly a default azerty keyboard:

Figure 2: Image showing the layer 1 (default) of my Dygma Raise 2 keyboard

The main differences with a “normal” azerty keyboard are (notice that the usual “big space bar” is split into 4 keys here):

  • the left space on the left part: space when tapped, but super (the window key) when held. The main reason is that I use super+number to switch workspaces in i3wm (my window manager), and this is more “thumb friendly” than the usual place between the control and alt keys
  • the “backspace” below the space on the left piece: a shortcut for control+backspace to delete one word instead of one character like “just backspace” does (which is still there in the usual top-right place)
  • the right space of the right piece: space when tapped, altgr when held. Same reason as super on the left: it is more thumb friendly in this place, and altgr is very useful on an azerty keyboard
  • The blue keys, from left to right: when tapped: left, down, up, right (vim motion <3). When double tapped: home, page down, page up, end
  • the green button of the left piece: switches to layer 2 permanently (meaning until I switch back)
  • the green button of the right piece: when held, activates layer 3. When tapped, activates layer 3 for 1 key (meaning after pressing one key in layer 3, it goes back to layer 1).

layer 2

Figure 3: Image showing the layer 2 of my Dygma Raise 2 keyboard

  • The top number keys become the F1 to F12 keys
  • zqsd are used for arrows (like wasd in qwerty)
  • on the left side, special keys (the ones in white in the picture) stay the same, all letter keys are inactive (“No Key”)
  • on the right side, the yellow keys move the mouse cursor and the purple ones are either left/right click or mouse wheel scroll. I’m not using them much yet, but I need to practice them so I can avoid leaving the keyboard even more, if possible
  • Blue keys to the right of the mouse ones are the usual insert, home, page up, del, end, page down. I’m not using those much because double tapping the blue keys of layer 1 does the same thing
  • Blue keys below are the arrows again. I have them twice so I can use them with either hand, which can be useful for different hand positions in games
  • Green button of the left side goes back to layer 1
  • Green button of the right side goes to layer 1 as well

I feel that layer 3 is underused, but I believe I’ll put more useful shortcuts there over time.

Layer 3

Figure 4: Image showing the layer 3 of my Dygma Raise 2 keyboard

  • The 3 yellow keys are my shortcuts for screenshots, from left to right: select an area of the screen to screenshot, screenshot the current window, screenshot the entire screen(s)
  • The blue key allows switching to another bluetooth device (if multiple are connected at the same time)
  • the green one shows the battery level of the 2 sides
  • the white one toggles the key LEDs (e.g. if I want to save battery)
  • “TRANS” means transparent, meaning the key will do what it is supposed to do in the default layer

Conclusion

As said in the intro, I really love this keyboard! Past the dash keycap disappointment, everything is perfect and the switch has been fast and painless. The initial configuration took a couple of hours, but after that I only changed a couple of minor things, in a couple of minutes each time, thanks to the great configuration tool.

I still have to try the tenting thing, but even without it I already feel a difference from having my shoulders, forearms, and wrists aligned, so mission already accomplished.

While the Dygma Raise 2 is expensive and not everyone can afford a $500 keyboard, I don’t regret the investment. I’m hoping it will last as long as my Wooting one, which I used for 7 years (and could still use, as it still works perfectly). If this one is as good and lasts 10 years, it would be a $50/year investment, which wouldn’t be that bad considering the amount of time I spend typing on it per day :). Yes, I know, it is a poor attempt to rationalize such a crazy investment… :D.

Feel free to ping me if you have any questions about it :-).
