Extending Create React App with Babel Macros

I have a co-dependent relationship with Create React App. When we’re good, we’re really good. When I have an unmet need — when I want to customize some part of the build process — I do my best to make CRA work. And when it doesn’t, for whatever reason, I eject and fall back to Webpack, Babel, & friends, resentful that CRA has let me down. Recently, though, I discovered that you can customize Babel configuration in a Create React App project without ejecting.

Last month I was working on some shared React components and ran into this again: I wanted to use Tailwind CSS — fine — but I also wanted to include it in the resulting Javascript files as CSS-in-JS​*​. I initially despaired, thinking I’d have to eject the components in order to customize the Babel configuration.

And then I discovered Babel Macros, which — lo and behold — CRA has supported since 2018!

Babel Macros are exactly what they sound like if you’re familiar with Lisp-y languages: they’re code that generates different code. In other words, they give you a functional interface to Babel’s transpiling capabilities. This allows you to write “normal” Javascript (or Typescript) code that CRA can process, but at build time the macro hooks into Babel and rewrites that code.

For my Tailwind CSS-in-JS setup, it looks like this.

First, I tell Babel (and by extension CRA) that I want to enable macros, by adding macros to the list of plugins in my .babelrc:

{
  "presets": ["@babel/env", "@babel/react", "@babel/typescript"],
  "plugins": [
    "@babel/proposal-class-properties",
    "@babel/syntax-object-rest-spread",
    "macros"
  ]
}

Then, when I want to use Tailwind-in-JS, I import the macro and use it to tag a string.

import tw from "@tailwindcssinjs/macro";

...

// in my react component
return (
    <div
      style={tw`font-normal text-sm text-gray-700`}
    >...</div>
);

Note that I’m setting what looks like the Tailwind class list as the style property of my element: that’s because tw is actually a function that utilizes Babel’s macro functionality to map the classes to their underlying CSS properties.
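To make that concrete, here’s roughly what the macro leaves behind after the build (a sketch — the exact values come from your Tailwind config, and these are the default-palette values). The tagged template disappears entirely; only a plain style object remains:

```javascript
// Approximately what tw`font-normal text-sm text-gray-700` compiles to.
// The property values below assume Tailwind's default theme.
const style = {
  fontWeight: 400,        // font-normal
  fontSize: "0.875rem",   // text-sm
  lineHeight: "1.25rem",  // text-sm also sets a line-height
  color: "#374151",       // text-gray-700 (default palette)
};
```

There’s no runtime dependency on Tailwind at all — the mapping happens once, during compilation.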

With this small bit of configuration in place, running the CRA build script results in pure Javascript I can use in my downstream projects, including the necessary CSS.

There are other advantages, too: someone reading this code can now “follow their nose” to figure out what’s going on. One of the most persistent problems I’ve encountered when approaching a large codebase is understanding how the source is built: where does a dependency come from? how is the code compiled? where — why!? — does a transformation happen? This component now answers those questions for me: the use of Babel (and the macro) is explicit.


  * There’s probably another post here: getting shared components to work with external CSS has been a real pain for me.

The Habit

A mentor, Naomi, once told me “it’s more fun to write programs that help you write programs than it is to write programs”. While funny — and true, at least for me — I think what she was getting at is something a little more general: meta-work is a way of distracting ourselves from the real work. Or, more nefariously, meta-work is a way of feeling like we’re accomplishing something when we’re standing still (with respect to our goals).

This has been coming up for me recently around blogging and blogging software. I had a really good blogging habit for a while, and it served me well. Arguably I have my career because of it. Today when I read something interesting and want to comment, amplify, or rebut part of it, my thoughts still go to publishing. But I’ve lost the habit, the muscle, so instead I dither. I focus on how much I pay for WordPress hosting​*​. I view-source and look for tell-tale signs of what tool another author is using. I familiarize myself with headless CMSes, flat files, and “IndieWeb” standards. What I do not do is write.

Almost inevitably two things happen. First, I run out of time: I have to get back to work, I have to make dinner, etc. And second, when I finally return to the open browser tabs — now 90% meta, 10% what I wanted to reflect on — I say to myself, “why the fuck not wordpress? it’s not like it’s done anything to you in the past; yeah, it’s not new, and it’s not written in a language you’re enamoured with today, but it’s not like you have time to hack on it anyway. And if you did, you could do a lot worse than plugging into an ecosystem that big. Suck it up.” And then I close the tabs because I’m annoyed with how much time I’ve plowed into not-writing.

Later me is right: writing my own blogging software is in no way a good use of my time right now; I am not not-writing because of the software; using Jekyll Hugo Pelican Plume will not suddenly cause me to blog; making POSSE work is not a way to rebuild the blogging muscle; my theme is not my problem; post formats do not matter; Gutenberg didn’t reduce my likelihood to post; the list goes on.

Sometimes meta-work is an interesting, high-leverage way to approach things: there have been times I wrote a program that helped me write a program, and the result was doing something in a few days that we thought would take a few weeks​†​. The difference is, in those cases, I was already actively using the mental muscles I needed, I was already in “the habit”. When it comes to writing, making art, or sewing, the habit is really all that matters.


  * I pay because a few years ago I was trying to get myself out of a similar rut, this time around what VPS to use and how to secure it. That time I told myself, “it doesn’t fucking matter, just pay to have someone else deal with this, and find a managed service.” And I’ve resented them ever since, despite the great service they provide. Probably because I’m not using the service I pay for.
  † For example, the time I wrote a codemod using jscodeshift that added sane security defaults to Lob’s entire API codebase.

If you want to succeed as an artist, make as much work as possible. That is the secret sauce. Regardless of whether you are after commercial success or simply want to improve in your craft, the answer is the same, make more work.

How To Become A Successful Artist

I spent most of last week trying to deal with state in a React app; I wound up using my “secret weapon”: Contexts. Recoil looks like an interesting alternative.

An interesting write-up of the Haiku Vector Image Format; what particularly stood out to me, though, was this description of vector formats:

One of the big advantages of vector formats is that they’re easy to resize; because you know what the shape is supposed to be (i.e. a circle), you can render it very small or very large without distorting it.

I’ve always thought of them in terms of math vectors, and while that’s accurate, it doesn’t really capture why we might prefer vectors to bitmaps.

Perhaps unsurprisingly, the “why” is more interesting.

Using ECR images with Elastic Beanstalk and CodeBuild

As I mentioned previously, I’ve been using CodePipeline and CodeBuild for continuous integration and delivery on a new project. For the most part I’ve been happy with it, but ran into an issue last week that took some experimenting to figure out.

In addition to CodePipeline and CodeBuild, we’re also using Elastic Beanstalk for deploying our application. I first experimented with Elastic Beanstalk when it was released, and at the time it had quite a few opinions about how to build an application. When I took another look a few months ago, though, I found that it has grown up considerably. The default model is still dumping your code on an EC2 host and managing the autoscaling group, but it also supports deploying arbitrary Docker images — both single images and ECS clusters. The ability to use CodeBuild with Elastic Beanstalk means you can utilize the same CI pipeline regardless of whether you’re deploying as the result of a webhook or as a one-off from the command line.
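For the single-image case, you hand Beanstalk a Dockerrun.aws.json describing the image to run. A minimal sketch (the registry path and port here are placeholders, not our actual values):

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "3000" }
  ]
}
```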

This was working great until we started using a custom Docker image, hosted in the Elastic Container Registry (ECR), for our CodeBuild jobs. When we moved to that image, one-off builds broke with an error indicating CodeBuild couldn’t pull the Docker image from ECR. (“Perhaps you need to login?”, the error message helpfully suggested. Yes, perhaps.) This was pretty confusing, since I was able to confirm that both one-off and pipeline builds were using the same IAM Role, which had permission to fetch images from ECR (and was working from the pipeline). The AWS documentation enumerates the permissions you need in order to use ECR with CodeBuild, and indeed, that role had the proper permissions assigned.

As I experimented, though, I discovered that the issue was not the CodeBuild role, but rather the ECR Repository Policy. When executing CodeBuild as the result of eb deploy, the ECR repository must be configured to allow image pulls from the CodeBuild service. I suspect this is because eb deploy doesn’t execute as your existing CodeBuild project: it creates a new one with the source and output set to S3, runs the build, and tears it down after collecting the artifacts.

I was able to apply the policy with the following Terraform fragment:

data "aws_iam_policy_document" "ecr_policy" {
  statement {
    sid    = "CodebuildPullPolicy"
    effect = "Allow"
    actions = [
      "ecr:GetDownloadUrlForLayer",
      "ecr:BatchGetImage",
      "ecr:BatchCheckLayerAvailability",
    ]

    principals {
      type = "Service"
      identifiers = [
        "codebuild.amazonaws.com",
      ]
    }
  }
}

resource "aws_ecr_repository_policy" "build" {
  repository = "${aws_ecr_repository.build.name}" # the name of your ECR repository
  policy = "${data.aws_iam_policy_document.ecr_policy.json}"
}
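For reference, this is the raw JSON policy document the fragment above attaches to the repository — useful if you’re applying it through the console or CLI instead of Terraform:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CodebuildPullPolicy",
      "Effect": "Allow",
      "Principal": { "Service": "codebuild.amazonaws.com" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```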

Once this was applied, Pipeline and command line builds both were able to pull the image.

Continuous Integration with CodeBuild and CodePipeline

Last week I started on a new project — venture, perhaps — which means I’m in the thick of figuring out all the things you take for granted when you join an established project. I’m trying to balance pragmatism and perfectionism: for example, I don’t need to scale to 1MM users yet (I probably don’t need to scale to 10 yet), so spinning up a Kubernetes cluster doesn’t make sense. And, I’m not an animal, so it’s not like I’m going to SSH into a machine and build it by hand. So I’m trying to look at it from the perspective of “what do we need for the next 3 months?”

One of the questions was about CI/CD. I’ve previously used Travis and CircleCI, and they’re both fine. They work, they have the features you want, they have outages when you curse them, etc. As I was looking around, though, I came across the AWS offerings: the confusingly named CodeBuild and CodePipeline. (There’s also a CodeDeploy; who could say why you’d choose a Deploy over, say, a Pipeline?)

It took me a while to figure out how the two are different, and how they work together. My present understanding is that CodeBuild provides “continuous integration”, and CodePipeline provides “continuous delivery”. In my past experience these have been squished together into a single thing. Which is fine, except that they’re not quite the same thing.

Continuous Integration (CI) ensures that your code builds, tests pass, etc whenever you make a change. In other words, it ensures that your change integrates with the existing code base.

Continuous Delivery (CD) takes your code and gets it onto servers. This may involve deploying Docker containers, pushing Serverless code to Lambda, etc.

In the CodeBuild / CodePipeline world, the Build is defined in the source repository (much like the Circle or Travis configuration I’ve seen before), and the Pipeline is configured as infrastructure, outside of the code. I’m using Terraform, which means I can also treat this as code, albeit in a different language.
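To give a flavor of the infrastructure side, here’s a trimmed sketch of a two-stage Pipeline in Terraform. The names, the GitHub source, and the referenced role, bucket, and build project are all placeholders; a real source stage also needs credentials (e.g. an OAuth token), omitted here:

```hcl
resource "aws_codepipeline" "app" {
  name     = "app-pipeline"
  role_arn = "${aws_iam_role.pipeline.arn}"

  artifact_store {
    location = "${aws_s3_bucket.artifacts.bucket}"
    type     = "S3"
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["source"]

      configuration {
        Owner  = "my-org"
        Repo   = "my-app"
        Branch = "master"
      }
    }
  }

  stage {
    name = "Build"

    action {
      name             = "Build"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      version          = "1"
      input_artifacts  = ["source"]
      output_artifacts = ["build"]

      configuration {
        ProjectName = "${aws_codebuild_project.app.name}"
      }
    }
  }
}
```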

The Build is defined by a buildspec.yml file, which defines commands to run for each phase. Unlike CircleCI, where you can define arbitrary steps, the phases here are fixed: install, pre_build, build, post_build. The build spec also defines the artifacts that the build produces, and the artifacts are what downstream processes can consume.
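A minimal buildspec.yml along those lines (the commands and artifact paths are illustrative, not our actual project’s):

```yaml
version: 0.2

phases:
  install:
    commands:
      - npm ci
  pre_build:
    commands:
      - npm run lint
  build:
    commands:
      - npm test
      - npm run build

artifacts:
  files:
    - 'build/**/*'
```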

We’re only two weeks in, but one thing that I liked about CodeBuild is that the pricing required very little thought: you pay per minute, regardless of how many things you run. This is in contrast to CircleCI, where you have to think about parallelism: it’s free for one-at-a-time, but after that you pay per parallel build. More specifically, you pay per potential parallel build.

In our case the Pipeline takes the artifact from CodeBuild, and deploys it to Elastic Beanstalk. I suspect Beanstalk is the first thing we’ll throw away, infrastructure-wise, but it’s nice for now: we’re able to configure an EC2 instance with everything running on it we need, and scale that up behind an ALB. Because we’re trying to describe all of our infrastructure with Terraform, the hardest part was getting it to use an ALB rather than a classic ELB. This required an Option to be defined in the Environment declaration:


setting {
  namespace = "aws:elasticbeanstalk:environment"
  name      = "LoadBalancerType"
  value     = "application"
}

Overall I’m pretty happy with CodeBuild and CodePipeline. We’ve decided to re-evaluate early decisions in a month or so, and I’ll post an update about whether we stick with it then.