Generate Dynamic Images with NodeJS
How to use NextJS and NodeJS to generate dynamic images for web and social media covers with the help of Node Canvas.
I've been working recently on a feature for my blog: generating dynamic images for Open Graph (OG) images and fallback covers for my articles and pages. After a lot of effort, I finally got it to work.
I've spent countless days, from morning to night, on this feature. I've lost track of how many times I've searched and debugged errors by reading numerous GitHub issues and documentation pages, and I've asked every LLM chatbot to understand the problems. I almost gave up on this feature, but that's the life of a software nerd. 😅
Let's explore what we aim to build.
Our goal is to generate dynamic images for blog post covers. We'll start with an empty canvas, add some images and text, and make the title dynamic using `node-canvas`. This tool allows us to create a canvas with all the necessary elements, and we can retrieve the content as a buffer to return via the API.
This solution is part of a blog built with NextJS 15 App Router and hosted on Vercel. The API route on Vercel runs as a serverless function (utilizing AWS Lambda under the hood). We'll set up a `GET` route that accepts a parameter (let's call it "name"). This route will invoke our image generator TypeScript file, pass the extracted "name" to it, retrieve the buffer, and return it with the header `Content-Type: image/png`. The caller will then receive an image. The outcome will be accessible by calling `/og/some title` in an HTML image element or OG metadata, which will display our generated image.
For further optimization, we might implement caching by storing the created images in a persistent form, such as a CDN or AWS S3, and only generate a new image if one with the same name doesn't already exist.
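As a rough sketch of that caching idea (the in-memory `Map` is only a stand-in for a persistent store like S3 or a CDN, and all the names here are illustrative, not part of the article's code):

```typescript
// Hypothetical caching wrapper: generate an image only if no cached copy
// exists for that name; otherwise return the stored buffer.
type Generate = (name: string) => Promise<Buffer>;

function withCache(generate: Generate): Generate {
  const cache = new Map<string, Buffer>(); // swap for a CDN or S3 in production

  return async (name: string) => {
    const hit = cache.get(name);
    if (hit) return hit; // cache hit: skip regeneration
    const buffer = await generate(name);
    cache.set(name, buffer);
    return buffer;
  };
}
```

A persistent store would survive serverless cold starts, which an in-memory map does not; that is the main reason to prefer a CDN or S3 here.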
Let's dive into the work.
Prerequisites
We will need NodeJS, of course, and a NextJS project for the router (though you can use ExpressJS). However, since we want this feature to be part of a blog project, I will create it as a NextJS API route.
- NodeJS (I used version 20)
- A NextJS 15 project
- Knowledge of the Canvas API
Setup
Let’s create a NextJS application first. Use the following command to create it:
```shell
npx create-next-app@latest --use-pnpm
```
I chose to work with TypeScript, ESLint, App Router, PNPM, and no TailwindCSS. After creating the project, navigate to the project directory and open your editor.
We will focus only on the API part, so we won't need any styling or display, except for an image element to test our API. Therefore, we will remove all the files except `layout.tsx` and `page.tsx`.
Update the `app/layout.tsx`:
```tsx
import type { Metadata } from "next";

export const metadata: Metadata = {
  title: "Create Next App",
  description: "Generated by create next app",
  openGraph: {
    images: "/og/My App", // to test the OG image loading
  },
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body>{children}</body>
    </html>
  );
}
```
Update `app/page.tsx`:
```tsx
import Image from "next/image";

export default function Home() {
  return (
    <div>
      <h2>My Image</h2>
      <Image width={600} height={315} src="/og/My App" alt="cover" />
    </div>
  );
}
```
Nothing special until now. All the magic will happen in the API Router.
Create API Route
In NextJS 15 App Router, we create an API route using `route.ts`. This indicates to NextJS that it's an API route, as opposed to `page.tsx`, which signifies a rendered page.
Let’s create a folder named `og` inside the `app` folder, following this structure:
```
.
└── app/
    ├── og/
    │   └── [name]/
    │       └── route.ts
    ├── layout.tsx
    └── page.tsx
```
This structure will generate an API route at `/og/something`.
Let’s create a dummy (for now) `GET` request handler inside `route.ts`:
```typescript
export function GET() {
  return new Response(JSON.stringify({ message: "Hello from API" }), {
    headers: {
      "Content-Type": "application/json",
    },
  });
}
```
Now, if we visit `http://localhost:3000/og/something`, we will get this response:
To access the passed parameter (“name”), we need to access the `params` property. Since we’re using version 15 of Next, the props parameter is now a Promise.
Let’s update our code:
```typescript
export async function GET(
  _: Request,
  props: { params: Promise<{ name: string }> }
) {
  const params = await props.params;
  return new Response(
    JSON.stringify({ message: "Hello from API", name: params.name }),
    {
      headers: {
        "Content-Type": "application/json",
      },
    }
  );
}
```
If we call `GET http://localhost:3000/og/here`, we will receive this response.
We now have the NextJS router part set up. Let’s proceed to create our image.
NodeJS Canvas
The result of our work will be like this:
This image was created using the very technique we are building.
Before we start, we need to install `node-canvas`:

```shell
pnpm add canvas
```
And that’s all we will need.
Now, let’s create our generator file.
Generator Class
I’m a fan of OOP, so I work with classes and Object-Oriented Programming patterns all the time. Let’s create the generator file:
```typescript
// app/og/[name]/generator.ts

export class Renderer { // Name it whatever you want
}
```
But first, what will we need? From the picture you saw previously 👆, we have three pictures, a colored background, and some text. So, we will need to:

1. Create a canvas
2. Color the background
3. Add the text content
4. Add the images
5. Convert the canvas into a buffer and return it
6. Load the custom font (if you want to use Arial or web-safe system fonts, you can skip this part)
Let’s convert all this into code following the OOP ideology.
- Create a canvas: `constructor()` initiates the `Canvas` and the `2D Context`
- Color the background: `paintCanvas()`
- Add the text content: `addTitle()`, `addWebsite()`, `addAuthorName()`
- Add the images: `addLogo()`, `addAuthorPicture()`, `addDecorationImage()`
- Convert the canvas into a buffer and return it: `save()`
- Load the custom font: `loadFont()`
Most of these functions will be private; the external code will only access one method that handles all the internal work. Let’s call it `draw()`.
Let’s update our `Renderer` class:
```typescript
import { Canvas, CanvasRenderingContext2D } from "canvas";

export class Renderer {
  private ctx: CanvasRenderingContext2D;
  private canvas: Canvas;
  private width = 1200;
  private height = 630;

  /**
   * Initiate the Renderer and Canvas instance
   * @param title The dynamic title of the image
   */
  constructor(private title: string) {
    const canvas = new Canvas(this.width, this.height);
    this.ctx = canvas.getContext("2d");
    this.canvas = canvas;
    this.ctx.imageSmoothingEnabled = true; // for better image quality
  }

  /**
   * This is public to be called from external code.
   * @return Returns the buffer of the generated image as a Promise
   */
  public async draw(): Promise<Buffer> {}

  // The _ prefix is just a convention to signal a private method
  private _save() {}
  private _paintCanvas() {}
  private _addTitle() {}
  private _addWebsite() {}
  private _addAuthorName() {}
  private _addLogo() {}
  private _addAuthorPicture() {}
  private _addDecorationImage() {}
  private _loadFont() {}
}
```
TypeScript will complain about `public async draw(): Promise<Buffer> {}`; just ignore it for now 👌 (we will add the return statement later).
Paint Canvas
Let’s create a rectangle that covers all the canvas with our preferred color:
```typescript
private _paintCanvas(): void {
  this.ctx.beginPath();
  this.ctx.rect(0, 0, this.width, this.height);
  this.ctx.fillStyle = "rgba(32, 68, 57, 1)";
  this.ctx.fill();
}
```
Add Text content
```typescript
/**
 * Add the title text in the center of the canvas
 */
private _addTitle() {
  this.ctx.font = '64px "Urbanist bold"'; // I use a custom font. It will need to be registered to work.
  this.ctx.textAlign = "center";
  this.ctx.textBaseline = "middle";
  this.ctx.fillStyle = "white";
  this.ctx.fillText(this.title, this.width / 2, this.height / 2);
}

/**
 * Add the website name in the bottom right corner.
 * You can make it variable, but for simplicity it will be hard-coded.
 */
private _addWebsite() {
  this.ctx.font = '20px "Urbanist"';
  this.ctx.textAlign = "end";
  this.ctx.textBaseline = "bottom";
  this.ctx.fillStyle = "white";
  // We will leave a margin of 80px on the right and 50px on the bottom
  this.ctx.fillText("mehdijai.com", this.width - 80, this.height - 50);
}

private _addAuthorName() {
  this.ctx.font = '20px "Urbanist"';
  this.ctx.textAlign = "start";
  this.ctx.textBaseline = "bottom";
  this.ctx.fillStyle = "white";
  // The left margin needs to take into account the author image that will be added.
  // You can tweak it as you desire.
  this.ctx.fillText("Mehdi Jai", 120, this.height - 50);
}
```
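One thing `_addTitle` does not handle is a long title overflowing the 1200px canvas. A hedged sketch of one way to deal with that, assuming a `measure` callback that stands in for `ctx.measureText(text).width` (this helper is an illustration, not part of the article's class):

```typescript
// Hypothetical helper: break a title into lines that each fit maxWidth,
// so each line can then be drawn with its own fillText call.
function wrapTitle(
  title: string,
  maxWidth: number,
  measure: (text: string) => number
): string[] {
  const words = title.split(" ");
  const lines: string[] = [];
  let current = "";
  for (const word of words) {
    const candidate = current ? `${current} ${word}` : word;
    if (measure(candidate) <= maxWidth || !current) {
      current = candidate; // still fits (or a single word wider than the limit)
    } else {
      lines.push(current); // line is full, start a new one
      current = word;
    }
  }
  if (current) lines.push(current);
  return lines;
}
```

Each returned line would then be drawn with a vertical offset around the canvas center instead of a single centered `fillText`.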
Call these functions inside the `draw()` method:
```typescript
public async draw(): Promise<Buffer> {
  this._paintCanvas();
  this._addTitle();
  this._addWebsite();
  this._addAuthorName();
}
```
Handling Images
This one is a bit tricky. I tried two ways:

- Load from a URL (remote or CDN)
- Load files from the local file system

We need to take into consideration that the app will be built and the file structure will change, especially since the API route is a serverless function.
How I handled the images:

1. Load the files from the `public` directory using `process.cwd()` (the way recommended by Vercel)
2. Convert the content buffer to a base64 data URL
3. Load the image using the `loadImage` function provided by `node-canvas` (the optimized way)
4. Draw the image on the canvas
We will need two extra helper methods (or you can create them in a separate file like `utils` or `lib`):

- `getImageBase64Data(fileName: string)`: The files will be in the same directory, so we only need the file name with its extension
- `getMimetype(extension: string)`: Get the MIME type for the data URL
Let’s add these functions at the end of our class:
```typescript
// Requires: import { join } from "path"; and import { readFileSync } from "fs";

private _getMimetype(extension: string): string {
  const map: Record<string, string> = {
    svg: "image/svg+xml",
    png: "image/png",
    jpeg: "image/jpeg",
    jpg: "image/jpeg",
    webp: "image/webp",
  };
  if (extension in map) {
    return map[extension];
  } else {
    return map.png;
  }
}

private async _getImageBase64Data(fileName: string): Promise<string> {
  const parts = fileName.split(".");
  const extension = parts[parts.length - 1];
  const mime = this._getMimetype(extension);
  const filePath = join(process.cwd(), "public", fileName);
  const fileContent = readFileSync(filePath);
  const base64Data = fileContent.toString("base64");
  return `data:${mime};base64,${base64Data}`;
}
```
These are basic utility functions: they read the file content and convert it into a base64 data URL string. With them in place, we can get the file sources; now we only need to load the images and draw them on the canvas.
Adding Images
```typescript
/**
 * Adds the logo in the center top of the canvas.
 */
private async _addLogo() {
  const source = await this._getImageBase64Data("logo.svg");
  const logo = await loadImage(source); // loadImage imported from "canvas"
  this.ctx.drawImage(logo, this.width / 2 - 15, 50);
}

/**
 * Add the decoration wave SVG in the center at 60% of its size with 30% transparency
 */
private async _addDecorationImage() {
  const source = await this._getImageBase64Data("wave.svg");
  const wave = await loadImage(source);
  // Handle the size shrinking
  const width = wave.width * 0.6;
  const height = wave.height * 0.6;
  this.ctx.globalAlpha = 0.7; // to reduce the image opacity
  this.ctx.drawImage(
    wave,
    this.width / 2 - width / 2,
    this.height / 2 - height / 2,
    width,
    height
  );
  this.ctx.globalAlpha = 1; // back to the default value, otherwise everything after this will have an opacity of 0.7
}

/**
 * Create the author image.
 * The image is square or rectangular by default,
 * so we will make it a circle by clipping it with a circle path.
 */
private async _addAuthorPicture() {
  const source = await this._getImageBase64Data("thumbnail.png");
  const pic = await loadImage(source);
  const x = 80;
  const y = this.height - 60;
  const imgSize = 50;
  this.ctx.save(); // save the context state so restore() can undo the clip
  this.ctx.beginPath();
  this.ctx.arc(x, y, imgSize / 2, 0, 2 * Math.PI, false); // Create the clipping circle
  this.ctx.strokeStyle = "rgba(19, 51, 41, 1)";
  this.ctx.fillStyle = "rgba(19, 51, 41, 1)";
  this.ctx.stroke();
  this.ctx.fill();
  this.ctx.clip();
  const aspect = pic.height / pic.width; // to keep the image proportional
  this.ctx.drawImage(
    pic,
    x - imgSize / 2,
    y - imgSize / 2,
    imgSize,
    imgSize * aspect
  );
  this.ctx.restore(); // Close the clipping
}
```
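The scale-and-center arithmetic in `_addDecorationImage` can be pulled out into a pure helper, which makes the coordinates easy to verify in isolation (this function is an illustration, not part of the article's class):

```typescript
// Scale an image by a factor and compute the x/y that centers it on the canvas.
function centeredRect(
  imgWidth: number,
  imgHeight: number,
  canvasWidth: number,
  canvasHeight: number,
  scale: number
): { x: number; y: number; width: number; height: number } {
  const width = imgWidth * scale;   // shrunken image width
  const height = imgHeight * scale; // shrunken image height
  return {
    x: canvasWidth / 2 - width / 2,   // horizontal centering
    y: canvasHeight / 2 - height / 2, // vertical centering
    width,
    height,
  };
}
```

For example, a 1000×500 wave on our 1200×630 canvas at 60% scale yields a 600×300 rectangle placed at (300, 165), matching the arithmetic inside the `drawImage` call above.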
Let’s update the `draw` function:
```typescript
public async draw(): Promise<Buffer> {
  this._paintCanvas();
  this._addTitle();
  this._addWebsite();
  this._addAuthorName();
  await this._addLogo(); // make sure to await
  await this._addDecorationImage(); // make sure to await
  await this._addAuthorPicture(); // make sure to await
}
```
Save the Canvas
```typescript
/**
 * Convert the canvas to a buffer of type JPEG. (You can use PNG.)
 * @returns {Buffer} image buffer
 */
private _save() {
  return this.canvas.toBuffer("image/jpeg", { quality: 1 });
}
```
And finally:
```typescript
public async draw(): Promise<Buffer> {
  this._paintCanvas();
  this._addTitle();
  this._addWebsite();
  this._addAuthorName();
  await this._addLogo();
  await this._addDecorationImage();
  await this._addAuthorPicture();
  return this._save();
}
```
Link the generator with the API
First, make sure you have the pictures in the `public` folder with the exact names.
Our generator is now ready (almost: if you don’t have “Urbanist” installed on your system, you will get squares instead of letters).
Now, let’s head to our API route:
```typescript
// app/og/[name]/route.ts
import { Renderer } from "./generator";

export async function GET(
  _: Request,
  props: { params: Promise<{ name: string }> }
) {
  const { name } = await props.params; // Fetch the name from params
  const _renderer = new Renderer(name); // Initiate the generator and pass the title
  const buffer = await _renderer.draw(); // Returns the image Buffer
  return new Response(buffer, {
    headers: {
      "Content-Type": "image/jpeg", // Make sure the content type is correct
    },
  });
}
```
Now, head to your browser and hit `http://localhost:3000/og/here`. The result will be 👇
Loading fonts
On my machine, I already have “Urbanist” installed. But on the VPS or deployment host, you won’t. So, we need to load the font into the canvas using the `registerFont` method provided by `node-canvas`.

To do so, we need the `.ttf` file in a local folder (it must be a TTF file; I tried WOFF2, but it didn’t work).

For “Urbanist”, I found the TTF files in this repository. You can download the files you need. In my case, I use two weights: ExtraBold and Medium.
Register Font
Download the files you need and put them inside the `/public/fonts` folder.

We’ve already declared the `_loadFont` method; let’s implement it now:
```typescript
private _getFont(name: string) {
  return join(process.cwd(), "public", "fonts", name);
}

private _loadFont() {
  // registerFont imported from "canvas"
  registerFont(this._getFont("urbanist-medium.ttf"), {
    family: "Urbanist",
  });
  registerFont(this._getFont("urbanist-extra-bold.ttf"), {
    family: "Urbanist bold",
  });
}
```
Now update the constructor to call the load font method:
```typescript
constructor(private title: string) {
  this._loadFont(); // ADD THIS (node-canvas requires fonts to be registered before the canvas is created)
  const canvas = new Canvas(this.width, this.height);
  this.ctx = canvas.getContext("2d");
  this.canvas = canvas;
  this.ctx.imageSmoothingEnabled = true; // for better image quality
}
```
We’ve now set up everything. Let’s check the result:
You’ll notice the title is bolder now!
Check the Image element and OG
Remember the `<Image>` element inside the `page.tsx` component? Check it now:
Everything looks fine now. Let’s deploy this!
Deployment
We will deploy this project to Vercel using the GitHub repository integration. Create a repository for your code and create a new project on Vercel with that GitHub repository.
It will deploy automatically, and re-deploy whenever a change happens to the main branch.
You can check the live demo version here: Demo Link
And of course, the GitHub repository of the whole project is available too:
Next Dynamic Image GitHub Repo
Summary
During the building of this feature, I’ve learned a lot of techniques and concepts. As I mentioned at the beginning, it was a tedious feature to deploy. I tried to use AWS Lambda, only to find that Lambda Functions can’t handle native dependencies. Vercel uses it under the hood but employs a custom layer to manage the native bindings.
For further optimizations, you can implement caching to store created files and fetch existing images instead of regenerating them, which helps reduce processing usage.
I plan to write another article (Part 2) to address this caching mechanism and rate limits for the API endpoint.
In conclusion, generating dynamic images with NodeJS, particularly for Open Graph images and blog post covers, can significantly enhance the visual appeal and shareability of your content. By leveraging tools like `node-canvas` within a NextJS framework, you can create customized images that dynamically incorporate text and graphics. This process involves setting up an API route, creating a canvas, and rendering images and text onto it. While the initial setup may be challenging, the benefits of having personalized and automatically generated images are substantial. Additionally, deploying this feature on platforms like Vercel ensures scalability and ease of integration. Future enhancements could include implementing caching mechanisms to optimize performance further. This project not only improves the aesthetic quality of your blog but also provides valuable learning experiences in serverless functions and image processing.
The same generator can be utilized in Static Site Generation (SSG), where images are created and stored during the build process. Instead of using an API route, the generated images are saved directly in the public directory at build time. This approach allows for efficient image handling and reduces server load, as the images are pre-generated and readily available for use without the need for dynamic generation on each request.