GitHub API Commit Flow

This blog has no storage. That is because I host it on Heroku and, however easy Heroku might be, it has no simple solution for storage.

By storage I mean hard-drive storage; database storage is not an issue. I want to be able to upload my screenshots for articles without hassle and without pushing them into a Postgres database.

Instead I am using GitHub to host my image content, for example:

img

License:

Feel free to take and extend at will; the license is simply: don't do evil.

How does it work?

Basically I use GitHub as a CDN: GitHub Pages lets GitHub host my repository under my domain, cdn.openmindmap.org. All files in that repo are then reachable via https://cdn.openmindmap.org/content/XXXX.
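For anyone wanting to replicate the setup: GitHub Pages maps a custom domain via a one-line CNAME file in the repo plus a DNS record at the registrar. A sketch (the github.io hostname below is a placeholder):

```
# CNAME file in the repository root (one line, just the domain):
cdn.openmindmap.org

# DNS record at the domain registrar (illustrative):
cdn.openmindmap.org.  CNAME  <your-username>.github.io.
```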

The next problem was then updating that content. Locally I had created a flow that uploaded content and stored it on disk. That does not work on Heroku, so I needed to create a commit and push my content to GitHub instead.

I already have the upload interface - for obvious reasons the upload button isn’t active. It allows me to copy markdown snippets into my documents and to upload content. The Node-RED flow for CDN management is continuously being improved but available.

What I needed was a way to do a GitHub commit via the API, without local storage or a local copy of the repo. Stack Overflow engineering to the rescue. A quick look on npmjs.org suggested @octokit/rest was the package I needed.

However, after unsuccessfully trying to get it to work with the new fine-grained GitHub tokens (which are in beta), I reverted to using “raw” API requests with the “classic” GitHub developer token, so the package is basically overkill.
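A “raw” request of this kind needs little more than the right headers. A minimal sketch (the helper names and User-Agent string are my own, not part of the original flow):

```javascript
// Minimal "raw" GitHub REST API request with a classic personal access
// token -- no @octokit/rest required. Uses the global fetch of Node 18+.
const GITHUB_API = "https://api.github.com";

function githubHeaders(token) {
  return {
    "Authorization": "token " + token,         // classic-token auth scheme
    "Accept": "application/vnd.github+json",   // GitHub REST media type
    "User-Agent": "node-red-cdn-flow"          // GitHub rejects requests without a UA
  };
}

async function githubRequest(token, method, path, body) {
  const res = await fetch(GITHUB_API + path, {
    method: method,
    headers: githubHeaders(token),
    body: body ? JSON.stringify(body) : undefined
  });
  if (!res.ok) {
    throw new Error("GitHub API " + res.status + ": " + (await res.text()));
  }
  return res.json();
}
```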

GitHub Functionality flow

All functionality required for doing a commit to a GitHub repo without a local copy is encapsulated in the following flow:

It basically breaks the Stack Overflow answer down into one function node per step.
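The steps themselves map onto GitHub's Git Database API. A sketch of the request bodies each step sends (the builder function names are mine, purely illustrative):

```javascript
// A commit without a local clone boils down to five Git Database API calls:
//   1. GET   /repos/{owner}/{repo}/git/ref/heads/{branch}   -> current commit SHA
//      (plus GET on that commit to obtain its tree SHA)
//   2. POST  /repos/{owner}/{repo}/git/blobs                -> blob SHA for the file
//   3. POST  /repos/{owner}/{repo}/git/trees                -> new tree SHA
//   4. POST  /repos/{owner}/{repo}/git/commits              -> new commit SHA
//   5. PATCH /repos/{owner}/{repo}/git/refs/heads/{branch}  -> point branch at it
// The builders below sketch the JSON bodies for steps 2-5.

function blobBody(base64Content) {
  return { content: base64Content, encoding: "base64" };
}

function treeBody(baseTreeSha, filePath, blobSha) {
  return {
    base_tree: baseTreeSha,   // keep everything already in the repo
    tree: [{ path: filePath, mode: "100644", type: "blob", sha: blobSha }]
  };
}

function commitBody(message, treeSha, parentCommitSha) {
  return { message: message, tree: treeSha, parents: [parentCommitSha] };
}

function updateRefBody(commitSha) {
  return { sha: commitSha, force: false };
}
```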

To use that flow, I created a bunch of link-call nodes to reference the function nodes.

The upload flow is this:

That is all there is to how this works: file data is uploaded to the server, converted to base64, pushed to GitHub and then committed.
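The base64 step is the simplest part. In Node.js - and hence in a Node-RED function node, where an uploaded file arrives as a Buffer - it is a one-liner; GitHub's blob endpoint accepts base64-encoded content for binary files:

```javascript
// Convert uploaded binary file data (a Node.js Buffer) to the base64
// string that GitHub's blob endpoint expects.
function toBase64(fileBuffer) {
  return fileBuffer.toString("base64");
}

// Inside a Node-RED function node this is roughly:
//   msg.payload = msg.payload.toString("base64");
//   return msg;
```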

To obtain a list of all available image files, there is another flow:

It’s an example of getting a tree listing from a GitHub repo. Might be useful for other things.
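The listing amounts to one recursive tree request plus a filter over the response. A sketch - the content/ prefix and the extension filter are assumptions about the repo layout:

```javascript
// GET /repos/{owner}/{repo}/git/trees/{branch}?recursive=1 returns every
// entry in the repository. Filtering that down to image files under content/:
function listImagePaths(treeResponse) {
  return treeResponse.tree
    .filter(function (entry) {
      return entry.type === "blob" &&
             /^content\/.+\.(png|jpe?g|gif|svg)$/i.test(entry.path);
    })
    .map(function (entry) { return entry.path; });
}
```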

Corresponding screenshot:

img

Some Notes from the Far Side

Using this setup is great, but it is probably against some T&Cs, so it’s likely not a good long-term idea.

Also, while the upload and commit are fairly immediate, what isn’t immediate is GitHub’s deployment of the pages. This means images are not immediately available. It’s only a matter of a few minutes, but it should not be forgotten: the image list gets updated before the file is live, leaving a temporarily broken link.

Why haven’t I created a Node-RED node package? I haven’t found the time for it and probably won’t.

Last updated: 2023-07-27T11:01:50.588Z


The author is available for Node-RED development and consultancy.