" $0 "
" } Although `$n` refers to the n-th records in the line (according to a delimiter, like in a csv), the special `$0` refers to the whole line. -In this case, for each line starting with `#`, awk will print (to the standard output), `<h1> [content of the line] </h1>`. +In this case, for each line starting with `#`, awk will print (to the standard output), `[content of the line]
`. This is the beginning to parse headers in markdown. However, by trying this, we immediatly see that `#` is part of the whole line, hence it also appear in the html whereas it sould not. AWK has a way to prevent this, as it is a complete scripting language, with built-in functions, that enable further manipulations. `substr` acts as its name indicates, it return a substring of its argument. /^#/ { - print "<h1>" substr($0, 3) "</h1>" + print "" substr($0, 3) "
" } In the example above, as per the [documentation](https://www.gnu.org/software/gawk/manual/html_node/String-Functions.html#index-substr_0028_0029-function) @@ -46,11 +46,11 @@ and allows the script to dynamically determine which depth of header it parses. /^#+ / { match($0, /#+ /); n = RLENGTH; - print "<h" n-1 ">" substr($0, n + 1) "</h" n-1 ">" + print "- \n
- " substr($0, 3) " " } } @@ -118,11 +118,11 @@ it does not start with a specific caracter. That is, to match it, we match every I have no idea if this is the best solution, but so far it proved to work: # Matching a simple paragraph - !/^(#|\*|-|\+|>|`|$|\t| )/ { + !/^(#|*|-|+|>|`|$|\t| )/ { env = last() if (env == "none") { # If no block, print a paragraph - print "<p>" $0 "</p>" + print "
- markdown testing suite -
- awk for static site generation +
simpet

- awk static blog generator
  created at 2025-12-03 17:52:35
  updated at 2025-12-03 17:52:35
- awk to parse markdown
  created at 2025-12-03 17:49:06
  updated at 2025-12-03 17:49:06
- markdown testing suite
  created at 2024-12-09 14:51:41
  updated at 2025-02-03 14:05:14
simpet
Created at: 2025-12-03 17:52:35
Updated at: 2025-12-03 17:52:35

Bob, a static blog generator

The blog engine

Starting from my markdown AWK parser, which was literally written to achieve this blog engine, I've added an extra layer to turn it into a static blog generator.
Of course the parser is only one of the several components required for a blog generator, but I shall start from the beginning.
Initially I wanted to blog for myself, and as described here, it was mostly to talk about tech.
The desire to make everything from scratch and reinvent the wheel is very strong, but we'll see how this evolves in the future.

Now that I have my markdown to HTML converter, I don't lack much to turn it into bob, my blog generator.
The boilerplate
After thinking about it, I did want to rely on git to store my drafts and posts, and have a CI listening to my blog repository that would do all the publishing work on the actual webserver. Hence the need for a self-hosted git instance and CI (reinventing the wheel, I said).

Maybe I shall post about gitea and drone CI later on.

For this to happen, bob shall be a simple CLI, and screw it, a docker image as well.
I also wanted to only handle the markdown files, and let the html build itself.

I came up with a very simple folder architecture :
- a css folder containing... css files
- a drafts folder containing... drafts written in markdown. These shall not be published yet.
- a drafts/published subfolder, where all the published posts shall be, still in the markdown format
- a posts folder containing the actual HTML files generated from the posts in drafts/published
The idea is as simple as it gets : I write my drafts in the folder of the same name, when I want to publish them, I simply move them into the published subfolder and bob and the CI handle the rest.
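In practice, publishing boils down to something like this (a hypothetical session; the file name is made up, and publish_all is the shell function shown later in this post):

$ mv ./drafts/my-first-post.md ./drafts/published/
$ publish_all    # regenerates ./posts/*.html and index.html, normally triggered by the CI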
But the markdown converter does not create a full html page, so here comes the need for boilerplating :
I made an index.html template, for the home page, and a post.html one, for the actual articles.
Once again this is very simple : the post page template's body looks like this :
<body>
    <h1 class='title'><a href="../index.html">simpet</a></h1>
    <article>
    {{article}}
    <footer>
        <div></div>
    </footer>
    </article>
</body>
and I use awk to replace {{article}} with the actual content of the posts, like so :
publish_one()
{
    # Storing the path of the post/article to publish
    # The path is supposed to have this format "./drafts/published/<article>.*"
    article_path=$1

    # From the relative path, only retrieving the name of the article (without file extension)
    article_name=$(echo $article_path | cut -d '/' -f 4 | cut -d '.' -f 1)

    # Convert the markdown draft into an html article and store it locally
    post=$(awk -f ${BOB_LIB}/markdown.awk ./$article_path)

    # Retrieving the html article template
    template="${BOB_LIB}/template/post.html"

    # Escaping the & for the next step, to not confuse awk
    escaped_post=$(echo "$post" | sed 's/&/\\&/g')

    # In the template, replacing the string {{article}} by the actual content parsed above
    awk -v content="$escaped_post" '{gsub(/{{article}}/, content); print}' "$template" > "./posts/$article_name.html"
}
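As a usage sketch (assuming BOB_LIB points to bob's lib folder; the install location and article name here are made up):

$ BOB_LIB=/usr/local/lib/bob
$ publish_one ./drafts/published/my-first-post.md
# -> writes ./posts/my-first-post.html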
The home page template is similar :

<body>
    <h1 class='title'>simpet</h1>
    {{articles}}
</body>
and updated this way :

update_index()
{
    # Listing all posts and making an html list (with their links) out of them
    posts=$(ls -t ./posts | awk '
    BEGIN {
        print "<ul>"
    }
    {
        ref=$0
        gsub(".html","",ref)
        gsub(/[_-]/, " ", ref)
        print "<li><a href=\"./posts/" $0 "\">" ref "</a></li>"
    }
    END {
        print "</ul>"
    }')
    # Retrieving the template for the index.html
    template="${BOB_LIB}/template/index.html"
    # Replacing {{articles}} in the template with the actual list of articles from above
    awk -v content="$posts" '{gsub(/{{articles}}/, content); print}' "$template" > "./index.html"
}
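For instance, with a single hypothetical post in ./posts, the gsub calls turn the file name back into a readable title:

$ ls ./posts
awk-to-parse-markdown.html
$ update_index
# index.html now contains, in place of {{articles}}:
# <ul>
# <li><a href="./posts/awk-to-parse-markdown.html">awk to parse markdown</a></li>
# </ul>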
Whenever a new article is added to or removed from the drafts/published folder, update_index() will adjust the home page, because it is called by the following function :
publish_all()
{
    # List all drafts to be published
    published=$(ls -1 ./drafts/published)
    # Turning it into an array
    published_array=($published)

    # Remove all html articles, in case a previously published one was removed
    rm ./posts/*.html

    # Publish them one by one (ie turning md into html)
    for file in "${published_array[@]}"; do
        publish_one ./drafts/published/$file
    done
    # Updating the index.html, as new articles are supposedly present and some may be removed
    update_index
}
which basically only reads the ready-to-be-published posts, turns them into html files using the template, and then updates the index.html.
That's it !

To sum up

I've made a very simple, not very customisable static blog generator, mostly using awk. It clearly is not optimized, as it regenerates all the articles every time, but awk is quite efficient, and for a few posts, I don't think it really matters.
The real benefit is that I only handle markdown files, the CI and bob do the rest...
Also, a static site is blazing fast to load in the browser, and since I do not use images (yet) nor javascript, I get a very very fast blog.

To be continued...
simpet

Created at: 2025-12-03 17:49:06
Updated at: 2025-12-03 17:49:06

Markdown to HTML using AWK

When I decided to start blogging, it was mostly for me to learn and remember all the tech things I learnt throughout time.
I also want to explore a wide diversity of technologies, not focus on a particular one.

Hence to start blogging, I obviously needed a static site generator.
Many of them exist already, like Hugo for example; however, rewriting one from scratch is typically the kind of exercise I want to throw myself into.
The advantage of a static site is clearly its loading speed : a simple html file, combined with a small polished css, and a whole new blog is born.
Anyway, writing this static site generator from scratch is also the perfect excuse to explore a not so widely known technology to manipulate text files.
Introduction to AWK

AWK, from the initials of its creators, is an old and powerful text file manipulation tool. Syntactically close to C, it is a scripting language to manipulate text entries.
Its wikipedia page sums up its story nicely.
I thought it was clever to use it for a site generator, to parse markdown files and generate html ones.
However, according to this listing of static site generator programs, another one has had the same idea.
Hence, the following, as well as my code, is heavily inspired by Zodiac (even though the repo has not been touched for 8 years).
Parsing markdown

Following the official syntax is a good start for a parser.
AWK works as follows : it takes an optional regex and executes some code between brackets, as a function, on each line of the text input.
For example :

/^#/ {
    print "<h1>" $0 "</h1>"
}
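Trying it on a single line already shows the behavior described below (any POSIX awk should do):

$ echo '# Hello' | awk '/^#/ { print "<h1>" $0 "</h1>" }'
<h1># Hello</h1>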
Although $n refers to the n-th field in the line (split according to a delimiter, like in a csv), the special $0 refers to the whole line.
In this case, for each line starting with #, awk will print (to the standard output) <h1> [content of the line] </h1>.
This is the beginning of parsing headers in markdown.
However, by trying this, we immediately see that # is part of the whole line, hence it also appears in the html whereas it should not.
AWK has a way to prevent this, as it is a complete scripting language, with built-in functions that enable further manipulations. substr acts as its name indicates : it returns a substring of its argument.
/^#/ {
    print "<h1>" substr($0, 3) "</h1>"
}
In the example above, as per the documentation (https://www.gnu.org/software/gawk/manual/html_node/String-Functions.html),
it returns the substring of $0 starting at 3 (1 being # and 2 the whitespace following it) to the end of the line.
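A one-off check in a BEGIN block confirms this:

$ awk 'BEGIN { print substr("# Hello", 3) }'
Hello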
Now this is better, but we can generalize it to all headers. Another function, match, returns the number of characters matched by a regex,
and allows the script to dynamically determine which depth of header it parses. This length is stored in the global variable RLENGTH :

/^#+ / {
    match($0, /#+ /);
    n = RLENGTH;
    print "<h" n-1 ">" substr($0, n + 1) "</h" n-1 ">"
}
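For example, a level-2 header gives RLENGTH = 3 ("## " is three characters), hence the <h2> tags:

$ echo '## A subtitle' | awk '/^#+ / { match($0, /#+ /); n = RLENGTH; print "<h" n-1 ">" substr($0, n + 1) "</h" n-1 ">" }'
<h2>A subtitle</h2>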
Reproducing this technique to parse the rest proves to be difficult : lists, for example, are not contained in a single line, so
how do we know when to close them with </ul> or </ol> ?

Introducing a LIFO stack

Since, according to the markdown syntax, it is possible to have nested blocks, such as headers and lists within blockquotes, or lists within lists, I came up with the simple idea of tracking the current environment in a stack in AWK.
It turned out to be easy : I only needed a pointer to track the size of the lifo, a function to push an element, and another one to pop one out :
BEGIN {
    env = "none"
    stack_pointer = 0
    push(env)
}

# Function to push a value onto the stack
function push(value) {
    stack_pointer++
    stack[stack_pointer] = value
}

# Function to pop a value from the stack (LIFO)
function pop() {
    if (stack_pointer > 0) {
        value = stack[stack_pointer]
        delete stack[stack_pointer]
        stack_pointer--
        return value
    } else {
        return "empty"
    }
}
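A condensed one-liner version of these functions shows the LIFO order (without the delete, just for illustration):

$ awk 'function push(v) { stack[++stack_pointer] = v }
       function pop()   { return stack_pointer > 0 ? stack[stack_pointer--] : "empty" }
       BEGIN { push("blockquote"); push("ul"); print pop(); print pop(); print pop() }'
ul
blockquote
empty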
The stack does not have to be strictly declared. The values inside the LIFO correspond to the current markdown environment.
This is a clever trick, because when I need to close an html tag, I use the popped element between a </ and a > instead of having a matching table.
I also used a simple last() function to return the last pushed value in the stack without popping it out :
# Function to get the last value in the LIFO
function last() {
    return stack[stack_pointer]
}
This way, parsing lists became trivial :

# Matching unordered lists
/^[-+*] / {
    env = last()
    if (env == "ul") {
        # In an unordered list block, print a new item
        print "<li>" substr($0, 3) "</li>"
    } else {
        # Otherwise, init the unordered list block
        push("ul")
        print "<ul>\n<li>" substr($0, 3) "</li>"
    }
}
I believe the code is pretty self-explanatory, but when the last environment is not ul, then we enter this environment.
This translates as pushing it to the stack.
Otherwise, it means we are already reading a list, and we only need to add a new element to it.
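Feeding two list items to this rule illustrates both branches (assuming the rules so far are saved in markdown.awk; the closing </ul> comes from the END block shown below):

$ printf '%s\n' '- one' '- two' | awk -f markdown.awk
<ul>
<li>one</li>
<li>two</li>
</ul>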
Parsing the simple paragraph and ending the parser
I showed examples of lists and headers, but it works the same way for code blocks, blockquotes, etc. Only the simple paragraph is different :
it does not start with a specific character. That is, to match it, we match every line that does not start with a special character.
I have no idea if this is the best solution, but so far it has proved to work :

# Matching a simple paragraph
!/^(#|\*|-|\+|>|`|$|\t| )/ {
    env = last()
    if (env == "none") {
        # If no block, print a paragraph
        print "<p>" $0 "</p>"
    } else if (env == "blockquote") {
        print $0
    }
}
As BEGIN, AWK provides the possibility to execute code at the very end of the file, with the END keyword.
Naturally, we need to empty the stack and close all html tags that might have been opened during the parsing.
It is only a while loop, running until the last environment is "none", as the stack was initiated :

END {
    env = last()
    while (env != "none") {
        env = pop()
        print "</" env ">"
        env = last()
    }
}
This way we are able to simply parse markdown and turn it into an HTML file.
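Putting the pieces together, a small end-to-end run might look like this (assuming markdown.awk contains the generalized header rule, the list rule, the paragraph rule and the END block):

$ printf '%s\n' '# Title' 'Some text.' '- one' '- two' | awk -f markdown.awk
<h1>Title</h1>
<p>Some text.</p>
<ul>
<li>one</li>
<li>two</li>
</ul>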
Parsing in-line functionalities

For now we have seen a way to parse blocks, but markdown also handles strong, emphasis and links. However, these tags can appear anywhere in a line.
Hence we need to be able to parse these lines apart from the block itself : indeed, a header can contain a strong and a link.

The previously introduced but very useful function match fits this need : it literally is a regex engine, looking for a pattern in a string.
Whenever the pattern is found, two global variables are filled :
- RSTART : the index of the first character matching the group
- RLENGTH : the length of the matched group

For the following, line represents the line processed by the function, as the following while loops are actually part of a single function.
This way, match(line, /\*([^*]+)\*/) matches a string (that does not contain a *) surrounded by two *, corresponding to an emphasis text.
The * are escaped as they are special characters, and the group is delimited by the parentheses.
To match several instances of emphasis text within a line, a simple while will do the trick.
We now only have to insert the html tags <em> at the right place around the matched text, and we are good to go.
We can save the global variables RSTART and RLENGTH for further use, in case they were to be changed.
Using them, we also can extract the matched substrings and reconstruct the actual html string :

while (match(line, /\*([^*]+)\*/)) {
    start = RSTART
    end = RSTART + RLENGTH - 1
    # Build the result: before match, <em>, content, </em>, after match
    line = substr(line, 1, start-1) "<em>" substr(line, start+1, RLENGTH-2) "</em>" substr(line, end+1)
}

The while loop enables us to repeat this process as many times as this pattern is encountered within the line.

It is possible to apply the match function on this matched string, and extract the link text and the url.
As the link text and the url are stored, using the variables start and end, it is easy to reconstruct the html line :

line = substr(line, 1, start-1) "<a href=\"" matched_url "\">" matched_link "</a>" substr(line, end+1)

The inline parsing function is now complete; all we have to do is apply it systematically on the text within html tags, and this finishes the markdown parser.
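To check the emphasis loop in isolation, it can be wrapped in a standalone main rule:

$ echo 'this is *very* important' | awk '{
    line = $0
    while (match(line, /\*([^*]+)\*/)) {
        start = RSTART
        end = RSTART + RLENGTH - 1
        line = substr(line, 1, start-1) "<em>" substr(line, start+1, RLENGTH-2) "</em>" substr(line, end+1)
    }
    print line
}'
this is <em>very</em> important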