completing draft

Simon Petit 2024-12-06 09:23:22 +01:00
parent 68b0b1ae3f
commit 92bff32f9a
2 changed files with 104 additions and 7 deletions


@ -27,7 +27,7 @@ I know less about CI/CD, and narrowed my short list to two of them:
In order to move on with my project, I tried not to overthink it and went for Drone, even though I feel Buildbot would maybe be a more complete solution.
One aspect of Drone that pleased me was the possibility to run everything with docker images, and its documented integration with Gitea.
Let me clarify that:
- Indeed, drone interfaces easily with gitea as per its [documentation](https://docs.drone.io/server/provider/gitea/) (and not so much with gitlab, which comforted my initial choice)
- The other point is that it can run its pipelines entirely within docker containers, as seen [here](https://docs.drone.io/quickstart/docker/), that is, each step of the pipeline is the execution of a container. As I wanted to improve my docker skills as well, this was a nice touch. However, this means that I would have to package my static blog generator as a docker image to run it in my pipeline (see the sketch right after this list).
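To give a rough idea, here is a minimal sketch of what such a `.drone.yml` pipeline could look like; the image name and the build command are placeholders for the packaged generator, not the actual configuration of this blog:

    kind: pipeline
    type: docker
    name: publish

    steps:
      - name: build
        # placeholder: the static generator packaged as a docker image
        image: registry.example.org/blog-generator:latest
        commands:
          # hypothetical build command of the generator
          - generate -o ./public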
The web UI is also very nice and intuitive, and it uses the Gitea SSO for signing in.
@ -45,4 +45,103 @@ Hence here are all the containers that must be up and running at the end :
- Drone
- Drone runner (indeed, this container ACTUALLY runs the pipeline, the Drone one only acts as a scheduler)
For clarity, let us create three folders on the server, each of them containing the `docker-compose.yml` that brings up its services:
- gitea: it holds the gitea container as well as the postgres one (as it is only used by gitea here)
- nginx: as the proxy, it is kept separate from the others
- drone: of course, the last piece of the puzzle, containing the drone server and runner
All of them need to be on the same docker network to communicate with each other.
### Creating the network
As said above, all these running containers need to be on the same docker network so that they can communicate.
The first thing to do is then to create this network, with this simple command (mine is simply called gitea, as everything revolves around it):
    docker network create gitea
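The resulting network can be checked, if needed, with:

    docker network inspect gitea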
All the default parameters for this network are quite fine here.
Hence, all the `docker-compose.yml` files shall start with:
    networks:
      gitea:
        external: true
This indicates the use of the previously created network, declared as external since it was created outside of the compose files.
### The proxy
To make this project a serious one, we need to have https enabled. For this, there is a very convenient initiative: [letsencrypt](https://letsencrypt.org/fr/), and the very nice [certbot](https://certbot.eff.org/) that acts as its CLI.
Since we are going all docker, we shall use the official [nginx](https://hub.docker.com/_/nginx) and [certbot](https://hub.docker.com/r/certbot/certbot) images.
The point of doing this in a separate folder is to organize the mounted volumes: indeed, between the nginx logs, the configurations and certbot, we need to know for sure where to find all those files.
Only `/var/www/html` will be mapped as is to the container, that is `/var/www/html:/var/www/html`. I find it easier to follow the default path for nginx even on the host machine, as if nginx was not running on docker.
In the `nginx` folder, let us make two subfolders: `certbot` and `nginx`, self-explanatory.
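For instance, the subfolders that the compose file below will mount can be created with:

    mkdir -p $HOME/nginx/nginx/conf $HOME/nginx/nginx/logs
    mkdir -p $HOME/nginx/certbot/www $HOME/nginx/certbot/conf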
#### Configuring the nginx web server
We consider being in the folder `$HOME/nginx`, in which there are two subfolders: `nginx` and `certbot`.
I will not go for a nginx tutorial as the official [documentation](http://nginx.org/en/docs/beginners_guide.html) is quite exhaustive.
However, here is the part of the `docker-compose.yml` that concerns nginx:
    services:
      webserver:
        image: nginx:latest
        ports:
          - 80:80
          - 443:443
        restart: always
        volumes:
          - ./nginx/conf:/etc/nginx/conf.d
          - ./certbot/www:/var/www/certbot/
          - ./certbot/conf:/etc/nginx/ssl/:ro
          - ./nginx/logs:/var/log/nginx/
          - /var/www/html:/var/www/html/
        networks:
          - gitea
It obviously exposes ports 80 (for redirection) and 443. It always restarts, as we do not want any downtime.
Concerning the volumes: the configuration is in `./nginx/conf`, this way we can update it anytime.
It will also need access to the certbot config.
As said above, we also want access to the logs, as I am preparing a web analytics software that, instead of using cookies and other intrusive ways, only uses the nginx logs.
Of course, it is in the gitea network. As containers can belong to several networks, if in the future I want to add another web server, I will add this container to the second network.
Notice also the `/var/www/certbot` path within the container: it is useful for the acme challenge. We will talk about it later.
#### Adding certbot to the loop
We will also use `docker-compose.yml` to configure certbot, even though it will not be a permanently running container, but rather a short-lived one.
      certbot:
        image: certbot/certbot:latest
        networks:
          - gitea
        volumes:
          - ./certbot/www:/var/www/certbot/:rw
          - ./certbot/conf:/etc/letsencrypt/:rw
It also belongs to the gitea network, to communicate with nginx, even though this is probably not the best idea: a dedicated network would have been more relevant.
Two volumes are also mounted. The `certbot/conf` one will contain the actual certificates that we are about to generate.
#### The initial nginx config
For certbot to generate the certificate, we need nginx to have this minimal config:
    server {
        listen 80;
        listen [::]:80;
        server_name example.org www.example.org;
        server_tokens off;
        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }
        location / {
            return 301 https://example.org$request_uri;
        }
    }
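With this config in place, one way to obtain the certificate is to start the proxy and run the short-lived certbot container once (the domain is of course a placeholder, and depending on the installation the command may be `docker-compose` instead of `docker compose`):

    # from the $HOME/nginx folder
    docker compose up -d webserver
    docker compose run --rm certbot certonly --webroot --webroot-path /var/www/certbot/ -d example.org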


@ -142,9 +142,6 @@ It only is a while loop, until the last environement is "none", as it way initia
}
This way we are able to simply parse markdown and turn it into an HTML file.
Of course, I am aware that it lacks emphasis, strong and code within a line of text.
I did implement it, however; maybe it will be explained in another edit of this post.
Nonetheless the code can still be consulted on [github](https://github.com/SiwonP/bob).
## Parsing inline functionalities
@ -159,8 +156,8 @@ Whenever the pattern is found, two global variables are filled :
For the following, `line` represents the line processed by the function, as the following `while` loops are actually part of a single function.
This way `match(line, /\*([^*]+)\*/)` matches a string surrounded by two `*`, corresponding to emphasized text.
The `*` are escaped as they are special characters, and the *group* is inside the parentheses.
To match several instances of emphasized text within a line, a simple `while` will do the trick.
We now only have to insert the html tag `<em>` at the right place around the matched text, and we are good to go.
We can save the global variables `RSTART` and `RLENGTH` for further use, in case they were to be changed. Using them, we can also extract the
matched substrings and reconstruct the actual html string:
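As a sketch of the technique (not the exact code of the repository), the emphasis loop could look like this:

    while (match(line, /\*([^*]+)\*/)) {
        start = RSTART                            # saved before anything else touches them
        len   = RLENGTH
        inner = substr(line, start + 1, len - 2)  # the text between the two *
        line  = substr(line, 1, start - 1) "<em>" inner "</em>" substr(line, start + len)
    }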
@ -176,7 +173,7 @@ matched substrings and reconstruct the actual html string :
We can now repeat the pattern for all inline functionalities, e.g. strong and code.
The case of urls goes a bit deeper, as we need to match two groups: the actual text and the url itself.
No real issue here: the naïve way is to match the whole, then look for both the link text and the url within the matched whole.
This way `match(line, /\[([^\]]+)\]\([^\)]+\)/)` matches a text between `[]` followed by a text between `()`: the markdown representation of links.
As above, we store the `start` and `end` and also the whole match:
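Here again is a sketch of the idea (not the exact code of the post): the two parts are extracted from the saved whole match, since the nested `match()` calls overwrite `RSTART` and `RLENGTH`:

    while (match(line, /\[([^\]]+)\]\([^\)]+\)/)) {
        start = RSTART
        len   = RLENGTH
        whole = substr(line, start, len)
        # extract the link text between [] and the url between ()
        match(whole, /\[[^\]]+\]/)
        text = substr(whole, RSTART + 1, RLENGTH - 2)
        match(whole, /\([^\)]+\)/)
        url  = substr(whole, RSTART + 1, RLENGTH - 2)
        line = substr(line, 1, start - 1) "<a href=\"" url "\">" text "</a>" substr(line, start + len)
    }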
@ -204,3 +201,4 @@ The inline parsing function is now complete, all we have to do it apply is syste
This, of course, is the first brick of a static site generator, maybe the most complex one.
We shall see up next how to orchestrate this parser to make it an actual site generator.
The code is available in the [repo](https://git.simonpetit.top/simonpetit/top).