Compare commits

...

102 commits

Author SHA1 Message Date
wizzdom
7b769cebfe
wiki: use utf8 encoding, improve db performance, more backups (#100)
* wiki: use utf8 encoding, improve db performance, more backups

* add medik skin colour, logo
2025-03-01 00:51:17 +00:00
Ayden
07f1f032b7
Esports: update discord bot job and add minecraft server (#101)
* socs: update esports discord bot

* esports: add minecraft server job
2025-02-28 19:58:56 +00:00
wizzdom
44ac151512
add uptime kuma (#99)
* add uptime kuma

* uptime-kuma: move to monitoring/
2025-02-28 14:34:00 +00:00
wizzdom
12278b1b44
bastion-vm-backup: remove unreliable backups over scp (#98) 2025-02-28 03:03:06 +00:00
wizzdom
737dd00e06
hedgedoc: bump image version 2025-02-28 01:01:16 +00:00
wizzdom
cfaf7a4309
mediawiki: bump db RAM 2025-02-28 00:58:55 +00:00
wizzdom
b58c812a3e
Use separate DB for all services (#95)
* migrate vaultwarden to separate db

* plausible: add separate db, move click mount

* privatebin: separate db, cleanup

* add privatebin backup job

* remove postgres job
2025-02-09 19:54:17 +00:00
wizzdom
fc337777cb
add C&S room bookings job (#93) 2025-02-06 12:47:36 +00:00
wizzdom
24911a2907
add redbrick subdomain for style (#94) 2025-02-06 12:44:47 +00:00
wizzdom
8819180c25
add uri-meetups job (#92) 2025-02-05 18:18:04 +00:00
wizzdom
ad4cfbbaf6
bump version, players, view distance (#89)
we have the RAM for it anyways ;)
2025-01-13 18:42:53 +00:00
wizzdom
d0b3c14a85
atlas: add www & www redirects (#88)
make more old links work again ;)
2025-01-04 01:03:16 +00:00
Gavin Holahan
2263558f4a
Moved The Look Online to aperature (#87)
Co-authored-by: Wizzdom <wizzdom@redbrick.dcu.ie>
2025-01-03 00:38:23 +00:00
wizzdom
14e0b7eae3
traefik: add dynamic redirects via consul kv (#85) 2024-12-29 17:59:30 +00:00
wizzdom
e951e1ba17
thecollegeview: much caches, permissions (#84)
- add `redis` object cache
- add `nginx` static page cache with WP Super Cache
- add `nginx` to `www-data` group to avoid permissions conflicts
- increase PHP `max_children`, `upload_max_filesize`, `post_max_size`
  configs
2024-12-15 18:33:44 +00:00
wizzdom
c169d75001
traefik: add ssh, voice, tracing, access log (#83)
mumble voice configs taken from here: https://github.com/DistroByte/nomad/blob/master/jobs/traefik.hcl

Co-authored-by: DistroByte <james@distrobyte.io>
2024-12-15 18:12:06 +00:00
wizzdom
b22f9d8b75
minecraft/vanilla: increase memory (#82) 2024-12-12 01:50:19 +00:00
wizzdom
9f16d94cbb
add thecollegeview.ie (#80)
* add thecollegeview.ie

* thecollegeview: migrate to phpfpm + nginx

* thecollegeview: pass rest api to phpfpm

* mps-site: remove hacky workaround for tcv

* thecollegeview: pass all dirs to phpfpm

* backup the correct db
2024-12-11 14:35:46 +00:00
wizzdom
37e6facab6
amikon: update to support node docker image (#81) 2024-12-10 23:27:40 +00:00
Ayden
f3e5ae5e2b
update db job to be tied to postgres alpine 16 (#78)
Co-authored-by: wizzdom <dom@wizzdom.xyz>
2024-12-02 22:55:48 +00:00
wizzdom
a464a915f0
hedgedoc: add mount for banner, set default permission (#76) 2024-11-24 09:31:42 +00:00
wizzdom
d38f434a13
postgres-backup: delete old backups from the correct location (#77) 2024-11-24 09:31:28 +00:00
Ayden
6ae4ea0c8f
update env variables (#75) 2024-11-18 15:30:13 +00:00
Ayden
7ae45f6cd9
update env vars for solar racing website (#74) 2024-11-18 00:10:11 +00:00
Ayden
2b1a8e68dc
add esports bot job (#72)
Co-authored-by: wizzdom <wizzdom@redbrick.dcu.ie>
2024-11-01 23:10:33 +00:00
wizzdom
198d269d37
add github actions runner for CI/CD deployments with Nomad (#71) 2024-10-24 16:01:33 +01:00
wizzdom
7dc24a13bd
add paperless for document tracking and indexing (#70) 2024-10-22 16:13:29 +01:00
wizzdom
4b64070d7c games/mc/vanilla: update plugins, increase RAM 2024-10-22 02:25:44 +01:00
Ayden
55926dd4e0
add dcusr listmonk job (#69)
Co-authored-by: wizzdom <wizzdom@redbrick.dcu.ie>
2024-10-21 13:51:03 +01:00
wizzdom
d251d0e154
ansible/nomad: enable bridge hairpin mode (#68) 2024-10-17 18:16:35 +01:00
wizzdom
c993ceb6ed
games/minecraft: add plugins, bluemap config (#64) 2024-10-15 13:35:08 +01:00
wizzdom
29d57b8081
hedgedoc: update to latest, move db, fix backups (#66) 2024-10-15 12:52:46 +01:00
nova
33b05a1d3e
Add cns support to nova-timetable job (#63) 2024-10-14 18:58:09 +01:00
wizzdom
fe6f66754d
Minecraft refactor: dynamic routes, separate jobs (#59)
refactors minecraft jobs allowing for more flexibility in the future (e.g. adding an rcon web interface):

- split `minecraft.hcl` into multiple job files
- update ports used
- add consul service attached to mc and rcon ports
- remove `gate-proxy.hcl` static configuration - it is now consul aware via a consul template
- add fallback route on `gate-proxy` with message to inform users
- remove unused jobs
- move all minecraft jobs to use template blocks instead of env blocks for envvars - this lets us define the `RCON_PASSWORD` and store it in consul
2024-10-14 09:12:43 +01:00
Ayden
e67953631c
add nova discord bot task (#61)
Co-authored-by: nova <110734810+novanai@users.noreply.github.com>
Co-authored-by: wizzdom <dom@wizzdom.xyz>
2024-10-01 14:23:17 +01:00
wizzdom
6ecd7df30d
socs: add dcumps.ie (#60) 2024-09-14 13:14:09 +01:00
Jed Hazaymeh
61c624fc89
Add minecraft server job for magma (#52)
Co-authored-by: wizzdom <dom@wizzdom.xyz>
2024-09-13 15:52:49 +01:00
wizzdom
8245a1226b
ingress/traefik: mail ports, persist acme (#58)
- add ports required for upcoming mailserver
- make `acme.json` persistent for sanity
2024-09-13 15:51:37 +01:00
wizzdom
83e51c361c
wiki: refactor and cleanup (#57)
- tidy up traefik middlewares
- mount db to `/storage`, remove constraint on single host
- separate `LocalSettings.php` into its own file for sanity
- move wiki-related files to `wiki/`
- use traditional wiki skin by default
2024-09-13 15:51:02 +01:00
wizzdom
2a1f84163c
cleanup: traefik entrypoints and format (#56) 2024-09-11 23:50:08 +01:00
Ayden
808451749c
Pretix: add pretix job for dcu solar racing (#55)
Co-authored-by: wizzdom <wizzdom@redbrick.dcu.ie>
Co-authored-by: wizzdom <dom@wizzdom.xyz>
2024-09-11 23:26:54 +01:00
wizzdom
a4094cff61
user-vms: refactor, add bastion backup and serve (#53) 2024-09-04 02:19:28 +01:00
wizzdom
58d0f8f803
add minio (#48) 2024-08-14 01:06:01 +01:00
nova
5fff486c9c
Update nova timetable-sync job (#51)
Co-authored-by: wizzdom <dom@wizzdom.xyz>
2024-08-09 17:53:29 +01:00
wizzdom
3f0ae5d23b
add dcusr outline, add socs dir (#50) 2024-08-08 21:38:11 +01:00
wizzdom
c06d80fc63
add wiki.redbrick.dcu.ie (#49)
Adds `mediawiki.hcl`
- migrated to latest mediawiki LTS
- using php-fpm and nginx

Adds `mediawiki-backup.hcl`
- backup of mariadb
- full xml dump of mediawiki

Co-authored-by: Ayden <info@aydenjahola.com>
2024-08-06 00:34:31 +01:00
wizzdom
804fc4c5e5
add members-mysql (#46) 2024-07-26 21:58:11 +01:00
wizzdom
3c1b2894a0
vault: enable SMTP (#47) 2024-07-26 21:57:56 +01:00
wizzdom
48a729ef40
plausible: update and add variables (#45) 2024-07-26 21:10:14 +01:00
wizzdom
3995bc5659
wetty: set base path and add all the domains (#44) 2024-07-11 01:07:09 +01:00
wizzdom
51aa2f3e78
jobs: update traefik rule for hosts following latest traefik standard (#43) 2024-07-09 02:06:53 +01:00
wizzdom
3c36b5a605 give brickbot a username 2024-05-29 23:30:23 +01:00
wizzdom
9a0b740dcd
update gitignore to prevent oopsie (#42) 2024-04-30 19:27:41 +01:00
wizzdom
69103b5d2b
update minecraft jobs (#41) 2024-04-30 19:19:38 +01:00
wizzdom
d88b5726be
refactor amikon job: auto redeploy every 6 hours (#40) 2024-04-11 17:51:14 +01:00
wizzdom
d849019a20
update ldap secret perms (#39) 2024-04-11 16:26:06 +01:00
wizzdom
0d1cc51818
Update MC conf for fugitives and regaus (#38)
Co-authored-by: hypnoant <gholahan9@gmail.com>
2024-04-11 14:49:40 +01:00
wizzdom
48bad91a31
bastion-vm: use preconfigured image (#19) 2024-04-04 13:18:12 +01:00
Ayden
31f93e0c1b
add brickbot2 job (#37)
Co-authored-by: wizzdom <wizzdom@redbrick.dcu.ie>
2024-04-03 15:21:01 +01:00
wizzdom
46e007987d
add wetty to aperture (#36) 2024-04-02 16:16:19 +01:00
wizzdom
3511ad653b
add admin api job (#35) 2024-03-31 17:22:28 +01:00
wizzdom
c00b1e9243
ansible: mount oldstorage (#34)
Co-authored-by: James Hackett <jamesthackett1@gmail.com>
2024-03-28 20:01:04 +00:00
wizzdom
b22483197f
update minecraft servers, move ingress to ingress/ (#33) 2024-03-17 01:31:17 +00:00
wizzdom
cc1b9f83cc
add midnight calendarbot job (#32) 2024-03-14 17:39:54 +00:00
wizzdom
cbcc4100cb
ayden discord bot: add mongodb 2024-03-11 15:16:15 +00:00
wizzdom
9b120392c2
atlas: new name, new domain, new redirects (#31) 2024-02-22 22:46:28 +00:00
wizzdom
3170f824e5
privatebin: add paste.redbrick (#30) 2024-02-20 23:47:44 +00:00
wizzdom
fd122c6297
add vaultwarden job (#29)
Co-authored-by: James Hackett <jamesthackett1@gmail.com>
2024-02-20 03:46:31 +00:00
wizzdom
8a47822eef
add plausible analytics (#28) 2024-02-19 04:34:17 +00:00
wizzdom
a843ccd653
pastebin: add URL shortener (#27) 2024-02-19 03:56:04 +00:00
wizzdom
c3e9e6e4f6
add shlink job (#26)
Co-authored-by: James Hackett <jamesthackett1@gmail.com>
2024-02-19 03:02:06 +00:00
wizzdom
f6fbf5f1b7
add postgres backup job (#25) 2024-02-19 03:01:22 +00:00
James Hackett
6a5d4ed0fc Include all directories in script 2024-02-19 00:25:08 +00:00
wizzdom
c224004ccf
add privatebin (#24) 2024-02-18 22:47:24 +00:00
wizzdom
8d9e835f64
add postgres job (#23)
Co-authored-by: James Hackett <jamesthackett1@gmail.com>
2024-02-18 22:21:57 +00:00
wizzdom
6585eb3f33
add hedgedoc job (#22) 2024-02-18 00:02:48 +00:00
wizzdom
f05a112264
user-vms: add config admin exam VMs (#21) 2024-02-12 18:06:58 +00:00
wizzdom
ff1b340c44
add 11ty-website (#20) 2024-02-04 13:47:12 +00:00
wizzdom
50fd5db59e add force_pull since latest tag isn't latest 2024-02-03 17:35:39 +00:00
wizzdom
5eec2db32f move dcusr to solarracing.ie 2024-02-03 17:29:24 +00:00
wizzdom
db1300b355
Merge pull request #17 from redbrick/ingress-node-pool
add ingress node pool
2024-01-27 04:44:05 +00:00
wizzdom
523a98f906 add node pool docs 2024-01-27 04:40:13 +00:00
wizzdom
4267bf26e0 minecraft: add aikar flags, rename proxy 2024-01-25 22:32:26 +00:00
wizzdom
1407f662c9 dcusr: update domains and env vars 2024-01-25 22:28:57 +00:00
wizzdom
431e086df8
Merge pull request #18 from redbrick/add-ayden-discord-bot
add ayden discord bot
2024-01-23 14:42:17 +00:00
wizzdom
ffbff35316 add ayden discord bot 2024-01-23 04:58:27 +00:00
wizzdom
0159e15643
ansible: add consul dns via systemd-resolved (#16)
* ansible: add consul dns via systemd-resolved

* consolidate apt remove jobs
2024-01-23 04:48:00 +00:00
wizzdom
f7c934da59 misc: update domains 2024-01-23 04:46:04 +00:00
wizzdom
8d91938ee2 add ingress node pool 2024-01-23 04:38:58 +00:00
wizzdom
6f6692d89a move games into games/ 2024-01-19 04:08:02 +00:00
wizzdom
709bfd1323 add gamessoc minecraft server 2024-01-19 03:54:43 +00:00
wizzdom
b455f8473e mc: update main vanilla server 2024-01-16 02:36:35 +00:00
James Hackett
4fe992adf5 Add another minecraft server 2024-01-06 00:09:10 +00:00
James Hackett
b1dbed2e91 Add vanilla server and update router 2024-01-04 16:48:49 +00:00
James Hackett
8c1f0b4778 Update fixperms script 2024-01-04 01:42:05 +00:00
James Hackett
20fe4a112e Change traefik to a service running on bastion host 2024-01-04 01:41:41 +00:00
James Hackett
e1e79362c5 Allow multiple minecraft servers on one port 2024-01-04 01:40:33 +00:00
James Hackett
1c26c4401b Create bastion host for network wide ingress 2024-01-04 01:39:28 +00:00
James Hackett
5374f720a7 Increase volume size of base image 2024-01-04 01:38:01 +00:00
wizzdom
244f6bc354 fix typo 2023-12-15 23:09:06 +00:00
wizzdom
cc23af5889
Merge pull request #15 from redbrick/solar-racing-site
add Solar Racing website
2023-12-15 22:21:10 +00:00
wizzdom
2712348c85 add Solar Racing website 2023-12-15 21:49:53 +00:00
70 changed files with 4865 additions and 364 deletions

.gitignore (vendored): 2 lines changed

@@ -1,3 +1,5 @@
ansible/hosts
ansible/group_vars/all.yml
.vscode/
**/*.ovpn
**/*.sql


@@ -51,6 +51,45 @@
        DNSSEC=false
        Domains=~consul node.consul service.consul

  - name: Configure Docker to use systemd-resolved
    become: true
    copy:
      dest: /etc/systemd/resolved.conf.d/docker.conf
      content: |
        [Resolve]
        DNSStubListener=yes
        DNSStubListenerExtra=172.17.0.1

  - name: Configure Docker to use systemd-resolved
    become: true
    copy:
      dest: /etc/docker/daemon.json
      content: |
        {
          "dns": ["172.17.0.1"]
        }

  - name: Restart docker daemon
    become: true
    systemd:
      name: docker
      enabled: yes
      state: restarted
    when: ansible_check_mode == false

  # this is to stop bind9 and pdns from conflicting with systemd-resolved
  - name: Remove bind9 and pdns
    become: true
    ansible.builtin.apt:
      name:
        - bind9
        - pdns-backend-bind
        - pdns-recursor
        - pdns-server
      state: absent
      purge: true
    when: ansible_os_family == "Debian"

  - name: Restart systemd-resolved
    become: true
    systemd:
@@ -58,16 +97,3 @@
      enabled: yes
      state: restarted
    when: ansible_check_mode == false

  - name: Remove resolv.conf symlink
    become: true
    file:
      path: /etc/resolv.conf
      state: absent

  - name: Create resolv.conf symlink
    become: true
    file:
      src: /run/systemd/resolve/stub-resolv.conf
      dest: /etc/resolv.conf
      state: link


@@ -6,7 +6,7 @@
        - nfs-common
    when: ansible_os_family == "Debian"

  - name: create mount point
  - name: create /storage mount point
    become: true
    ansible.builtin.file:
      path: /storage
@@ -14,6 +14,14 @@
      mode: "0755"
    when: ansible_os_family == "Debian"

  - name: create /oldstorage mount directory
    become: true
    ansible.builtin.file:
      path: /oldstorage
      state: directory
      mode: "0755"
    when: ansible_os_family == "Debian"

  - name: add nfs entry to fstab
    become: true
    ansible.builtin.lineinfile:
@@ -23,6 +31,7 @@
      create: yes
    with_items:
      - "10.10.0.7:/storage /storage nfs defaults 0 0"
      - "192.168.0.150:/zbackup /oldstorage nfs defaults 0 0"

  - name: mount nfs
    become: true


@@ -1,5 +1,11 @@
client {
enabled = true
# for minecraft modpack zip bombing allowance
artifact {
decompression_size_limit = "0"
decompression_file_count_limit = 12000
}
bridge_network_hairpin_mode = true
}
plugin "raw_exec" {

cluster-config/README.md (new file): 11 lines

@@ -0,0 +1,11 @@
# Nomad Cluster Configuration
This directory contains configuration for the Nomad cluster, including:
- node pools
- agent config
## Node Pools
[Node pools](https://developer.hashicorp.com/nomad/docs/concepts/node-pools) group nodes into logical sets that jobs can target, which can be used to enforce where allocations are placed.
e.g. [`ingress-pool.hcl`](./ingress-pool.hcl) defines the node pool used for ingress nodes such as the [bastion VM](https://docs.redbrick.dcu.ie/aperture/bastion-vm/). Any job that sets `node_pool = "ingress"` (such as `traefik.hcl` and `gate-proxy.hcl`) will only be placed on a node in the `ingress` pool (i.e. the bastion VM).
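A minimal sketch of a job pinned to the ingress pool (the job name and image here are placeholders, not real jobs in this repo; only the `node_pool` line is the point being illustrated):

```hcl
# Hypothetical example job - only node_pool is significant here.
job "example-ingress" {
  datacenters = ["aperture"]
  node_pool   = "ingress" # schedule only on nodes in the "ingress" pool (e.g. the bastion VM)
  type        = "service"

  group "example" {
    count = 1

    task "example" {
      driver = "docker"

      config {
        image = "traefik/whoami" # placeholder image
      }
    }
  }
}
```

Jobs that omit `node_pool` fall back to the default pool and will never land on ingress nodes.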


@@ -0,0 +1,3 @@
node_pool "ingress" {
description = "Nodes for ingress to aperture. e.g. bastion-vm"
}


@@ -1,5 +1,7 @@
#!/bin/bash
sudo chown -R root:nomad ./
sudo find . -type d -exec chmod 775 {} \;
sudo find . -type f -exec chmod 664 {} \;


@@ -0,0 +1,61 @@
job "minecraft-cjaran" {
datacenters = ["aperture"]
type = "service"
group "cjaran-mc" {
count = 1
network {
port "mc" {
to = 25565
}
port "rcon" {
to = 25575
}
}
service {
name = "cjaran-mc"
port = "mc"
}
service {
name = "cjaran-mc-rcon"
port = "rcon"
}
task "minecraft-cjaran" {
driver = "docker"
config {
image = "itzg/minecraft-server"
ports = ["mc", "rcon"]
volumes = [
"/storage/nomad/${NOMAD_TASK_NAME}:/data"
]
}
resources {
cpu = 3000 # 3000 MHz
memory = 4096 # 4GB
}
template {
data = <<EOF
EULA = "TRUE"
TYPE = "PAPER"
VERSION = "1.20.4"
USE_AIKAR_FLAGS = true
OPS = "BloThen"
MAX_PLAYERS = "10"
ENABLE_RCON = true
RCON_PASSWORD = {{ key "games/mc/cjaran-mc/rcon/password" }}
EOF
destination = "local/.env"
env = true
}
}
}
}


@@ -0,0 +1,64 @@
job "esports-minecraft" {
datacenters = ["aperture"]
type = "service"
group "esports-mc" {
count = 1
network {
port "mc" {
to = 25565
}
port "rcon" {
to = 25575
}
}
service {
name = "esports-mc"
port = "mc"
}
service {
name = "esports-mc-rcon"
port = "rcon"
}
task "esports-minecraft" {
driver = "docker"
config {
image = "itzg/minecraft-server"
ports = ["mc", "rcon"]
volumes = [
"/storage/nomad/${NOMAD_TASK_NAME}:/data"
]
}
resources {
cpu = 5000 # 5000 MHz
memory = 20480 # 20 GB
}
template {
data = <<EOF
EULA = "TRUE"
TYPE = "PAPER"
VERSION = "1.21.4"
ICON = "https://liquipedia.net/commons/images/thumb/5/53/DCU_Esports_allmode.png/37px-DCU_Esports_allmode.png"
USE_AIKAR_FLAGS = true
MAX_MEMORY = 18G
MOTD = "Powered by Redbrick"
MAX_PLAYERS = "32"
VIEW_DISTANCE = "32"
ENABLE_RCON = true
RCON_PASSWORD = {{ key "games/mc/esports-mc/rcon/password" }}
# Auto-download plugins
SPIGET_RESOURCES=83581,62325,118271,28140,102931 # RHLeafDecay, GSit, GravesX, Luckperms, NoChatReport
MODRINTH_PROJECTS=datapack:no-enderman-grief,thizzyz-tree-feller,imageframe,bmarker,datapack:players-drop-heads,viaversion,viabackwards
EOF
destination = "local/.env"
env = true
}
}
}
}


@@ -0,0 +1,58 @@
job "minecraft-fugitives" {
datacenters = ["aperture"]
type = "service"
group "fugitives-mc" {
count = 1
network {
port "mc" {
to = 25565
}
port "rcon" {
to = 25575
}
}
service {
name = "fugitives-mc"
port = "mc"
}
service {
name = "fugitives-mc-rcon"
port = "rcon"
}
task "minecraft-fugitives" {
driver = "docker"
config {
image = "itzg/minecraft-server"
ports = ["mc", "rcon"]
volumes = [
"/storage/nomad/${NOMAD_TASK_NAME}:/data"
]
}
resources {
cpu = 3000 # 3000 MHz
memory = 8192 # 8GB
}
template {
data = <<EOF
EULA = "TRUE"
TYPE = "PAPER"
USE_AIKAR_FLAGS = true
MOTD = "Fugitives"
MAX_PLAYERS = "20"
MEMORY = "6G"
ENABLE_RCON = true
RCON_PASSWORD = {{ key "games/mc/fugitives-mc/rcon/password" }}
EOF
destination = "local/.env"
env = true
}
}
}
}


@@ -0,0 +1,62 @@
job "minecraft-games" {
datacenters = ["aperture"]
type = "service"
group "games-mc" {
count = 1
network {
port "mc" {
to = 25565
}
port "rcon" {
to = 25575
}
}
service {
name = "games-mc"
port = "mc"
}
service {
name = "games-mc-rcon"
port = "rcon"
}
task "minecraft-games" {
driver = "docker"
config {
image = "itzg/minecraft-server"
ports = ["mc", "rcon"]
volumes = [
"/storage/nomad/${NOMAD_TASK_NAME}:/data"
]
}
resources {
cpu = 3000 # 3000 MHz
memory = 8192 # 8GB
}
template {
data = <<EOF
EULA = "TRUE"
TYPE = "PURPUR"
VERSION = "1.20.1"
MOTD = "DCU Games Soc Minecraft Server"
USE_AIKAR_FLAGS = true
OPS = ""
MAX_PLAYERS = "20"
ENABLE_RCON = true
RCON_PASSWORD = {{ key "games/mc/games-mc/rcon/password" }}
EOF
destination = "local/.env"
env = true
}
}
}
}


@@ -0,0 +1,70 @@
job "minecraft-magma" {
datacenters = ["aperture"]
type = "service"
group "fabric-server" {
count = 1
network {
port "mc" {
to = 25565
}
port "rcon" {
to = 25575
}
}
service {
name = "magma-mc"
port = "mc"
}
service {
name = "magma-mc-rcon"
port = "rcon"
}
service {
name = "magma-mc-voice"
port = "voice"
tags = [
"traefik.enable=true",
"traefik.tcp.routers.magma-mc-voice.rule=HostSNI(`magma-mc.rb.dcu.ie`)",
"traefik.tcp.routers.magma-mc-voice.tls.passthrough=true",
"traefik.udp.routers.magma-mc-voice.entrypoints=voice-udp",
]
}
task "minecraft-magma" {
driver = "docker"
config {
image = "itzg/minecraft-server:java17-alpine"
ports = ["mc", "rcon", "voice"]
volumes = [
"/storage/nomad/${NOMAD_TASK_NAME}:/data"
]
}
resources {
cpu = 3000 # 3GHz
memory = 10240 # 10GB
}
template {
data = <<EOF
EULA = "TRUE"
TYPE = "FABRIC"
VERSION = "1.20.4"
ICON = "https://raw.githubusercontent.com/redbrick/design-system/main/assets/logos/logo.png"
MEMORY = "8G"
USE_AIKAR_FLAGS = true
JVM_XX_OPTS = "-XX:+AlwaysPreTouch -XX:+DisableExplicitGC -XX:+ParallelRefProcEnabled -XX:+PerfDisableSharedMem -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:G1HeapRegionSize=8M -XX:G1HeapWastePercent=5 -XX:G1MaxNewSizePercent=40 -XX:G1MixedGCCountTarget=4 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1NewSizePercent=30 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:G1ReservePercent=20 -XX:InitiatingHeapOccupancyPercent=15 -XX:MaxGCPauseMillis=200 -XX:MaxTenuringThreshold=1 -XX:SurvivorRatio=32"
ENABLE_RCON = true
RCON_PASSWORD = {{ key "games/mc/magma-mc/rcon/password" }}
EOF
destination = "local/.env"
env = true
}
}
}
}


@@ -0,0 +1,78 @@
job "minecraft-vanilla" {
datacenters = ["aperture"]
type = "service"
group "vanilla-mc" {
count = 1
network {
port "mc" {
to = 25565
}
port "rcon" {
to = 25575
}
port "bluemap" {
to = 8100
}
}
service {
name = "vanilla-mc"
port = "mc"
}
service {
name = "vanilla-mc-rcon"
port = "rcon"
}
service {
name = "vanilla-mc-bluemap"
port = "bluemap"
tags = [
"traefik.enable=true",
"traefik.http.routers.vanilla-mc-bluemap.rule=Host(`vanilla-mc.rb.dcu.ie`)",
"traefik.http.routers.vanilla-mc-bluemap.entrypoints=web,websecure",
"traefik.http.routers.vanilla-mc-bluemap.tls.certresolver=lets-encrypt",
]
}
task "minecraft-vanilla" {
driver = "docker"
config {
image = "itzg/minecraft-server"
ports = ["mc", "rcon", "bluemap"]
volumes = [
"/storage/nomad/${NOMAD_TASK_NAME}:/data"
]
}
resources {
cpu = 5000 # 5000 MHz
memory = 20480 # 20 GB
}
template {
data = <<EOF
EULA = "TRUE"
TYPE = "PAPER"
VERSION = "1.21.3"
ICON = "https://docs.redbrick.dcu.ie/assets/logo.png"
USE_AIKAR_FLAGS = true
MAX_MEMORY = 18G
MOTD = "LONG LIVE THE REDBRICK"
MAX_PLAYERS = "32"
VIEW_DISTANCE = "32"
ENABLE_RCON = true
RCON_PASSWORD = {{ key "games/mc/vanilla-mc/rcon/password" }}
# Auto-download plugins
SPIGET_RESOURCES=83581,62325,118271,28140,102931 # RHLeafDecay, GSit, GravesX, Luckperms, NoChatReport
MODRINTH_PROJECTS=datapack:no-enderman-grief,thizzyz-tree-feller,imageframe,bluemap,bmarker,datapack:players-drop-heads,viaversion,viabackwards
EOF
destination = "local/.env"
env = true
}
}
}
}


@@ -0,0 +1,78 @@
job "gate-proxy" {
datacenters = ["aperture"]
node_pool = "ingress"
type = "service"
group "gate-proxy" {
count = 1
network {
port "mc" {
static = 25565
}
}
service {
port = "mc"
check {
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
task "gate-proxy" {
driver = "docker"
config {
image = "ghcr.io/minekube/gate"
ports = ["mc"]
volumes = [
"local/config.yaml:/config.yaml"
]
}
template {
data = <<EOH
# This is a simplified config where the rest of the
# settings are omitted and will be set by default.
# See config.yml for the full configuration options.
config:
bind: 0.0.0.0:{{ env "NOMAD_PORT_mc" }}
forwarding:
mode: legacy
lite:
enabled: true
routes:
# Consul template to generate routes
# matches against all consul services ending in "-mc"
# NOTE: each minecraft job must have both:
# - a name ending in "-mc"
# - a port attached to the service
{{- range services }}
{{- if .Name | regexMatch ".*-mc$" }}
{{- range service .Name }}
- host: {{ .Name }}.rb.dcu.ie
backend: {{ .Name }}.service.consul:{{ .Port }}{{ end -}}{{ end -}}{{ end }}
# Fallback route for when any service is unavailable
- host: '*'
backend: localhost:2000 # backend must exist - this is a dummy value
fallback:
motd: |
§cThis server is offline/does not exist!
§eCheck back later!
version:
name: '§cTry again later!'
protocol: -1
EOH
destination = "local/config.yaml"
}
}
}
}
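As the comments in the template above note, the consul template renders one route per Consul service whose name ends in `-mc`. For a hypothetical service `vanilla-mc` registered on port 25565, the rendered section of `config.yaml` would look roughly like this (illustrative values):

```yaml
    routes:
      - host: vanilla-mc.rb.dcu.ie
        backend: vanilla-mc.service.consul:25565
      # Fallback route for when any service is unavailable
      - host: '*'
        backend: localhost:2000
```

Any new Minecraft job that registers a `*-mc` service with a port therefore gets a `<name>.rb.dcu.ie` route without touching the proxy config.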

jobs/ingress/traefik.hcl (new file): 205 lines

@@ -0,0 +1,205 @@
job "traefik" {
datacenters = ["aperture"]
node_pool = "ingress"
type = "service"
group "traefik" {
network {
port "http" {
static = 80
}
port "https" {
static = 443
}
port "admin" {
static = 8080
}
port "ssh" {
static = 22
}
port "smtp" {
static = 25
}
port "submission" {
static = 587
}
port "submissions" {
static = 465
}
port "imap" {
static = 143
}
port "imaps" {
static = 993
}
port "pop3" {
static = 110
}
port "pop3s" {
static = 995
}
port "managesieve" {
static = 4190
}
port "voice-tcp" {
static = 4502
}
port "voice-udp" {
static = 4503
}
}
service {
name = "traefik-http"
provider = "nomad"
port = "https"
}
task "traefik" {
driver = "docker"
config {
image = "traefik"
network_mode = "host"
volumes = [
"local/traefik.toml:/etc/traefik/traefik.toml",
"/storage/nomad/traefik/acme/acme.json:/acme.json",
"/storage/nomad/traefik/access.log:/access.log",
]
}
template {
data = <<EOF
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web.http.redirections.entryPoint]
to = "websecure"
scheme = "https"
[entryPoints.websecure]
address = ":443"
[entryPoints.traefik]
address = ":8080"
[entryPoints.ssh]
address = ":22"
[entryPoints.smtp]
address = ":25"
[entryPoints.submission]
address = ":587"
[entryPoints.submissions]
address = ":465"
[entryPoints.imap]
address = ":143"
[entryPoints.imaps]
address = ":993"
[entryPoints.pop3]
address = ":110"
[entryPoints.pop3s]
address = ":995"
[entryPoints.managesieve]
address = ":4190"
[entryPoints.voice-tcp]
address = ":4502"
[entryPoints.voice-udp]
address = ":4503/udp"
[entryPoints.voice-udp.udp]
timeout = "15s" # this will help reduce random dropouts in audio https://github.com/mumble-voip/mumble/issues/3550#issuecomment-441495977
[tls.options]
[tls.options.default]
minVersion = "VersionTLS12"
cipherSuites = [
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256"
]
[api]
dashboard = true
insecure = true
# Enable Consul Catalog configuration backend.
[providers.consulCatalog]
prefix = "traefik"
exposedByDefault = false
[providers.consulCatalog.endpoint]
address = "127.0.0.1:8500"
scheme = "http"
# Enable the file provider for dynamic configuration.
[providers.file]
filename = "/local/dynamic.toml"
#[providers.nomad]
# [providers.nomad.endpoint]
# address = "127.0.0.1:4646"
# scheme = "http"
[certificatesResolvers.lets-encrypt.acme]
email = "elected-admins@redbrick.dcu.ie"
storage = "acme.json"
[certificatesResolvers.lets-encrypt.acme.tlsChallenge]
[tracing]
[accessLog]
filePath = "/access.log"
EOF
destination = "/local/traefik.toml"
}
template {
data = <<EOF
[http]
[http.middlewares]
# handle redirects for short links
# NOTE: this is a consul template, add entries via consul kv
# create the middlewares with replacements for each redirect
{{ range $pair := tree "redirect/redbrick" }}
[http.middlewares.redirect-{{ trimPrefix "redirect/redbrick/" $pair.Key }}.redirectRegex]
regex = ".*" # match everything - hosts are handled by the router
replacement = "{{ $pair.Value }}"
permanent = true
{{- end }}
[http.routers]
# create routers with middlewares for each redirect
{{ range $pair := tree "redirect/redbrick" }}
[http.routers.{{ trimPrefix "redirect/redbrick/" $pair.Key }}-redirect]
rule = "Host(`{{ trimPrefix "redirect/redbrick/" $pair.Key }}.redbrick.dcu.ie`)"
entryPoints = ["web", "websecure"]
middlewares = ["redirect-{{ trimPrefix "redirect/redbrick/" $pair.Key }}"]
service = "dummy-service" # all routers need a service, this isn't used
[http.routers.{{ trimPrefix "redirect/redbrick/" $pair.Key }}-redirect.tls]
{{- end }}
[http.services]
[http.services.dummy-service.loadBalancer]
[[http.services.dummy-service.loadBalancer.servers]]
url = "http://127.0.0.1" # Dummy service - not used
EOF
destination = "local/dynamic.toml"
change_mode = "noop"
}
}
}
}
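The dynamic config template above generates a middleware/router pair per Consul KV entry under `redirect/redbrick/`. For a hypothetical key `redirect/redbrick/pastebin` with value `https://paste.redbrick.dcu.ie`, it would render roughly:

```toml
[http.middlewares.redirect-pastebin.redirectRegex]
  regex = ".*"
  replacement = "https://paste.redbrick.dcu.ie"
  permanent = true

[http.routers.pastebin-redirect]
  rule = "Host(`pastebin.redbrick.dcu.ie`)"
  entryPoints = ["web", "websecure"]
  middlewares = ["redirect-pastebin"]
  service = "dummy-service"
  [http.routers.pastebin-redirect.tls]
```

So adding a short link is a single `consul kv put` with no Traefik redeploy; `change_mode = "noop"` means Traefik picks up the re-rendered file without the task restarting.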


@@ -1,87 +0,0 @@
job "minecraft" {
datacenters = ["aperture"]
type = "service"
group "vanilla" {
constraint {
attribute = "${attr.unique.hostname}"
value = "glados"
}
count = 1
network {
port "mc-vanilla-port" {
static = 25565
to = 25565
}
port "mc-vanilla-rcon" {
to = 25575
}
#mode = "bridge"
}
service {
name = "minecraft-vanilla"
}
task "minecraft-server" {
driver = "docker"
config {
image = "itzg/minecraft-server"
ports = ["mc-vanilla-port","mc-vanilla-rcon"]
volumes = [
"/storage/nomad/${NOMAD_TASK_NAME}:/data/world"
]
}
resources {
cpu = 3000 # 3000 MHz
memory = 6144 # 6 GB
}
env {
EULA = "TRUE"
MEMORY = "6G"
}
}
}
group "create-astral" {
count = 1
network {
port "mc-astral-port" {
static = 25566
to = 25565
}
port "mc-astral-rcon" {
to = 25575
}
mode = "bridge"
}
service {
name = "minecraft-astral"
}
task "minecraft-astral" {
driver = "docker"
config {
image = "ghcr.io/maxi0604/create-astral:main"
ports = ["mc-astral-port","mc-astral-rcon"]
volumes = [
"/storage/nomad/${NOMAD_TASK_NAME}:/data/world"
]
}
resources {
cpu = 3000 # 3000 MHz
memory = 8168 # 8 GB
}
env {
EULA = "TRUE"
MEMORY = "6G"
}
}
}
}


@@ -0,0 +1,44 @@
job "uptime-kuma" {
datacenters = ["aperture"]
type = "service"
group "web" {
count = 1
network {
port "http" {
to = 3001
}
}
service {
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.uptime-kuma.rule=Host(`status.redbrick.dcu.ie`)",
"traefik.http.routers.uptime-kuma.entrypoints=web,websecure",
"traefik.http.routers.uptime-kuma.tls.certresolver=lets-encrypt",
]
}
task "web" {
driver = "docker"
config {
image = "louislam/uptime-kuma:1"
ports = ["http"]
volumes = [
"/storage/nomad/uptime-kuma/data:/app/data"
]
}
}
}
}


@@ -1,46 +0,0 @@
job "nginx-ams" {
datacenters = ["aperture"]
type = "service"
group "ams-web" {
count = 1
network {
port "http" {
to = 80
}
}
service {
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.nginx-ams.rule=Host(`amikon.me`,`www.amikon.me`)",
"traefik.http.routers.nginx-ams.entrypoints=web,websecure",
"traefik.http.routers.nginx-ams.tls.certresolver=lets-encrypt",
]
}
task "webserver" {
driver = "docker"
config {
image = "ghcr.io/dcuams/amikon-site-v2:latest"
ports = ["http"]
}
resources {
cpu = 100
memory = 500
}
}
}
}

jobs/nginx/atlas.hcl (new file): 60 lines

@@ -0,0 +1,60 @@
job "atlas" {
datacenters = ["aperture"]
type = "service"
meta {
git-sha = ""
}
group "nginx-atlas" {
count = 1
network {
port "http" {
to = 80
}
}
service {
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.nginx-atlas.rule=Host(`redbrick.dcu.ie`) || Host(`www.redbrick.dcu.ie`) || Host(`www.rb.dcu.ie`) || Host(`rb.dcu.ie`)",
"traefik.http.routers.nginx-atlas.entrypoints=web,websecure",
"traefik.http.routers.nginx-atlas.tls.certresolver=lets-encrypt",
"traefik.http.routers.nginx-atlas.middlewares=atlas-www-redirect,redirect-user-web",
# redirect redbrick.dcu.ie/~user to user.redbrick.dcu.ie
"traefik.http.middlewares.redirect-user-web.redirectregex.regex=https://redbrick\\.dcu\\.ie/~([^/]*)/?([^/].*)?",
"traefik.http.middlewares.redirect-user-web.redirectregex.replacement=https://$1.redbrick.dcu.ie/$2",
"traefik.http.middlewares.redirect-user-web.redirectregex.permanent=true",
# redirect www.redbrick.dcu.ie to redbrick.dcu.ie
"traefik.http.middlewares.atlas-www-redirect.redirectregex.regex=^https?://www.redbrick.dcu.ie/(.*)",
"traefik.http.middlewares.atlas-www-redirect.redirectregex.replacement=https://redbrick.dcu.ie/$${1}",
"traefik.http.middlewares.atlas-www-redirect.redirectregex.permanent=true",
]
}
task "web" {
driver = "docker"
config {
image = "ghcr.io/redbrick/atlas:latest"
ports = ["http"]
force_pull = true
}
resources {
cpu = 100
memory = 50
}
}
}
}


@@ -20,10 +20,10 @@ job "cawnj-test" {
port = "http"
check {
type = "http"
path = "/"
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
timeout = "2s"
}
tags = [
"traefik.enable=true",


@@ -20,14 +20,14 @@ job "nginx-karting" {
port = "http"
check {
type = "http"
path = "/"
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.nginx-karting.rule=Host(`dcukartingsociety.ie`)",
"traefik.http.routers.nginx-karting.rule=Host(`karting.rb.dcu.ie`)",
"traefik.http.routers.nginx-karting.entrypoints=web,websecure",
"traefik.http.routers.nginx-karting.tls.certresolver=lets-encrypt"
]


@@ -20,10 +20,10 @@ job "nginx" {
port = "http"
check {
type = "http"
path = "/"
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
@@ -49,7 +49,7 @@ job "nginx" {
}
template {
source = "local/index.html"
source = "local/index.html"
destination = "local/index.html"
}
}
@@ -72,10 +72,10 @@ job "nginx" {
port = "http"
check {
type = "http"
path = "/"
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
@@ -88,7 +88,7 @@ job "nginx" {
task "glados" {
constraint {
attribute = "${attr.unique.hostname}"
value = "glados"
value = "glados"
}
driver = "docker"
@@ -105,7 +105,7 @@ job "nginx" {
}
template {
source = "local/glados.html"
source = "local/glados.html"
destination = "local/index.html"
}
}
@@ -128,10 +128,10 @@ job "nginx" {
port = "http"
check {
type = "http"
path = "/"
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
@@ -144,7 +144,7 @@ job "nginx" {
task "wheatley" {
constraint {
attribute = "${attr.unique.hostname}"
value = "wheatley"
value = "wheatley"
}
driver = "docker"
@@ -162,7 +162,7 @@ job "nginx" {
}
template {
source = "local/wheatley.html"
source = "local/wheatley.html"
destination = "local/index.html"
}
}
@@ -185,10 +185,10 @@ job "nginx" {
port = "http"
check {
type = "http"
path = "/"
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
@@ -201,7 +201,7 @@ job "nginx" {
task "chell" {
constraint {
attribute = "${attr.unique.hostname}"
value = "chell"
value = "chell"
}
driver = "docker"
@@ -219,7 +219,7 @@ job "nginx" {
}
template {
source = "local/chell.html"
source = "local/chell.html"
destination = "local/index.html"
}
}

jobs/services/api.hcl (new file, 83 lines)

@@ -0,0 +1,83 @@
job "api" {
datacenters = ["aperture"]
type = "service"
group "api" {
count = 1
network {
port "http" {
to = 80
}
}
service {
name = "api"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.api.rule=Host(`api.redbrick.dcu.ie`)",
"traefik.http.routers.api.entrypoints=web,websecure",
"traefik.http.routers.api.tls.certresolver=lets-encrypt",
]
}
task "api" {
driver = "docker"
config {
image = "ghcr.io/redbrick/api:latest"
ports = ["http"]
volumes = [
"/oldstorage:/storage",
"/oldstorage/home:/home",
"local/ldap.secret:/etc/ldap.secret",
]
auth {
username = "${DOCKER_USER}"
password = "${DOCKER_PASS}"
}
}
template {
destination = "local/.env"
env = true
change_mode = "restart"
data = <<EOH
DOCKER_USER={{ key "api/ghcr/username" }}
DOCKER_PASS={{ key "api/ghcr/password" }}
AUTH_USERNAME={{ key "api/auth/username" }}
AUTH_PASSWORD={{ key "api/auth/password" }}
LDAP_URI={{ key "api/ldap/uri" }}
LDAP_ROOTBINDDN={{ key "api/ldap/rootbinddn" }}
LDAP_SEARCHBASE={{ key "api/ldap/searchbase" }}
EMAIL_DOMAIN=redbrick.dcu.ie
EMAIL_SERVER={{ key "api/smtp/server" }}
EMAIL_PORT=587
EMAIL_USERNAME={{ key "api/smtp/username" }}
EMAIL_PASSWORD={{ key "api/smtp/password" }}
EMAIL_SENDER={{ key "api/smtp/sender" }}
EOH
}
template {
destination = "local/ldap.secret"
perms = "600"
data = "{{ key \"api/ldap/secret\" }}" # this is necessary as the secret has no EOF
}
resources {
cpu = 300
memory = 1024
}
}
}
}


@@ -0,0 +1,56 @@
job "brickbot2" {
datacenters = ["aperture"]
type = "service"
group "brickbot2" {
count = 1
task "brickbot2" {
driver = "docker"
config {
image = "ghcr.io/redbrick/brickbot2:latest"
auth {
username = "${DOCKER_USER}"
password = "${DOCKER_PASS}"
}
volumes = [
"local/ldap.secret:/etc/ldap.secret:ro",
]
}
template {
destination = "local/ldap.secret"
perms = "600"
data = "{{ key \"api/ldap/secret\" }}" # this is necessary as the secret has no EOF
}
template {
destination = "local/.env"
env = true
change_mode = "restart"
data = <<EOH
DOCKER_USER={{ key "brickbot/ghcr/username" }}
DOCKER_PASS={{ key "brickbot/ghcr/password" }}
BOT_DB={{ key "brickbot/db" }}
BOT_TOKEN={{ key "brickbot/discord/token" }}
BOT_PRIVILEGED={{ key "brickbot/discord/privileged" }}
BOT_PREFIX=.
BOT_GUILD={{ key "brickbot/discord/guild" }}
LDAP_HOST={{ key "brickbot/ldap/host" }}
SMTP_DOMAIN={{ key "brickbot/smtp/domain" }}
SMTP_HOST={{ key "brickbot/smtp/host" }}
SMTP_PORT=587
SMTP_USERNAME={{ key "brickbot/smtp/username" }}
SMTP_PASSWORD={{ key "brickbot/smtp/password" }}
SMTP_SENDER={{ key "brickbot/smtp/sender" }}
API_USERNAME={{ key "brickbot/api/username" }}
API_PASSWORD={{ key "brickbot/api/password" }}
VERIFIED_ROLE={{ key "brickbot/discord/verified_role" }}
USER=brickbot
EOH
}
}
}
}


@@ -0,0 +1,59 @@
job "github-actions-runner" {
datacenters = ["aperture"]
type = "service"
meta {
version = "2.320.0"
sha256 = "93ac1b7ce743ee85b5d386f5c1787385ef07b3d7c728ff66ce0d3813d5f46900"
}
group "github-actions" {
count = 3
spread {
attribute = "${node.unique.id}"
weight = 100
}
task "actions-runner" {
driver = "raw_exec"
# user = "nomad"
config {
command = "/bin/bash"
args = ["${NOMAD_TASK_DIR}/bootstrap.sh"]
}
template {
data = <<EOF
#!/bin/bash
export RUNNER_ALLOW_RUNASROOT=1
echo "Querying API for registration token..."
reg_token=$(curl -L \
-X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer {{ key "github/actions-runner/token" }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/orgs/redbrick/actions/runners/registration-token | jq -r '.token')
echo "Configuring runner..."
bash -c "${NOMAD_TASK_DIR}/config.sh --unattended --url https://github.com/redbrick --token ${reg_token} --name $(hostname) --labels aperture,deployment-runner --replace"
echo "Running actions runner..."
bash "${NOMAD_TASK_DIR}/run.sh"
EOF
destination = "local/bootstrap.sh"
}
artifact {
source = "https://github.com/actions/runner/releases/download/v2.320.0/actions-runner-linux-x64-2.320.0.tar.gz"
options {
checksum = "sha256:93ac1b7ce743ee85b5d386f5c1787385ef07b3d7c728ff66ce0d3813d5f46900"
}
}
}
}
}
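
The `artifact` stanza above pins the runner tarball to a sha256 before anything is executed. The same gate can be reproduced with coreutils when fetching a release by hand; the payload below is a stand-in for the real tarball, not the actual archive:

```shell
#!/bin/bash
# Sketch of the artifact checksum gate using coreutils; the payload here is
# a stand-in for the real actions-runner tarball.
set -euo pipefail
file=$(mktemp)
printf 'runner payload' > "$file"
expected=$(sha256sum "$file" | cut -d ' ' -f 1)   # in the job, this is the pinned hash from meta
if echo "${expected}  ${file}" | sha256sum -c - >/dev/null 2>&1; then
  echo "checksum ok"        # only now unpack and run config.sh / run.sh
else
  echo "checksum mismatch" >&2
  exit 1
fi
rm -f "$file"
```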


@@ -0,0 +1,50 @@
job "hedgedoc-backup" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */3 * * * *"]
prohibit_overlap = true
}
group "db-backup" {
task "postgres-backup" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/script.sh"]
}
template {
data = <<EOH
#!/bin/bash
file=/storage/backups/nomad/hedgedoc/postgresql-hedgedoc-$(date +%Y-%m-%d_%H-%M-%S).sql
mkdir -p /storage/backups/nomad/hedgedoc
alloc_id=$(nomad job status hedgedoc | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
nomad alloc exec -task hedgedoc-db $alloc_id pg_dumpall -U {{ key "hedgedoc/db/user" }} > "${file}"
find /storage/backups/nomad/hedgedoc/postgresql-hedgedoc* -ctime +3 -exec rm {} \; || true
if [ -s "$file" ]; then # check if file exists and is not empty
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "<@&585512338728419341> `PostgreSQL` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "postgres/webhook/discord" }}
fi
EOH
destination = "local/script.sh"
}
}
}
}
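
Two small shell idioms carry this backup script: `cut -d "/" -f 1` strips the suffix Nomad appends to a dispatched periodic job's name, and `[ -s … ]` is the "dump exists and is non-empty" gate that decides between success and the Discord alert. A standalone sketch — the dispatch name below is an assumed example of Nomad's `<job>/periodic-<epoch>` naming:

```shell
#!/bin/bash
# Standalone sketch of the script's two guard idioms; the dispatch name is
# an assumed example of Nomad's "<job>/periodic-<epoch>" format.
NOMAD_JOB_NAME="hedgedoc-backup/periodic-1740700800"
job_name=$(echo "${NOMAD_JOB_NAME}" | cut -d "/" -f 1)
echo "$job_name"                    # -> hedgedoc-backup

file=$(mktemp)                      # mktemp creates a zero-byte file
[ -s "$file" ] && echo "keep" || echo "alert"    # -> alert (empty dump)
printf 'SQL dump' > "$file"
[ -s "$file" ] && echo "keep" || echo "alert"    # -> keep
rm -f "$file"
```

The same pattern is reused verbatim by the members-mysql, paperless, privatebin, and vaultwarden backup jobs below.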

jobs/services/hedgedoc.hcl (new file, 121 lines)

@@ -0,0 +1,121 @@
job "hedgedoc" {
datacenters = ["aperture"]
type = "service"
group "web" {
network {
# mode = "bridge"
port "http" {
to = 3000
}
port "db" {
to = 5432
}
}
service {
name = "hedgedoc"
port = "http"
check {
type = "http"
path = "/_health"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.frontend.headers.STSSeconds=63072000",
"traefik.frontend.headers.browserXSSFilter=true",
"traefik.frontend.headers.contentTypeNosniff=true",
"traefik.enable=true",
"traefik.port=${NOMAD_PORT_http}",
"traefik.http.routers.md.entrypoints=web,websecure",
"traefik.http.routers.md.rule=Host(`md.redbrick.dcu.ie`) || Host(`md.rb.dcu.ie`)",
"traefik.http.routers.md.tls.certresolver=lets-encrypt",
]
}
task "app" {
driver = "docker"
config {
image = "quay.io/hedgedoc/hedgedoc:1.10.2"
ports = ["http"]
volumes = [
"/storage/nomad/hedgedoc/banner:/hedgedoc/public/banner",
]
}
template {
data = <<EOH
CMD_DB_URL = "postgres://{{ key "hedgedoc/db/user" }}:{{ key "hedgedoc/db/password" }}@{{ env "NOMAD_ADDR_db" }}/{{ key "hedgedoc/db/name" }}"
CMD_ALLOW_FREEURL = "false"
CMD_FORBIDDEN_NOTE_IDS = ['robots.txt', 'favicon.ico', 'api', 'build', 'css', 'docs', 'fonts', 'js', 'uploads', 'vendor', 'views', 'auth']
CMD_DOMAIN = "md.redbrick.dcu.ie"
CMD_ALLOW_ORIGIN = ["redbrick.dcu.ie", "rb.dcu.ie"]
CMD_USE_CDN = "true"
CMD_PROTOCOL_USESSL = "true"
CMD_URL_ADDPORT = "false"
CMD_LOG_LEVEL = "debug"
CMD_ENABLE_STATS_API = "true"
# Accounts
CMD_ALLOW_EMAIL_REGISTER = "false"
CMD_ALLOW_ANONYMOUS = "false"
CMD_ALLOW_ANONYMOUS_EDITS = "false"
CMD_EMAIL = "false"
CMD_LDAP_URL = "{{ key "hedgedoc/ldap/url" }}"
CMD_LDAP_SEARCHBASE = "ou=accounts,o=redbrick"
CMD_LDAP_SEARCHFILTER = "{{`(uid={{username}})`}}"
CMD_LDAP_PROVIDERNAME = "Redbrick"
CMD_LDAP_USERIDFIELD = "uidNumber"
CMD_LDAP_USERNAMEFIELD = "uid"
CMD_SESSION_SECRET = "{{ key "hedgedoc/session/secret" }}"
CMD_DEFAULT_PERMISSION = "limited"
# Security/Privacy
CMD_HSTS_PRELOAD = "true"
CMD_CSP_ENABLE = "true"
CMD_HSTS_INCLUDE_SUBDOMAINS = "true"
CMD_CSP_ADD_DISQUS = "false"
CMD_CSP_ADD_GOOGLE_ANALYTICS= "false"
CMD_CSP_ALLOW_PDF_EMBED = "true"
CMD_ALLOW_GRAVATAR = "true"
# Uploads
CMD_IMAGE_UPLOAD_TYPE = "imgur"
CMD_IMGUR_CLIENTID = "{{ key "hedgedoc/imgur/clientid" }}"
CMD_IMGUR_CLIENTSECRET = "{{ key "hedgedoc/imgur/clientsecret" }}"
EOH
destination = "local/.env"
env = true
}
}
task "hedgedoc-db" {
driver = "docker"
config {
image = "postgres:13.4-alpine"
ports = ["db"]
volumes = [
"/storage/nomad/hedgedoc:/var/lib/postgresql/data",
]
}
template {
data = <<EOH
POSTGRES_PASSWORD={{ key "hedgedoc/db/password" }}
POSTGRES_USER={{ key "hedgedoc/db/user" }}
POSTGRES_NAME={{ key "hedgedoc/db/name" }}
EOH
destination = "local/db.env"
env = true
}
}
}
}


@@ -0,0 +1,50 @@
job "members-mysql-backup" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */3 * * * *"]
prohibit_overlap = true
}
group "db-backup" {
task "mysql-backup" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/script.sh"]
}
template {
data = <<EOH
#!/bin/bash
file=/storage/backups/nomad/mysql/members/members-mysql-$(date +%Y-%m-%d_%H-%M-%S).sql
mkdir -p /storage/backups/nomad/mysql/members
alloc_id=$(nomad job status members-mysql | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
nomad alloc exec $alloc_id mariadb-dump -u root -p'{{ key "members-mysql/root/password"}}' --all-databases > "${file}"
find /storage/backups/nomad/mysql/members/members-mysql* -ctime +3 -exec rm {} \; || true
if [ -s "$file" ]; then # check if file exists and is not empty
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "<@&585512338728419341> `MySQL` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "mysql/webhook/discord" }}
fi
EOH
destination = "local/script.sh"
}
}
}
}


@@ -0,0 +1,78 @@
job "members-mysql" {
datacenters = ["aperture"]
constraint {
attribute = "${attr.unique.hostname}"
value = "wheatley"
}
group "db" {
network {
port "db" {
static = 3306
}
}
task "mariadb" {
driver = "docker"
template {
data = <<EOH
MYSQL_ROOT_PASSWORD="{{ key "members-mysql/root/password" }}"
MYSQL_USER="{{ key "members-mysql/user/username" }}"
MYSQL_PASSWORD="{{ key "members-mysql/user/password" }}"
EOH
destination = "local/file.env"
env = true
}
config {
image = "mariadb:latest"
ports = ["db"]
volumes = [
"/opt/members-mysql:/var/lib/mysql",
"local/server.cnf:/etc/mysql/mariadb.conf.d/50-server.cnf",
]
}
template {
data = <<EOH
[server]
[mariadbd]
pid-file = /run/mysqld/mysqld.pid
basedir = /usr
bind-address = 0.0.0.0
expire_logs_days = 10
character-set-server = utf8mb4
character-set-collations = utf8mb4=uca1400_ai_ci
[mariadbd]
EOH
destination = "local/server.cnf"
}
resources {
cpu = 400
memory = 800
}
service {
name = "members-mysql"
port = "db"
check {
type = "tcp"
interval = "2s"
timeout = "2s"
}
}
}
}
}

jobs/services/minio.hcl (new file, 68 lines)

@@ -0,0 +1,68 @@
job "minio" {
datacenters = ["aperture"]
type = "service"
group "minio" {
count = 1
network {
port "api" {
}
port "console" {
}
}
service {
name = "minio-console"
port = "console"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.minio-api.service=minio-api",
"traefik.http.services.minio-api.loadbalancer.server.port=${NOMAD_PORT_api}",
"traefik.http.routers.minio-api.rule=Host(`cdn.redbrick.dcu.ie`)",
"traefik.http.routers.minio-api.entrypoints=web,websecure",
"traefik.http.routers.minio-api.tls.certresolver=lets-encrypt",
"traefik.http.routers.minio-console.service=minio-console",
"traefik.http.services.minio-console.loadbalancer.server.port=${NOMAD_PORT_console}",
"traefik.http.routers.minio-console.rule=Host(`minio.rb.dcu.ie`)",
"traefik.http.routers.minio-console.entrypoints=web,websecure",
"traefik.http.routers.minio-console.tls.certresolver=lets-encrypt",
]
}
task "minio" {
driver = "docker"
config {
image = "quay.io/minio/minio"
ports = ["api", "console"]
command = "server"
args = ["/data", "--address", ":${NOMAD_PORT_api}", "--console-address", ":${NOMAD_PORT_console}"]
volumes = [
"/storage/nomad/minio:/data",
]
}
template {
destination = "local/.env"
env = true
change_mode = "restart"
data = <<EOH
MINIO_ROOT_USER={{ key "minio/root/username" }}
MINIO_ROOT_PASSWORD={{ key "minio/root/password" }}
EOH
}
}
}
}


@@ -0,0 +1,50 @@
job "paperless-backup" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */3 * * * *"]
prohibit_overlap = true
}
group "db-backup" {
task "postgres-backup" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/script.sh"]
}
template {
data = <<EOH
#!/bin/bash
file=/storage/backups/nomad/paperless/postgresql-paperless-$(date +%Y-%m-%d_%H-%M-%S).sql
mkdir -p /storage/backups/nomad/paperless
alloc_id=$(nomad job status paperless | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
nomad alloc exec -task db $alloc_id pg_dumpall -U {{ key "paperless/db/user" }} > "${file}"
find /storage/backups/nomad/paperless/postgresql-paperless* -ctime +3 -exec rm {} \; || true
if [ -s "$file" ]; then # check if file exists and is not empty
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "<@&585512338728419341> `PostgreSQL` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "postgres/webhook/discord" }}
fi
EOH
destination = "local/script.sh"
}
}
}
}

jobs/services/paperless.hcl (new file, 118 lines)

@@ -0,0 +1,118 @@
job "paperless" {
datacenters = ["aperture"]
type = "service"
group "paperless-web" {
network {
port "http" {
to = 8000
}
port "redis" {
to = 6379
}
port "db" {
to = 5432
}
}
service {
name = "paperless"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.paperless.rule=Host(`paperless.redbrick.dcu.ie`) || Host(`paperless.rb.dcu.ie`)",
"traefik.http.routers.paperless.entrypoints=websecure",
"traefik.http.routers.paperless.tls=true",
"traefik.http.routers.paperless.tls.certresolver=lets-encrypt",
"traefik.http.middlewares.paperless.headers.contentSecurityPolicy=default-src 'self'; img-src 'self' data:"
]
}
task "web" {
driver = "docker"
config {
image = "ghcr.io/paperless-ngx/paperless-ngx:latest"
ports = ["http"]
volumes = [
"/storage/nomad/paperless/consume:/usr/src/paperless/consume",
"/storage/nomad/paperless/data:/usr/src/paperless/data",
"/storage/nomad/paperless/media:/usr/src/paperless/media",
"/storage/nomad/paperless/export:/usr/src/paperless/export",
"/storage/nomad/paperless/preconsume:/usr/src/paperless/preconsume",
]
}
template {
data = <<EOH
PAPERLESS_REDIS = "redis://{{ env "NOMAD_ADDR_redis" }}"
PAPERLESS_DBHOST = "{{ env "NOMAD_IP_db" }}"
PAPERLESS_DBPORT = "{{ env "NOMAD_HOST_PORT_db" }}"
PAPERLESS_DBPASS={{ key "paperless/db/password" }}
PAPERLESS_DBUSER={{ key "paperless/db/user" }}
PAPERLESS_DBNAME={{ key "paperless/db/name" }}
PAPERLESS_SECRETKEY={{ key "paperless/secret_key" }}
PAPERLESS_URL=https://paperless.redbrick.dcu.ie
PAPERLESS_ADMIN_USER={{ key "paperless/admin/user" }}
PAPERLESS_ADMIN_PASSWORD={{ key "paperless/admin/password" }}
PAPERLESS_ALLOWED_HOSTS="paperless.redbrick.dcu.ie,paperless.rb.dcu.ie,10.10.0.4,10.10.0.5,10.10.0.6" # allow internal aperture IPs for health check
PAPERLESS_CONSUMER_POLLING=1
EOH
destination = "local/.env"
env = true
}
# PAPERLESS_PRE_CONSUME_SCRIPT={{ key "paperless/env/preconsume-script" }}
resources {
cpu = 800
memory = 1000
}
}
task "broker" {
driver = "docker"
config {
image = "docker.io/library/redis:7"
ports = ["redis"]
}
resources {
cpu = 300
memory = 50
}
}
task "db" {
driver = "docker"
config {
image = "postgres:16-alpine"
ports = ["db"]
volumes = [
"/storage/nomad/paperless/db:/var/lib/postgresql/data"
]
}
template {
data = <<EOH
POSTGRES_PASSWORD={{ key "paperless/db/password" }}
POSTGRES_USER={{ key "paperless/db/user" }}
POSTGRES_NAME={{ key "paperless/db/name" }}
EOH
destination = "local/db.env"
env = true
}
}
}
}

jobs/services/plausible.hcl (new file, 174 lines)

@@ -0,0 +1,174 @@
job "plausible" {
datacenters = ["aperture"]
type = "service"
group "web" {
network {
port "http" {
to = 8000
}
port "clickhouse" {
static = 8123
}
port "db" {
static = 5432
}
}
task "app" {
service {
name = "plausible"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.plausible.rule=Host(`plausible.redbrick.dcu.ie`)",
"traefik.http.routers.plausible.entrypoints=web,websecure",
"traefik.http.routers.plausible.tls.certresolver=lets-encrypt"
]
}
driver = "docker"
config {
image = "ghcr.io/plausible/community-edition:v2.1"
ports = ["http"]
volumes = [
"/storage/nomad/${NOMAD_JOB_NAME}/${NOMAD_TASK_NAME}:/var/lib/plausible"
]
command = "/bin/sh"
args = ["-c", "sleep 10 && /entrypoint.sh db migrate && /entrypoint.sh run"]
}
template {
data = <<EOH
TMPDIR=/var/lib/plausible/tmp
BASE_URL=https://plausible.redbrick.dcu.ie
SECRET_KEY_BASE={{ key "plausible/secret" }}
TOTP_VAULT_KEY={{ key "plausible/totp/key" }}
# Maxmind/GeoIP settings (for regions/cities)
MAXMIND_LICENSE_KEY={{ key "plausible/maxmind/license" }}
MAXMIND_EDITION=GeoLite2-City
# Google search console integration
GOOGLE_CLIENT_ID={{ key "plausible/google/client_id" }}
GOOGLE_CLIENT_SECRET={{ key "plausible/google/client_secret" }}
# Database settings
DATABASE_URL=postgres://{{ key "plausible/db/user" }}:{{ key "plausible/db/password" }}@{{ env "NOMAD_ADDR_db" }}/{{ key "plausible/db/name" }}
CLICKHOUSE_DATABASE_URL=http://{{ env "NOMAD_ADDR_clickhouse" }}/plausible_events_db
# Email settings
MAILER_NAME="Redbrick Plausible"
MAILER_EMAIL={{ key "plausible/smtp/from" }}
MAILER_ADAPTER=Bamboo.SMTPAdapter
SMTP_HOST_ADDR={{ key "plausible/smtp/host" }}
SMTP_HOST_PORT={{ key "plausible/smtp/port" }}
SMTP_USER_NAME={{ key "plausible/smtp/user" }}
SMTP_USER_PWD={{ key "plausible/smtp/password" }}
DISABLE_REGISTRATION=invite_only
EOH
destination = "local/file.env"
env = true
}
resources {
memory = 1000
}
}
task "db" {
driver = "docker"
config {
image = "postgres:17-alpine"
ports = ["db"]
volumes = [
"/storage/nomad/${NOMAD_JOB_NAME}/${NOMAD_TASK_NAME}:/var/lib/postgresql/data",
]
}
template {
data = <<EOH
POSTGRES_PASSWORD={{ key "plausible/db/password" }}
POSTGRES_USER={{ key "plausible/db/user" }}
POSTGRES_NAME={{ key "plausible/db/name" }}
EOH
destination = "local/db.env"
env = true
}
}
task "clickhouse" {
service {
name = "plausible-clickhouse"
port = "clickhouse"
}
driver = "docker"
config {
image = "clickhouse/clickhouse-server:24.3.3.102-alpine"
ports = ["clickhouse"]
volumes = [
"/storage/nomad/${NOMAD_JOB_NAME}/${NOMAD_TASK_NAME}:/var/lib/clickhouse",
"local/clickhouse.xml:/etc/clickhouse-server/config.d/logging.xml:ro",
"local/clickhouse-user-config.xml:/etc/clickhouse-server/users.d/logging.xml:ro"
]
}
template {
data = <<EOH
<clickhouse>
<logger>
<level>warning</level>
<console>true</console>
</logger>
<!-- Stop all the unnecessary logging -->
<query_thread_log remove="remove"/>
<query_log remove="remove"/>
<text_log remove="remove"/>
<trace_log remove="remove"/>
<metric_log remove="remove"/>
<asynchronous_metric_log remove="remove"/>
<session_log remove="remove"/>
<part_log remove="remove"/>
</clickhouse>
EOH
destination = "local/clickhouse.xml"
}
template {
data = <<EOH
<clickhouse>
<profiles>
<default>
<log_queries>0</log_queries>
<log_query_threads>0</log_query_threads>
</default>
</profiles>
</clickhouse>
EOH
destination = "local/clickhouse-user-config.xml"
}
resources {
memory = 1000
}
}
}
}


@@ -0,0 +1,50 @@
job "privatebin-backup" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */3 * * * *"]
prohibit_overlap = true
}
group "db-backup" {
task "postgres-backup" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/script.sh"]
}
template {
data = <<EOH
#!/bin/bash
file=/storage/backups/nomad/privatebin/postgresql-privatebin-$(date +%Y-%m-%d_%H-%M-%S).sql
mkdir -p /storage/backups/nomad/privatebin
alloc_id=$(nomad job status privatebin | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
nomad alloc exec -task db $alloc_id pg_dumpall -U {{ key "privatebin/db/user" }} > "${file}"
find /storage/backups/nomad/privatebin/postgresql-privatebin* -ctime +3 -exec rm {} \; || true
if [ -s "$file" ]; then # check if file exists and is not empty
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "<@&585512338728419341> `PostgreSQL` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "postgres/webhook/discord" }}
fi
EOH
destination = "local/script.sh"
}
}
}
}


@@ -0,0 +1,218 @@
job "privatebin" {
datacenters = ["aperture"]
type = "service"
group "privatebin" {
count = 1
network {
port "http" {
to = 8080
}
port "db" {
to = 5432
}
}
service {
name = "privatebin"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.privatebin.rule=Host(`paste.redbrick.dcu.ie`) || Host(`paste.rb.dcu.ie`)",
"traefik.http.routers.privatebin.entrypoints=web,websecure",
"traefik.http.routers.privatebin.tls.certresolver=lets-encrypt",
]
}
task "privatebin" {
driver = "docker"
config {
image = "privatebin/nginx-fpm-alpine:stable"
ports = ["http"]
volumes = [
"local/conf.php:/srv/data/conf.php",
]
}
env {
TZ = "Europe/Dublin"
PHP_TZ = "Europe/Dublin"
CONFIG_PATH = "/srv/data/"
}
template {
destination = "local/conf.php"
data = <<EOH
[main]
name = "Redbrick PasteBin"
basepath = "https://paste.redbrick.dcu.ie/"
discussion = true
opendiscussion = false
password = true
fileupload = true
burnafterreadingselected = false
defaultformatter = "markdown"
; (optional) set a syntax highlighting theme, as found in css/prettify/
syntaxhighlightingtheme = "sons-of-obsidian"
; size limit per paste or comment in bytes, defaults to 10 Mebibytes
sizelimit = 10485760
; template to include, default is "bootstrap" (tpl/bootstrap.php)
template = "bootstrap-dark"
; (optional) info text to display
; use single, instead of double quotes for HTML attributes
;info = "More information on the <a href='https://privatebin.info/'>project page</a>."
; (optional) notice to display
; notice = "Note: Distro is a Goombean."
languageselection = false
languagedefault = "en"
; (optional) URL shortener address to offer after a new paste is created.
; It is suggested to only use this with self-hosted shorteners as this will leak
; the pastes encryption key.
urlshortener = "https://s.rb.dcu.ie/rest/v1/short-urls/shorten?apiKey={{ key "privatebin/shlink/api" }}&format=txt&longUrl="
qrcode = true
email = true
; Can be set to one these values:
; "none" / "identicon" (default) / "jdenticon" / "vizhash".
icon = "identicon"
; Content Security Policy headers allow a website to restrict what sources are
; allowed to be accessed in its context. You need to change this if you added
; custom scripts from third-party domains to your templates, e.g. tracking
; scripts or run your site behind certain DDoS-protection services.
; Check the documentation at https://content-security-policy.com/
; Notes:
; - If you use a bootstrap theme, you can remove the allow-popups from the
; sandbox restrictions.
; - By default this disallows to load images from third-party servers, e.g. when
; they are embedded in pastes. If you wish to allow that, you can adjust the
; policy here. See https://github.com/PrivateBin/PrivateBin/wiki/FAQ#why-does-not-it-load-embedded-images
; for details.
; - The 'unsafe-eval' is used in two cases; to check if the browser supports
; async functions and display an error if not and for Chrome to enable
; webassembly support (used for zlib compression). You can remove it if Chrome
; doesn't need to be supported and old browsers don't need to be warned.
; cspheader = "default-src 'none'; base-uri 'self'; form-action 'none'; manifest-src 'self'; connect-src * blob:; script-src 'self' 'unsafe-eval'; style-src 'self'; font-src 'self'; frame-ancestors 'none'; img-src 'self' data: blob:; media-src blob:; object-src blob:; sandbox allow-same-origin allow-scripts allow-forms allow-popups allow-modals allow-downloads"
zerobincompatibility = false
httpwarning = true
compression = "zlib"
[expire]
; make sure the value exists in [expire_options]
default = "1week"
[expire_options]
5min = 300
10min = 600
1hour = 3600
1day = 86400
1week = 604800
2week = 1209600
; Well this is not *exactly* one month, it's 30 days:
1month = 2592000
1year = 31536000
never = 0
[formatter_options]
plaintext = "Plain Text"
markdown = "Markdown"
syntaxhighlighting = "Source Code"
[traffic]
; time limit between calls from the same IP address in seconds
; Set this to 0 to disable rate limiting.
limit = 10
; (optional) Set IPs addresses (v4 or v6) or subnets (CIDR) which are exempted
; from the rate-limit. Invalid IPs will be ignored. If multiple values are to
; be exempted, the list needs to be comma separated. Leave unset to disable
; exemptions.
; exempted = "1.2.3.4,10.10.10/24"
; (optional) If you want only some source IP addresses (v4 or v6) or subnets
; (CIDR) to be allowed to create pastes, set these here. Invalid IPs will be
; ignored. If multiple values are to be exempted, the list needs to be comma
; separated. Leave unset to allow anyone to create pastes.
; creators = "1.2.3.4,10.10.10/24"
; (optional) if your website runs behind a reverse proxy or load balancer,
; set the HTTP header containing the visitors IP address, i.e. X_FORWARDED_FOR
; header = "X_FORWARDED_FOR"
[purge]
; minimum time limit between two purgings of expired pastes, it is only
; triggered when pastes are created
; Set this to 0 to run a purge every time a paste is created.
limit = 300
; maximum amount of expired pastes to delete in one purge
; Set this to 0 to disable purging. Set it higher, if you are running a large
; site
batchsize = 10
[model]
class = Database
[model_options]
dsn = "pgsql:host={{ env "NOMAD_ADDR_db" }};dbname={{ key "privatebin/db/name" }}"
tbl = "{{ key "privatebin/db/name" }}" ; table prefix
usr = "{{ key "privatebin/db/user" }}"
pwd = "{{ key "privatebin/db/password" }}"
opt[12] = true ; PDO::ATTR_PERSISTENT ; use persistent connections - default
EOH
}
}
task "db" {
driver = "docker"
config {
image = "postgres:17-alpine"
ports = ["db"]
volumes = [
"/storage/nomad/${NOMAD_JOB_NAME}/${NOMAD_TASK_NAME}:/var/lib/postgresql/data",
]
}
template {
data = <<EOH
POSTGRES_PASSWORD={{ key "privatebin/db/password" }}
POSTGRES_USER={{ key "privatebin/db/user" }}
POSTGRES_NAME={{ key "privatebin/db/name" }}
EOH
destination = "local/db.env"
env = true
}
}
}
}

jobs/services/shlink.hcl (new file, 93 lines)

@@ -0,0 +1,93 @@
job "shlink" {
datacenters = ["aperture"]
type = "service"
group "web" {
network {
port "api" {
to = 8080
}
port "web" {
to = 8080
}
}
service {
name = "shlink"
port = "api"
tags = [
"traefik.enable=true",
"traefik.http.routers.shlink-api.entrypoints=web,websecure",
"traefik.http.routers.shlink-api.rule=Host(`s.rb.dcu.ie`)",
"traefik.http.routers.shlink-api.tls=true",
"traefik.http.routers.shlink-api.tls.certresolver=lets-encrypt",
]
}
task "shlink" {
driver = "docker"
config {
image = "shlinkio/shlink"
ports = ["api"]
}
template {
data = <<EOH
DEFAULT_DOMAIN=s.rb.dcu.ie
IS_HTTPS_ENABLED=true
DB_DRIVER=postgres
DB_USER={{ key "shlink/db/user" }}
DB_PASSWORD={{ key "shlink/db/password" }}
DB_NAME={{ key "shlink/db/name" }}
DB_HOST=postgres.service.consul
GEOLITE_LICENSE_KEY={{ key "shlink/geolite/key" }}
EOH
destination = "local/file.env"
env = true
}
resources {
memory = 1000
}
}
# task "shlink-web-client" {
# driver = "docker"
#
# config {
# image = "shlinkio/shlink-web-client"
# ports = ["web"]
# }
#
# template {
# data = <<EOH
#SHLINK_SERVER_URL=https://s.rb.dcu.ie
#SHLINK_API_KEY={{ key "shlink/api/key" }}
#EOH
# destination = "local/file.env"
# env = true
# }
#
#
#
# service {
# name = "shlink"
# port = "api"
#
# tags = [
# "traefik.enable=true",
# "traefik.http.routers.shlink-web.entrypoints=web,websecure",
# "traefik.http.routers.shlink-web.rule=Host(`shlink.rb.dcu.ie`)",
# "traefik.http.routers.shlink-web.tls=true",
# "traefik.http.routers.shlink-web.tls.certresolver=lets-encrypt",
# ]
# }
# resources {
# memory = 500
# }
# }
}
}


@@ -0,0 +1,50 @@
job "vaultwarden-backup" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */3 * * * *"]
prohibit_overlap = true
}
group "db-backup" {
task "postgres-backup" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/script.sh"]
}
template {
data = <<EOH
#!/bin/bash
file=/storage/backups/nomad/vaultwarden/postgresql-vaultwarden-$(date +%Y-%m-%d_%H-%M-%S).sql
mkdir -p /storage/backups/nomad/vaultwarden
alloc_id=$(nomad job status vaultwarden | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
nomad alloc exec -task db $alloc_id pg_dumpall -U {{ key "vaultwarden/db/user" }} > "${file}"
find /storage/backups/nomad/vaultwarden/postgresql-vaultwarden* -ctime +3 -exec rm {} \; || true
if [ -s "$file" ]; then # check if file exists and is not empty
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "<@&585512338728419341> `PostgreSQL` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "postgres/webhook/discord" }}
fi
EOH
destination = "local/script.sh"
}
}
}
}


@@ -0,0 +1,95 @@
job "vaultwarden" {
datacenters = ["aperture"]
type = "service"
group "vaultwarden" {
count = 1
network {
port "http" {
to = 80
}
port "db" {
to = 5432
}
}
service {
name = "vaultwarden"
port = "http"
tags = [
"traefik.enable=true",
"traefik.http.routers.vaultwarden.rule=Host(`vault.redbrick.dcu.ie`)",
"traefik.http.routers.vaultwarden.entrypoints=websecure",
"traefik.http.routers.vaultwarden.tls.certresolver=lets-encrypt",
]
}
task "vaultwarden" {
driver = "docker"
config {
image = "vaultwarden/server:latest-alpine"
ports = ["http"]
volumes = [
"/storage/nomad/${NOMAD_JOB_NAME}:/data",
"/etc/localtime:/etc/localtime:ro"
]
}
template {
data = <<EOF
DOMAIN=https://vault.redbrick.dcu.ie
DATABASE_URL=postgresql://{{ key "vaultwarden/db/user" }}:{{ key "vaultwarden/db/password" }}@{{ env "NOMAD_ADDR_db" }}/{{ key "vaultwarden/db/name" }}
SIGNUPS_ALLOWED=false
INVITATIONS_ALLOWED=true
# This is an Argon2 hash of the admin token, not the token itself; Vaultwarden expects the hashed form here.
ADMIN_TOKEN={{ key "vaultwarden/admin/hash" }}
SMTP_HOST={{ key "vaultwarden/smtp/host" }}
SMTP_FROM={{ key "vaultwarden/smtp/from" }}
SMTP_PORT={{ key "vaultwarden/smtp/port" }}
SMTP_SECURITY=starttls
SMTP_USERNAME={{ key "vaultwarden/smtp/username" }}
SMTP_PASSWORD={{ key "vaultwarden/smtp/password" }}
EOF
destination = "local/env"
env = true
}
# These yubico variables are not necessary for yubikey support, only to verify the keys with yubico.
#YUBICO_CLIENT_ID={{ key "vaultwarden/yubico/client_id" }}
#YUBICO_SECRET_KEY={{ key "vaultwarden/yubico/secret_key" }}
resources {
cpu = 500
memory = 500
}
}
task "db" {
driver = "docker"
config {
image = "postgres:17-alpine"
ports = ["db"]
volumes = [
"/storage/nomad/${NOMAD_JOB_NAME}/${NOMAD_TASK_NAME}:/var/lib/postgresql/data",
]
}
template {
data = <<EOH
POSTGRES_PASSWORD={{ key "vaultwarden/db/password" }}
POSTGRES_USER={{ key "vaultwarden/db/user" }}
POSTGRES_NAME={{ key "vaultwarden/db/name" }}
EOH
destination = "local/db.env"
env = true
}
}
}
}
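Since `ADMIN_TOKEN` holds a hash rather than the plaintext token, the value under `vaultwarden/admin/hash` has to be generated once, out of band. A hedged sketch of that workflow: the token generation below is dependency-free, while the hashing step is shown only as a comment because it assumes Docker and the Vaultwarden image are available locally.

```shell
# Generate a strong plaintext admin token; this is what you paste into
# the /admin login page. /dev/urandom + base64 keeps it dependency-free.
token=$(head -c 48 /dev/urandom | base64 | tr -d '\n')
echo "plaintext token length: ${#token}"

# The value stored in Consul under vaultwarden/admin/hash must be the
# Argon2 hash of this token, not the token itself. One way (assumed
# invocation, requires Docker) is Vaultwarden's bundled helper, which
# prompts for the token and prints the hash:
#   docker run --rm -it vaultwarden/server:latest-alpine /vaultwarden hash
```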

jobs/services/wetty.hcl Normal file
View file

@ -0,0 +1,52 @@
job "wetty" {
datacenters = ["aperture"]
type = "service"
group "wetty" {
count = 1
network {
port "http" {
to = 3000
}
}
service {
name = "wetty"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.wetty.rule=Host(`wetty.rb.dcu.ie`) || Host(`wetty.redbrick.dcu.ie`) || Host(`term.redbrick.dcu.ie`) || Host(`anyterm.redbrick.dcu.ie`) || Host(`ajaxterm.redbrick.dcu.ie`)",
"traefik.http.routers.wetty.entrypoints=web,websecure",
"traefik.http.routers.wetty.tls.certresolver=lets-encrypt",
]
}
task "wetty" {
driver = "docker"
config {
image = "wettyoss/wetty"
ports = ["http"]
}
template {
destination = "local/.env"
env = true
data = <<EOH
SSHHOST={{ key "wetty/ssh/host" }}
SSHPORT=22
BASE=/
EOH
}
}
}
}

View file

@ -0,0 +1,174 @@
<?php
# Protect against web entry
if ( !defined( 'MEDIAWIKI' ) ) {
exit;
}
$wgSitename = "Redbrick Wiki";
$wgScriptPath = "";
$wgArticlePath = "/$1";
$wgUsePathInfo = true;
$wgScriptExtension = ".php";
$wgServer = "https://{{ env "NOMAD_META_domain" }}";
## The URL path to static resources (images, scripts, etc.)
$wgResourceBasePath = $wgScriptPath;
$wgLogo = "$wgResourceBasePath/Resources/assets/logo.png";
$wgFavicon = "$wgResourceBasePath/Resources/assets/favicon.ico";
$wgAllowExternalImages = true;
## UPO: this is also a user preference option
$wgEnableEmail = false;
$wgEnableUserEmail = false; # UPO
$wgEmergencyContact = "{{ key "mediawiki/mail/emergency/contact" }}";
$wgPasswordSender = "{{ key "mediawiki/mail/password/sender" }}";
$wgEnotifUserTalk = false; # UPO
$wgEnotifWatchlist = false; # UPO
$wgEmailAuthentication = true;
## Database settings
$wgDBtype = "mysql";
$wgDBserver = "{{ env "NOMAD_ALLOC_IP_db" }}";
$wgDBport = "{{ env "NOMAD_ALLOC_PORT_db" }}";
$wgDBname = "{{ key "mediawiki/db/name" }}";
$wgDBuser = "{{ key "mediawiki/db/username" }}";
$wgDBpassword = "{{ key "mediawiki/db/password" }}";
# MySQL specific settings
$wgDBprefix = "rbwiki_";
# MySQL table options to use during installation or update
$wgDBTableOptions = "ENGINE=InnoDB, DEFAULT CHARSET=utf8mb4";
## Shared memory settings
$wgMainCacheType = CACHE_NONE;
$wgMemCachedServers = [];
$wgEnableUploads = true;
$wgUseImageMagick = true;
$wgImageMagickConvertCommand = "/usr/bin/convert";
$wgUploadPath = "$wgScriptPath/images";
$wgUploadDirectory = "{$IP}/images";
$wgHashedUploadDirectory = true;
$wgDirectoryMode = 0755;
umask(0022);
# InstantCommons allows wiki to use images from https://commons.wikimedia.org
$wgUseInstantCommons = false;
$wgPingback = false;
$wgShellLocale = "C.UTF-8";
$wgLanguageCode = "en";
$wgSecretKey = "{{ key "mediawiki/key/secret" }}";
# Changing this will log out all existing sessions.
$wgAuthenticationTokenVersion = "1";
# Site upgrade key. Must be set to a string (default provided) to turn on the
# web installer while LocalSettings.php is in place
$wgUpgradeKey = "{{ key "mediawiki/key/upgrade" }}";
$wgRightsPage = ""; # Set to the title of a wiki page that describes your license/copyright
$wgRightsUrl = "";
$wgRightsText = "";
$wgRightsIcon = "";
$wgDiff3 = "/usr/bin/diff3";
$wgDefaultSkin = "vector";
$wgDefaultMobileSkin = 'vector-2022';
# Enabled skins.
wfLoadSkin( 'Vector' );
wfLoadSkin( 'Citizen' );
wfLoadSkin( 'Timeless' );
wfLoadSkin( 'MinervaNeue' );
wfLoadSkin( 'Medik' );
$wgCitizenThemeColor = "#a81e22";
$wgCitizenShowPageTools = "permission";
$wgCitizenSearchDescriptionSource = "pagedescription";
$wgMedikColor = "#a81e22";
$wgMedikShowLogo = "main";
$wgLocalisationUpdateDirectory = "$IP/cache";
# load extensions
wfLoadExtension( 'HitCounters' );
wfLoadExtension( 'LDAPProvider' );
wfLoadExtension( 'LDAPAuthentication2' );
wfLoadExtension( 'PluggableAuth' );
$wgPluggableAuth_ButtonLabel = "Redbrick Log In";
wfLoadExtension( 'LDAPAuthorization' );
wfLoadExtension( 'OpenGraphMeta' );
wfLoadExtension( 'Description2' );
$wgEnableMetaDescriptionFunctions = true;
wfLoadExtension( 'PageImages' );
$wgPageImagesOpenGraphFallbackImage = $wgLogo;
wfLoadExtension( 'Plausible' );
$wgPlausibleDomain = "https://plausible.redbrick.dcu.ie";
$wgPlausibleDomainKey = "{{ env "NOMAD_META_domain" }}";
$wgPlausibleTrackOutboundLinks = true;
$wgPlausibleTrackLoggedIn = true;
$wgPlausibleTrack404 = true;
$wgPlausibleTrackSearchInput = true;
$wgPlausibleTrackCitizenSearchLinks = true;
$wgPlausibleTrackCitizenMenuLinks = true;
wfLoadExtension( 'WikiMarkdown' );
$wgAllowMarkdownExtra = true;
$wgAllowMarkdownExtended = true;
wfLoadExtension( 'RSS' );
wfLoadExtension( 'SyntaxHighlight_GeSHi' );
wfLoadExtension( 'WikiEditor' );
wfLoadExtension( 'MobileFrontend' );
$LDAPProviderDomainConfigs = "/etc/mediawiki/ldapprovider.json";
$wgPluggableAuth_Config['Redbrick Log In'] = [
'plugin' => 'LDAPAuthentication2',
'data' => [
'domain' => 'LDAP'
],
];
# RBOnly Namespace
# To allow semi-public pages
$wgExtraNamespaces = array(100 => "RBOnly", 101 => "RBOnly_talk");
$wgNamespacesWithSubpages = array( -1 => 0, 0 => 0, 1 => 1, 2 => 1, 3 => 1, 4 => 0, 5 => 1, 6 => 0, 7 => 1, 8 => 0, 9 => 1, 10 => 0, 11 => 1,100 => 1,101 => 1);
$wgNamespacesToBeSearchedDefault = array( -1 => 0, 0 => 1, 1 => 0, 2 => 0, 3 => 0, 4 => 0, 5 => 0, 6 => 0, 7 => 0, 8 => 0, 9 => 0, 10 => 0, 11 => 0,100 => 0,101 => 0);
$wgNonincludableNamespaces[] = 100;
$wgGroupPermissions['*']['readrbonly'] = false;
$wgGroupPermissions['sysop']['readrbonly'] = true;
$wgNamespaceProtection[ 100 ] = array( 'readrbonly' );
# group permissions
$wgGroupPermissions['*']['autocreateaccount'] = true;
$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['*']['read'] = true;
$wgGroupPermissions['*']['edit'] = false;
# Exclude user group page views from counting.
$wgGroupPermissions['sysop']['hitcounter-exempt'] = true;
# When set to true, it adds the PageId to the special page "PopularPages". The default value is false.
$wgEnableAddPageId = false;
# When set to true, it adds the TextLength to the special page "PopularPages". The default value is false.
$wgEnableAddTextLength = true;
# debug logs
# $wgDebugDumpSql = true;
$wgShowExceptionDetails = true;
$wgShowDBErrorBacktrace = true;
$wgShowSQLErrors = true;
$wgDebugLogFile = "/dev/stderr";

View file

@ -0,0 +1,89 @@
job "mediawiki-backup" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */3 * * * *"]
prohibit_overlap = true
}
group "db-backup" {
task "mysql-backup" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/mysql-backup.sh"]
}
template {
data = <<EOH
#!/bin/bash
file=/storage/backups/nomad/wiki/mysql/rbwiki-mysql-$(date +%Y-%m-%d_%H-%M-%S).sql
mkdir -p /storage/backups/nomad/wiki/mysql
alloc_id=$(nomad job status mediawiki | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
nomad alloc exec -task rbwiki-db $alloc_id mariadb-dump -u {{ key "mediawiki/db/username" }} -p'{{ key "mediawiki/db/password"}}' {{ key "mediawiki/db/name" }} > "${file}"
find /storage/backups/nomad/wiki/mysql/rbwiki-mysql* -ctime +30 -exec rm {} \; || true
if [ -s "$file" ]; then # check if file exists and is not empty
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "<@&585512338728419341> `MySQL` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "mysql/webhook/discord" }}
fi
EOH
destination = "local/mysql-backup.sh"
}
}
}
group "xml-dump" {
task "xml-dump" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/xml-dump.sh"]
}
template {
data = <<EOH
#!/bin/bash
file=/storage/backups/nomad/wiki/xml/rbwiki-dump-$(date +%Y-%m-%d_%H-%M-%S).xml
mkdir -p /storage/backups/nomad/wiki/xml
alloc_id=$(nomad job status mediawiki | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
nomad alloc exec -task rbwiki-php $alloc_id /usr/local/bin/php /var/www/html/maintenance/dumpBackup.php --full --include-files --uploads > "${file}"
find /storage/backups/nomad/wiki/xml/rbwiki-dump* -ctime +3 -exec rm {} \; || true
if [ -n "$(find ${file} -prune -size +100000000c)" ]; then # check the dump exists and is larger than ~100 MB
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "<@&585512338728419341> `dumpBackup.php` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "mysql/webhook/discord" }}
fi
EOH
destination = "local/xml-dump.sh"
}
}
}
}
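Unlike the MySQL task, the XML dump is judged by size rather than mere non-emptiness: `find "$file" -prune -size +100000000c` prints the path only when the file exceeds ~100 MB, so a truncated dump still triggers the alert. A self-contained sketch of that gate with a scaled-down threshold:

```shell
#!/bin/bash
# Sketch of the size gate used by the XML dump check: find -prune -size +Nc
# prints the path only when the file is strictly larger than N bytes.
# The real job uses +100000000c (~100 MB); a 10-byte threshold is used
# here so the demo runs anywhere.
f=$(mktemp)
printf '0123456789ABCDEF' > "$f"    # 16-byte stand-in for the dump

if [ -n "$(find "$f" -prune -size +10c)" ]; then
  echo "dump looks big enough"
else
  echo "dump too small -- would alert Discord"
fi
```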

View file

@ -0,0 +1,270 @@
job "mediawiki" {
datacenters = ["aperture"]
type = "service"
meta {
domain = "wiki.redbrick.dcu.ie"
}
group "rbwiki" {
count = 1
network {
mode = "bridge"
port "http" {
to = 80
}
port "fpm" {
to = 9000
}
port "db" {
to = 3306
}
}
service {
name = "rbwiki-web"
port = "http"
check {
type = "http"
path = "/Main_Page"
interval = "10s"
timeout = "5s"
}
tags = [
"traefik.enable=true",
"traefik.port=${NOMAD_PORT_http}",
"traefik.http.routers.rbwiki.rule=Host(`${NOMAD_META_domain}`) || Host(`wiki.rb.dcu.ie`)",
"traefik.http.routers.rbwiki.entrypoints=web,websecure",
"traefik.http.routers.rbwiki.tls.certresolver=lets-encrypt",
"traefik.http.routers.rbwiki.middlewares=rbwiki-redirect-root, rbwiki-redirect-mw",
"traefik.http.middlewares.rbwiki-redirect-root.redirectregex.regex=^https://wiki\\.redbrick\\.dcu\\.ie/?$",
"traefik.http.middlewares.rbwiki-redirect-root.redirectregex.replacement=https://wiki.redbrick.dcu.ie/Main_Page",
"traefik.http.middlewares.rbwiki-redirect-mw.redirectregex.regex=https://wiki\\.redbrick\\.dcu\\.ie/Mw/(.*)",
"traefik.http.middlewares.rbwiki-redirect-mw.redirectregex.replacement=https://wiki.redbrick.dcu.ie/$1",
]
}
task "rbwiki-nginx" {
driver = "docker"
config {
image = "nginx:alpine"
ports = ["http"]
volumes = [
"local/nginx.conf:/etc/nginx/nginx.conf",
"/storage/nomad/mediawiki/extensions:/var/www/html/extensions",
"/storage/nomad/mediawiki/images:/var/www/html/images",
"/storage/nomad/mediawiki/skins:/var/www/html/skins",
"/storage/nomad/mediawiki/resources/assets:/var/www/html/Resources/assets",
]
}
resources {
cpu = 200
memory = 100
}
template {
data = <<EOH
# user www-data www-data;
error_log /dev/stderr error;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
server_tokens off;
error_log /dev/stderr error;
access_log /dev/stdout;
charset utf-8;
server {
server_name {{ env "NOMAD_META_domain" }};
listen 80;
listen [::]:80;
root /var/www/html;
index index.php index.html index.htm;
client_max_body_size 5m;
client_body_timeout 60;
# MediaWiki short URLs
location / {
try_files $uri $uri/ @rewrite;
}
location @rewrite {
rewrite ^/(.*)$ /index.php?title=$1&$args;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|otf|eot)$ {
try_files $uri /index.php;
expires max;
log_not_found off;
}
# Pass the PHP scripts to FastCGI server
location ~ \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass {{ env "NOMAD_HOST_ADDR_fpm" }};
fastcgi_index index.php;
}
location ~ /\.ht {
deny all;
}
}
}
EOH
destination = "local/nginx.conf"
}
}
task "rbwiki-php" {
driver = "docker"
config {
image = "ghcr.io/wizzdom/mediawiki-fpm-ldap-alpine:latest"
ports = ["fpm"]
volumes = [
"/storage/nomad/mediawiki/extensions:/var/www/html/extensions",
"/storage/nomad/mediawiki/images:/var/www/html/images",
"/storage/nomad/mediawiki/skins:/var/www/html/skins",
"/storage/nomad/mediawiki/resources/assets:/var/www/html/Resources/assets",
"local/LocalSettings.php:/var/www/html/LocalSettings.php",
"local/ldapprovider.json:/etc/mediawiki/ldapprovider.json"
]
}
resources {
cpu = 4000
memory = 1200
}
template {
data = <<EOH
{
"LDAP": {
"authorization": {
"rules": {
"groups": {
"required": []
}
}
},
"connection": {
"server": "{{ key "mediawiki/ldap/server" }}",
"user": "{{ key "mediawiki/ldap/user" }}",
"pass": "{{ key "mediawiki/ldap/password" }}",
"options": {
"LDAP_OPT_DEREF": 1
},
"grouprequest": "MediaWiki\\Extension\\LDAPProvider\\UserGroupsRequest\\GroupMemberUid::factory",
"basedn": "o=redbrick",
"groupbasedn": "ou=groups,o=redbrick",
"userbasedn": "ou=accounts,o=redbrick",
"searchattribute": "uid",
"searchstring": "uid=USER-NAME,ou=accounts,o=redbrick",
"usernameattribute": "uid",
"realnameattribute": "cn",
"emailattribute": "altmail"
}
}
}
EOH
destination = "local/ldapprovider.json"
}
template {
data = file("LocalSettings.php")
destination = "local/LocalSettings.php"
}
}
service {
name = "rbwiki-db"
port = "db"
check {
name = "mariadb_probe"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
task "rbwiki-db" {
driver = "docker"
config {
image = "mariadb"
ports = ["db"]
volumes = [
"/storage/nomad/mediawiki/db:/var/lib/mysql",
"/oldstorage/wiki_backups:/wiki-backups/backup",
"local/conf.cnf:/etc/mysql/mariadb.conf.d/50-server.cnf",
]
}
template {
data = <<EOH
[mysqld]
# Ensure full UTF-8 support
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
skip-character-set-client-handshake
# Fix 1000-byte key length issue
innodb_large_prefix = 1
innodb_file_format = Barracuda
innodb_file_per_table = 1
innodb_default_row_format = dynamic
# Performance tuning (adjust to this host's RAM and workload)
max_connections = 100
key_buffer_size = 2G
query_cache_size = 0
innodb_buffer_pool_size = 6G
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
innodb_io_capacity = 200
tmp_table_size = 5242K
max_heap_table_size = 5242K
innodb_log_buffer_size = 16M
# Logging
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
# Network
bind-address = 0.0.0.0
EOH
destination = "local/conf.cnf"
}
resources {
cpu = 800
memory = 2500
}
template {
data = <<EOH
MYSQL_DATABASE={{ key "mediawiki/db/name" }}
MYSQL_USER={{ key "mediawiki/db/username" }}
MYSQL_PASSWORD={{ key "mediawiki/db/password" }}
MYSQL_RANDOM_ROOT_PASSWORD=yes
EOH
destination = "local/.env"
env = true
}
}
}
}
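The two `redirectregex` middlewares in the service tags can be sanity-checked offline. POSIX ERE via `grep -E` is close enough to Traefik's Go regex syntax for these patterns; note the HCL source doubles the backslashes (`\\.`) so that Nomad passes a single `\.` through to Traefik.

```shell
# Quick offline check of the redirect regexes from the rbwiki-web tags.
root='^https://wiki\.redbrick\.dcu\.ie/?$'     # bare root -> /Main_Page
mw='https://wiki\.redbrick\.dcu\.ie/Mw/(.*)'   # legacy /Mw/ paths -> /

echo 'https://wiki.redbrick.dcu.ie/'          | grep -qE "$root" && echo "root matches"
echo 'https://wiki.redbrick.dcu.ie/Main_Page' | grep -qE "$root" || echo "article page not redirected"
echo 'https://wiki.redbrick.dcu.ie/Mw/Staff'  | grep -qE "$mw"   && echo "legacy /Mw/ path matches"
```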

View file

@ -0,0 +1,36 @@
job "ams-amikon-update" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */6 * * * *"]
prohibit_overlap = true
}
group "ams-amikon-update" {
task "ams-amikon-update" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/script.sh"]
}
template {
data = <<EOH
#!/bin/bash
# stop the ams-amikon job
nomad job stop ams-amikon
sleep 1
# revert the ams-amikon job to the previous version
# this will trigger a new deployment, which will pull the latest image
nomad job revert ams-amikon $(($(nomad job inspect ams-amikon | jq '.Job.Version')-1))
EOH
destination = "local/script.sh"
}
}
}
}
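The update trick here is that reverting to "current version minus one" registers a new job version, which Nomad treats as a fresh deployment; combined with `force_pull` in the service job, that re-pulls the `latest` image. A sketch of the version arithmetic with the `nomad`/`jq` pipeline stubbed out so it runs offline:

```shell
# The update job re-deploys by reverting to "current version - 1".
# In the real script the version comes from:
#   nomad job inspect ams-amikon | jq '.Job.Version'
# Here that pipeline is stubbed so the arithmetic is testable offline.
current_version=7                      # stand-in for the jq output
prev=$((current_version - 1))
echo "would run: nomad job revert ams-amikon $prev"
```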

jobs/socs/ams-amikon.hcl Normal file
View file

@ -0,0 +1,65 @@
job "ams-amikon" {
datacenters = ["aperture"]
meta {
run_uuid = "${uuidv4()}"
}
type = "service"
group "ams-amikon" {
count = 1
network {
port "http" {
to = 3000
}
}
service {
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.ams-amikon.rule=Host(`amikon.me`) || Host(`www.amikon.me`)",
"traefik.http.routers.ams-amikon.entrypoints=web,websecure",
"traefik.http.routers.ams-amikon.tls.certresolver=lets-encrypt",
"traefik.http.routers.ams-amikon.middlewares=amikon-www-redirect",
"traefik.http.middlewares.amikon-www-redirect.redirectregex.regex=^https?://www.amikon.me/(.*)",
"traefik.http.middlewares.amikon-www-redirect.redirectregex.replacement=https://amikon.me/$${1}",
"traefik.http.middlewares.amikon-www-redirect.redirectregex.permanent=true",
]
}
task "amikon-node" {
driver = "docker"
config {
image = "ghcr.io/dcuams/amikon-site-v2:latest"
force_pull = true
ports = ["http"]
}
template {
data = <<EOF
EMAIL={{ key "ams/amikon/email/user" }}
EMAIL_PASS={{ key "ams/amikon/email/password" }}
TO_EMAIL={{ key "ams/amikon/email/to" }}
EOF
destination = ".env"
env = true
}
resources {
cpu = 800
memory = 500
}
}
}
}

View file

@ -0,0 +1,107 @@
job "dcusr-listmonk" {
datacenters = ["aperture"]
type = "service"
meta {
domain = "lists.solarracing.ie"
}
group "listmonk" {
network {
port "http" {
}
port "db" {
to = 5432
}
}
service {
name = "listmonk"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.port=${NOMAD_PORT_http}",
"traefik.http.routers.dcusr-listmonk.entrypoints=web,websecure",
"traefik.http.routers.dcusr-listmonk.rule=Host(`${NOMAD_META_domain}`)",
"traefik.http.routers.dcusr-listmonk.tls=true",
"traefik.http.routers.dcusr-listmonk.tls.certresolver=lets-encrypt",
]
}
task "app" {
driver = "docker"
config {
image = "listmonk/listmonk:latest"
ports = ["http"]
entrypoint = ["./listmonk", "--static-dir=/listmonk/static"]
volumes = [
"/storage/nomad/dcusr-listmonk/static:/listmonk/static",
"/storage/nomad/dcusr-listmonk/postgres/:/var/lib/postgresql/data",
"local/config.toml:/listmonk/config.toml"
]
}
resources {
cpu = 1000
memory = 500
}
template {
data = <<EOH
[app]
address = "0.0.0.0:{{ env "NOMAD_PORT_http" }}"
admin_username = "{{ key "dcusr/listmonk/admin/username" }}"
admin_password = "{{ key "dcusr/listmonk/admin/password" }}"
# Database.
[db]
host = "{{ env "NOMAD_HOST_IP_db" }}"
port = {{ env "NOMAD_HOST_PORT_db" }}
user = "{{ key "dcusr/listmonk/db/username" }}"
password = "{{ key "dcusr/listmonk/db/password" }}"
database = "{{ key "dcusr/listmonk/db/name" }}"
ssl_mode = "disable"
max_open = 25
max_idle = 25
max_lifetime = "300s"
EOH
destination = "local/config.toml"
}
}
task "listmonk-db" {
driver = "docker"
config {
image = "postgres:17-alpine"
ports = ["db"]
volumes = [
"/storage/nomad/dcusr-listmonk/postgres:/var/lib/postgresql/data"
]
}
template {
data = <<EOH
POSTGRES_DB = "{{ key "dcusr/listmonk/db/name" }}"
POSTGRES_USER = "{{ key "dcusr/listmonk/db/username" }}"
POSTGRES_PASSWORD = "{{ key "dcusr/listmonk/db/password" }}"
EOH
destination = "local/db.env"
env = true
}
}
}
}

View file

@ -0,0 +1,50 @@
job "dcusr-outline-backup" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */3 * * * *"]
prohibit_overlap = true
}
group "db-backup" {
task "postgres-backup" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/script.sh"]
}
template {
data = <<EOH
#!/bin/bash
file=/storage/backups/nomad/postgres/dcusr/outline/postgresql-outline-$(date +%Y-%m-%d_%H-%M-%S).sql
mkdir -p /storage/backups/nomad/postgres/dcusr/outline
alloc_id=$(nomad job status dcusr-outline | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
nomad alloc exec -task outline-db $alloc_id pg_dumpall -U {{ key "outline/db/username" }} > "${file}"
find /storage/backups/nomad/postgres/dcusr/outline/postgresql-outline* -ctime +3 -exec rm {} \; || true
if [ -s "$file" ]; then # check if file exists and is not empty
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "<@&585512338728419341> `PostgreSQL` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "postgres/webhook/discord" }}
fi
EOH
destination = "local/script.sh"
}
}
}
}

jobs/socs/dcusr-outline.hcl Normal file
View file

@ -0,0 +1,148 @@
job "dcusr-outline" {
datacenters = ["aperture"]
type = "service"
meta {
domain = "outline.solarracing.ie"
}
group "outline" {
network {
# mode = "bridge"
port "http" {
static = 3000
to = 3000
}
port "db" {
to = 5432
}
port "redis" {
to = 6379
}
}
service {
name = "outline"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.port=${NOMAD_PORT_http}",
"traefik.http.routers.dcusr-outline.entrypoints=web,websecure",
"traefik.http.routers.dcusr-outline.rule=Host(`${NOMAD_META_domain}`)",
"traefik.http.routers.dcusr-outline.tls=true",
"traefik.http.routers.dcusr-outline.tls.certresolver=lets-encrypt",
]
}
task "app" {
driver = "docker"
config {
image = "docker.getoutline.com/outlinewiki/outline:latest"
ports = ["http"]
volumes = [
"/storage/nomad/outline/data:/var/lib/outline/data"
]
}
resources {
cpu = 1000
memory = 500
}
template {
data = <<EOH
NODE_ENV=production
SECRET_KEY={{ key "outline/secret/key" }}
UTILS_SECRET={{ key "outline/secret/utils" }}
DATABASE_URL=postgres://{{ key "outline/db/username" }}:{{ key "outline/db/password" }}@{{ env "NOMAD_ADDR_db" }}/{{ key "outline/db/name" }}
DATABASE_CONNECTION_POOL_MIN=
DATABASE_CONNECTION_POOL_MAX=
# Disable SSL for connections to Postgres
PGSSLMODE=disable
REDIS_URL=redis://{{ env "NOMAD_ADDR_redis" }}
URL=https://{{ env "NOMAD_META_domain" }}
PORT=3000
COLLABORATION_URL=https://{{ env "NOMAD_META_domain" }}
FILE_STORAGE=local
FILE_STORAGE_LOCAL_ROOT_DIR=/var/lib/outline/data
# Maximum allowed size, in bytes, of an uploaded attachment.
FILE_STORAGE_UPLOAD_MAX_SIZE=262144000
GOOGLE_CLIENT_ID = "{{ key "outline/google/client/id" }}"
GOOGLE_CLIENT_SECRET = "{{ key "outline/google/client/secret" }}"
FORCE_HTTPS = false
ENABLE_UPDATES = true
WEB_CONCURRENCY = 1
DEBUG = http
# Available levels: error, warn, info, http, verbose, debug, silly
LOG_LEVEL=info
SMTP_HOST = "{{ key "outline/smtp/host" }}"
SMTP_PORT = "{{ key "outline/smtp/port" }}"
SMTP_USERNAME = "{{ key "outline/smtp/username" }}"
SMTP_PASSWORD = "{{ key "outline/smtp/password" }}"
SMTP_FROM_EMAIL = "{{ key "outline/smtp/from" }}"
DEFAULT_LANGUAGE=en_US
EOH
destination = "local/.env"
env = true
}
}
task "outline-redis" {
driver = "docker"
config {
image = "redis:latest"
ports = ["redis"]
}
}
task "outline-db" {
driver = "docker"
config {
image = "postgres:16-alpine"
ports = ["db"]
volumes = [
"/storage/nomad/outline/postgres:/var/lib/postgresql/data"
]
}
template {
data = <<EOH
POSTGRES_DB = "{{ key "outline/db/name" }}"
POSTGRES_USER = "{{ key "outline/db/username" }}"
POSTGRES_PASSWORD = "{{ key "outline/db/password" }}"
EOH
destination = "local/db.env"
env = true
}
}
}
}

jobs/socs/dcusr-pretix.hcl Normal file
View file

@ -0,0 +1,137 @@
job "dcusr-pretix" {
datacenters = ["aperture"]
type = "service"
meta {
domain = "tickets.solarracing.ie"
}
group "web" {
network {
# mode = "bridge"
port "http" {
to = 80
}
port "db" {
to = 5432
}
port "redis" {
to = 6379
}
}
service {
name = "pretix-web"
port = "http"
tags = [
"traefik.enable=true",
"traefik.port=${NOMAD_PORT_http}",
"traefik.http.routers.pretix.entrypoints=web,websecure",
"traefik.http.routers.pretix.rule=Host(`${NOMAD_META_domain}`)",
"traefik.http.routers.pretix.tls=true",
"traefik.http.routers.pretix.tls.certresolver=lets-encrypt",
]
}
task "pretix" {
driver = "docker"
config {
image = "pretix/standalone:stable"
ports = ["http"]
volumes = [
"local/pretix.cfg:/etc/pretix/pretix.cfg",
"/storage/nomad/pretix/data:/data",
"/etc/timezone:/etc/timezone:ro",
]
}
resources {
memory = 5000
cores = 1
}
env {
NUM_WORKERS = 1
}
template {
data = <<EOH
[pretix]
instance_name=DCU Solar Racing
url=https://{{ env "NOMAD_META_domain" }}
currency=EUR
datadir=/data
registration=off
[locale]
timezone=Europe/Dublin
[database]
backend=postgresql
name={{ key "pretix/db/name" }}
user={{ key "pretix/db/user" }}
password={{ key "pretix/db/password" }}
host={{ env "NOMAD_IP_db" }}
port={{ env "NOMAD_HOST_PORT_db" }}
[mail]
from={{ key "pretix/mail/from" }}
host={{ key "pretix/mail/host" }}
user={{ key "pretix/mail/user" }}
password={{ key "pretix/mail/password" }}
port=587
tls=on
ssl=off
[redis]
location=redis://{{ env "NOMAD_ADDR_redis" }}/0
sessions=true
[celery]
backend=redis://{{ env "NOMAD_ADDR_redis" }}/1
broker=redis://{{ env "NOMAD_ADDR_redis" }}/2
worker_prefetch_multiplier = 0
EOH
destination = "local/pretix.cfg"
}
}
task "pretix-db" {
driver = "docker"
config {
image = "postgres:16-alpine"
ports = ["db"]
volumes = [
"/storage/nomad/pretix/db:/var/lib/postgresql/data",
]
}
template {
data = <<EOH
POSTGRES_USER={{ key "pretix/db/user" }}
POSTGRES_PASSWORD={{ key "pretix/db/password" }}
EOH
destination = "local/db.env"
env = true
}
}
task "redis" {
driver = "docker"
config {
image = "redis:latest"
ports = ["redis"]
}
}
}
}

jobs/socs/dcusr.hcl Normal file
View file

@ -0,0 +1,65 @@
job "dcusr" {
datacenters = ["aperture"]
type = "service"
group "dcusr" {
count = 1
network {
port "http" {
to = 3000
}
}
service {
name = "dcusr"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.dcusr.rule=Host(`solarracing.rb.dcu.ie`) || Host(`solarracing.ie`) || Host(`www.solarracing.ie`)",
"traefik.http.routers.dcusr.entrypoints=web,websecure",
"traefik.http.routers.dcusr.tls.certresolver=lets-encrypt",
]
}
task "nextjs-website" {
driver = "docker"
config {
image = "ghcr.io/dcu-solar-racing/nextjs-website:main"
ports = ["http"]
force_pull = true
auth {
username = "${DOCKER_USER}"
password = "${DOCKER_PASS}"
}
}
template {
destination = "secrets/secret.env"
env = true
change_mode = "restart"
data = <<EOH
DOCKER_USER={{ key "dcusr/ghcr/username" }}
DOCKER_PASS={{ key "dcusr/ghcr/password" }}
TO_EMAIL={{ key "dcusr/nodemailer/to" }}
EMAIL={{ key "dcusr/nodemailer/from" }}
EMAIL_PASS={{ key "dcusr/nodemailer/password" }}
LISTMONK_ENDPOINT={{ key "dcusr/listmonk/endpoint" }}
LISTMONK_USERNAME={{ key "dcusr/listmonk/username" }}
LISTMONK_PASSWORD={{ key "dcusr/listmonk/password" }}
LISTMONK_LIST_IDS={{ key "dcusr/listmonk/list/id" }}
RECAPTCHA_SECRET_KEY={{ key "dcusr/recaptcha/secret/key" }}
EOH
}
}
}
}

View file

@ -0,0 +1,47 @@
job "esports-discord-bot" {
datacenters = ["aperture"]
type = "service"
group "esports-bot" {
count = 1
task "esports-bot" {
driver = "docker"
config {
image = "ghcr.io/aydenjahola/discord-multipurpose-bot:main"
force_pull = true
}
resources {
cpu = 500
memory = 256
}
template {
data = <<EOH
BOT_TOKEN={{ key "socs/esports/bot/discord/token" }}
EMAIL_NAME={{ key "socs/esports/bot/email/name" }}
EMAIL_PASS={{ key "socs/esports/bot/email/pass" }}
EMAIL_USER={{key "socs/esports/bot/email/user" }}
MONGODB_URI={{key "socs/esports/bot/mongodb/uri"}}
RAPIDAPI_KEY={{ key "socs/esports/bot/rapidapi/key" }}
TRACKER_API_KEY={{ key "socs/esports/bot/trackerapi/key" }}
TRACKER_API_URL={{ key "socs/esports/bot/trackerapi/url" }}
WORDNIK_API_KEY={{key "socs/esports/bot/wordnikapi/key" }}
HUGGING_FACE_API_KEY={{ key "socs/esports/bot/huggingface/key" }}
RCON_HOST=esports-mc-rcon.service.consul
# https://discuss.hashicorp.com/t/passing-registered-ip-and-port-from-consul-to-env-nomad-job-section/35647
{{ range service "esports-mc-rcon" }}
RCON_PORT={{ .Port }}{{ end }}
RCON_PASSWORD={{ key "games/mc/esports-mc/rcon/password" }}
EOH
destination = "local/.env"
env = true
}
}
}
}

View file

@ -0,0 +1,36 @@
job "mps-site-update" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */6 * * * *"]
prohibit_overlap = true
}
group "mps-site-update" {
task "mps-site-update" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/script.sh"]
}
template {
data = <<EOH
#!/bin/bash
# stop the mps-site job
nomad job stop mps-site
sleep 1
# revert the mps-site job to the previous version
# this will trigger a new deployment, which will pull the latest image
nomad job revert mps-site $(($(nomad job inspect mps-site | jq '.Job.Version')-1))
EOH
destination = "local/script.sh"
}
}
}
}

jobs/socs/mps-site.hcl Normal file
View file

@ -0,0 +1,66 @@
job "mps-site" {
datacenters = ["aperture"]
type = "service"
group "mps-django" {
count = 1
network {
port "http" {
to = 8000
}
}
service {
name = "mps-django"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "5s"
}
tags = [
"traefik.enable=true",
"traefik.port=${NOMAD_PORT_http}",
"traefik.http.routers.mps-django.rule=Host(`mps.rb.dcu.ie`) || Host(`dcumps.ie`) || Host(`www.dcumps.ie`) || Host(`dcumps.com`) || Host(`www.dcumps.com`)",
"traefik.http.routers.mps-django.entrypoints=web,websecure",
"traefik.http.routers.mps-django.tls.certresolver=lets-encrypt",
"traefik.http.routers.mps-django.middlewares=mps-django-redirect-com",
"traefik.http.middlewares.mps-django-redirect-com.redirectregex.regex=dcumps\\.com/(.*)",
"traefik.http.middlewares.mps-django-redirect-com.redirectregex.replacement=dcumps.ie/$1",
]
}
task "django-web" {
driver = "docker"
config {
image = "ghcr.io/dcumps/dcumps-website-django:latest"
ports = ["http"]
force_pull = true
hostname = "${NOMAD_TASK_NAME}"
auth {
username = "${DOCKER_USER}"
password = "${DOCKER_PASS}"
}
}
template {
data = <<EOH
DOCKER_USER={{ key "mps/site/ghcr/username" }}
DOCKER_PASS={{ key "mps/site/ghcr/password" }}
EOH
destination = "local/.env"
env = true
}
resources {
cpu = 300
memory = 500
}
}
}
}

View file

@ -0,0 +1,49 @@
job "mps-thecollegeview-backup" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */3 * * * *"]
prohibit_overlap = true
}
group "db-backup" {
task "mysql-backup" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/mysql-backup.sh"]
}
template {
data = <<EOH
#!/bin/bash
file=/storage/backups/nomad/mps-thecollegeview/mysql/tcv-mysql-$(date +%Y-%m-%d_%H-%M-%S).sql
mkdir -p /storage/backups/nomad/mps-thecollegeview/mysql
alloc_id=$(nomad job status mps-thecollegeview | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
nomad alloc exec -task tcv-db $alloc_id mariadb-dump -u {{ key "mps/thecollegeview/db/username" }} -p'{{ key "mps/thecollegeview/db/password"}}' {{ key "mps/thecollegeview/db/name" }} > "${file}"
find /storage/backups/nomad/mps-thecollegeview/mysql/tcv-mysql* -ctime +3 -exec rm {} \; || true
if [ -s "$file" ]; then # check if file exists and is not empty
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "# <@&585512338728419341> `MySQL` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "mysql/webhook/discord" }}
fi
EOH
destination = "local/mysql-backup.sh"
}
}
}
}

View file

@ -0,0 +1,257 @@
job "mps-thecollegeview" {
datacenters = ["aperture"]
type = "service"
meta {
domain = "thecollegeview.ie"
}
group "tcv" {
count = 1
network {
mode = "bridge"
port "http" {
to = 80
}
port "fpm" {
to = 9000
}
port "db" {
to = 3306
}
port "redis" {
to = 6379
}
}
service {
name = "tcv-web"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "5s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.tcv.rule=Host(`${NOMAD_META_domain}`)",
"traefik.http.routers.tcv.entrypoints=web,websecure",
"traefik.http.routers.tcv.tls.certresolver=lets-encrypt",
]
}
task "tcv-nginx" {
driver = "docker"
config {
image = "nginx:alpine"
ports = ["http"]
volumes = [
"local/nginx.conf:/etc/nginx/nginx.conf",
"/storage/nomad/mps-thecollegeview:/var/www/html/",
]
group_add = [82] # www-data in alpine
}
resources {
cpu = 200
memory = 100
}
template {
data = <<EOH
# user www-data www-data;
error_log /dev/stderr error;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
server_tokens off;
error_log /dev/stderr error;
access_log /dev/stdout;
charset utf-8;
server {
server_name {{ env "NOMAD_META_domain" }};
listen 80;
listen [::]:80;
root /var/www/html;
index index.php index.html index.htm;
client_max_body_size 5m;
client_body_timeout 60;
# NOTE: Not used here, WP super cache rule used instead
# # Pass all folders to FPM
# location / {
# try_files $uri $uri/ /index.php?$args;
# }
# Pass the PHP scripts to FastCGI server
location ~ \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass {{ env "NOMAD_ADDR_fpm" }};
fastcgi_index index.php;
}
location ~ /\.ht {
deny all;
}
# WP Super Cache rules.
set $cache_uri $request_uri;
# POST requests and URLs with a query string should always go to PHP
if ($request_method = POST) {
set $cache_uri 'null cache';
}
if ($query_string != "") {
set $cache_uri 'null cache';
}
# Don't cache URIs containing the following segments
if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") {
set $cache_uri 'null cache';
}
# Don't use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") {
set $cache_uri 'null cache';
}
# Use the cached or actual file if it exists, otherwise pass the request to WordPress
location / {
try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html $uri $uri/ /index.php?$args ;
}
}
}
EOH
destination = "local/nginx.conf"
}
}
task "tcv-phpfpm" {
driver = "docker"
config {
image = "wordpress:php8.3-fpm-alpine"
ports = ["fpm"]
volumes = [
"/storage/nomad/mps-thecollegeview:/var/www/html/",
"local/custom.ini:/usr/local/etc/php/conf.d/custom.ini",
]
}
resources {
cpu = 800
memory = 500
}
template {
data = <<EOH
WORDPRESS_DB_HOST={{ env "NOMAD_ADDR_db" }}
WORDPRESS_DB_USER={{ key "mps/thecollegeview/db/username" }}
WORDPRESS_DB_PASSWORD={{ key "mps/thecollegeview/db/password" }}
WORDPRESS_DB_NAME={{ key "mps/thecollegeview/db/name" }}
WORDPRESS_TABLE_PREFIX=wp_2
WORDPRESS_CONFIG_EXTRA="define('WP_REDIS_HOST', '{{ env "NOMAD_ADDR_redis" }}');"
EOH
destination = "local/.env"
env = true
}
template {
data = <<EOH
pm.max_children = 10
upload_max_filesize = 64M
post_max_size = 64M
EOH
destination = "local/custom.ini"
}
}
service {
name = "tcv-db"
port = "db"
}
task "tcv-db" {
driver = "docker"
config {
image = "mariadb"
ports = ["db"]
volumes = [
"/storage/nomad/mps-thecollegeview/db:/var/lib/mysql",
]
}
template {
data = <<EOH
[mysqld]
max_connections = 100
key_buffer_size = 2G
query_cache_size = 0
innodb_buffer_pool_size = 6G
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
innodb_io_capacity = 200
tmp_table_size = 5242K
max_heap_table_size = 5242K
innodb_log_buffer_size = 16M
innodb_file_per_table = 1
bind-address = 0.0.0.0
# Logging
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
EOH
destination = "local/conf.cnf"
}
resources {
cpu = 800
memory = 800
}
template {
data = <<EOH
MYSQL_DATABASE={{ key "mps/thecollegeview/db/name" }}
MYSQL_USER={{ key "mps/thecollegeview/db/username" }}
MYSQL_PASSWORD={{ key "mps/thecollegeview/db/password" }}
MYSQL_RANDOM_ROOT_PASSWORD=yes
EOH
destination = "local/.env"
env = true
}
}
task "redis" {
driver = "docker"
config {
image = "redis:latest"
ports = ["redis"]
}
resources {
cpu = 200
}
}
}
}


@ -0,0 +1,49 @@
job "style-thelook-backup" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */3 * * * *"]
prohibit_overlap = true
}
group "db-backup" {
task "mysql-backup" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/mysql-backup.sh"]
}
template {
data = <<EOH
#!/bin/bash
file=/storage/backups/nomad/style-thelook/mysql/thelook-mysql-$(date +%Y-%m-%d_%H-%M-%S).sql
mkdir -p /storage/backups/nomad/style-thelook/mysql
alloc_id=$(nomad job status style-thelook | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
nomad alloc exec -task thelook-db $alloc_id mariadb-dump -u {{ key "style/thelook/db/username" }} -p'{{ key "style/thelook/db/password"}}' {{ key "style/thelook/db/name" }} > "${file}"
find /storage/backups/nomad/style-thelook/mysql/thelook-mysql* -ctime +3 -exec rm {} \; || true
if [ -s "$file" ]; then # check if file exists and is not empty
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "# <@&585512338728419341> `MySQL` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "mysql/webhook/discord" }}
fi
EOH
destination = "local/mysql-backup.sh"
}
}
}
}

jobs/socs/style-thelook.hcl Normal file

@ -0,0 +1,257 @@
job "style-thelook" {
datacenters = ["aperture"]
type = "service"
meta {
domain = "thelookonline.dcu.ie"
}
group "thelook" {
count = 1
network {
mode = "bridge"
port "http" {
to = 80
}
port "fpm" {
to = 9000
}
port "db" {
to = 3306
}
port "redis" {
to = 6379
}
}
service {
name = "thelook-web"
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "5s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.thelook.rule=Host(`${NOMAD_META_domain}`) || Host(`style.redbrick.dcu.ie`)",
"traefik.http.routers.thelook.entrypoints=web,websecure",
"traefik.http.routers.thelook.tls.certresolver=lets-encrypt",
]
}
task "thelook-nginx" {
driver = "docker"
config {
image = "nginx:alpine"
ports = ["http"]
volumes = [
"local/nginx.conf:/etc/nginx/nginx.conf",
"/storage/nomad/style-thelook:/var/www/html/",
]
group_add = [82] # www-data in alpine
}
resources {
cpu = 200
memory = 100
}
template {
data = <<EOH
# user www-data www-data;
error_log /dev/stderr error;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
server_tokens off;
error_log /dev/stderr error;
access_log /dev/stdout;
charset utf-8;
server {
server_name {{ env "NOMAD_META_domain" }};
listen 80;
listen [::]:80;
root /var/www/html;
index index.php index.html index.htm;
client_max_body_size 5m;
client_body_timeout 60;
# NOTE: Not used here, WP super cache rule used instead
# Pass all folders to FPM
# location / {
# try_files $uri $uri/ /index.php?$args;
# }
# Pass the PHP scripts to FastCGI server
location ~ \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass {{ env "NOMAD_ADDR_fpm" }};
fastcgi_index index.php;
}
location ~ /\.ht {
deny all;
}
# WP Super Cache rules.
set $cache_uri $request_uri;
# POST requests and URLs with a query string should always go to PHP
if ($request_method = POST) {
set $cache_uri 'null cache';
}
if ($query_string != "") {
set $cache_uri 'null cache';
}
# Don't cache URIs containing the following segments
if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") {
set $cache_uri 'null cache';
}
# Don't use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") {
set $cache_uri 'null cache';
}
# Use the cached or actual file if it exists, otherwise pass the request to WordPress
location / {
try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html $uri $uri/ /index.php?$args ;
}
}
}
EOH
destination = "local/nginx.conf"
}
}
task "thelook-phpfpm" {
driver = "docker"
config {
image = "wordpress:php8.3-fpm-alpine"
ports = ["fpm"]
volumes = [
"/storage/nomad/style-thelook:/var/www/html/",
"local/custom.ini:/usr/local/etc/php/conf.d/custom.ini",
]
}
resources {
cpu = 800
memory = 500
}
template {
data = <<EOH
WORDPRESS_DB_HOST={{ env "NOMAD_ADDR_db" }}
WORDPRESS_DB_USER={{ key "style/thelook/db/username" }}
WORDPRESS_DB_PASSWORD={{ key "style/thelook/db/password" }}
WORDPRESS_DB_NAME={{ key "style/thelook/db/name" }}
WORDPRESS_TABLE_PREFIX=wp_
WORDPRESS_CONFIG_EXTRA="define('WP_REDIS_HOST', '{{ env "NOMAD_ADDR_redis" }}');"
EOH
destination = "local/.env"
env = true
}
template {
data = <<EOH
pm.max_children = 10
upload_max_filesize = 64M
post_max_size = 64M
EOH
destination = "local/custom.ini"
}
}
service {
name = "thelook-db"
port = "db"
}
task "thelook-db" {
driver = "docker"
config {
image = "mariadb"
ports = ["db"]
volumes = [
"/storage/nomad/style-thelook/db:/var/lib/mysql",
]
}
template {
data = <<EOH
[mysqld]
max_connections = 100
key_buffer_size = 2G
query_cache_size = 0
innodb_buffer_pool_size = 6G
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
innodb_io_capacity = 200
tmp_table_size = 5242K
max_heap_table_size = 5242K
innodb_log_buffer_size = 16M
innodb_file_per_table = 1
bind-address = 0.0.0.0
# Logging
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
EOH
destination = "local/conf.cnf"
}
resources {
cpu = 800
memory = 800
}
template {
data = <<EOH
MYSQL_DATABASE={{ key "style/thelook/db/name" }}
MYSQL_USER={{ key "style/thelook/db/username" }}
MYSQL_PASSWORD={{ key "style/thelook/db/password" }}
MYSQL_RANDOM_ROOT_PASSWORD=yes
EOH
destination = "local/.env"
env = true
}
}
task "redis" {
driver = "docker"
config {
image = "redis:latest"
ports = ["redis"]
}
resources {
cpu = 200
}
}
}
}


@ -1,89 +0,0 @@
job "traefik" {
datacenters = ["aperture"]
type = "system"
group "traefik" {
network {
port "http"{
static = 80
}
port "https" {
static = 443
}
port "admin"{
static = 8080
}
}
service {
name = "traefik-http"
provider = "nomad"
port = "https"
}
task "traefik" {
driver = "docker"
config {
image = "traefik"
network_mode = "host"
volumes = [
"local/traefik.toml:/etc/traefik/traefik.toml",
]
}
template {
data = <<EOF
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web.http.redirections.entryPoint]
to = "websecure"
scheme = "https"
[entryPoints.websecure]
address = ":443"
[entryPoints.traefik]
address = ":8080"
[tls.options]
[tls.options.default]
minVersion = "VersionTLS12"
cipherSuites = [
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256"
]
[api]
dashboard = true
insecure = true
# Enable Consul Catalog configuration backend.
[providers.consulCatalog]
prefix = "traefik"
exposedByDefault = false
[providers.consulCatalog.endpoint]
address = "127.0.0.1:8500"
scheme = "http"
#[providers.nomad]
# [providers.nomad.endpoint]
# address = "127.0.0.1:4646"
# scheme = "http"
[certificatesResolvers.lets-encrypt.acme]
email = "elected-admins@redbrick.dcu.ie"
storage = "local/acme.json"
[certificatesResolvers.lets-encrypt.acme.tlsChallenge]
EOF
destination = "/local/traefik.toml"
}
}
}
}


@ -0,0 +1,38 @@
job "ayden-discord-bot" {
datacenters = ["aperture"]
type = "service"
group "discordbotgoml" {
count = 1
task "discordbotgoml" {
driver = "docker"
config {
image = "ghcr.io/aydenjahola/discordbotgoml:main"
force_pull = true
auth {
username = "${DOCKER_USER}"
password = "${DOCKER_PASS}"
}
}
resources {
cpu = 500
memory = 256
}
template {
data = <<EOH
DISCORD_TOKEN={{ key "user-projects/ayden/gomlbot/discord/token" }}
DOCKER_USER={{ key "user-projects/ayden/ghcr/username" }}
DOCKER_PASS={{ key "user-projects/ayden/ghcr/password" }}
DEBUG=false
MONGO_DB={{ key "user-projects/ayden/gomlbot/mongo/db" }}
EOH
destination = "local/.env"
env = true
}
}
}
}


@ -0,0 +1,26 @@
job "midnight-calendarbot" {
datacenters = ["aperture"]
type = "service"
group "calendarbot" {
count = 1
task "calendarbot" {
driver = "docker"
config {
image = "ghcr.io/nightmarishblue/calendarbot:latest"
force_pull = true
}
template {
data = <<EOH
BOT_TOKEN={{ key "user-projects/midnight/calendarbot/discord/token" }}
APPLICATION_ID={{ key "user-projects/midnight/calendarbot/discord/appid" }}
EOH
destination = "local/.env"
env = true
}
}
}
}


@ -1,59 +1,148 @@
job "nova-timetable" {
datacenters = ["aperture"]
type = "service"
group "nova-timetable" {
count = 1
network {
port "redis" {
to = 6379
}
port "db" {
to = 5432
}
port "frontend" {
to = 3000
}
port "backend" {
to = 4000
}
}
task "frontend" {
driver = "docker"
env {
REDIS_ADDRESS = "${NOMAD_ADDR_db}"
PORT = "${NOMAD_PORT_frontend}"
}
config {
image = "ghcr.io/novanai/timetable-sync-frontend:latest"
ports = ["frontend"]
}
service {
name = "nova-timetable-frontend"
port = "frontend"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.port=${NOMAD_PORT_frontend}",
"traefik.http.routers.nova-timetable-frontend.rule=Host(`timetable.redbrick.dcu.ie`)",
"traefik.http.routers.nova-timetable-frontend.entrypoints=web,websecure",
"traefik.http.routers.nova-timetable-frontend.tls.certresolver=lets-encrypt",
]
}
}
task "backend" {
driver = "docker"
env {
BACKEND_PORT = "${NOMAD_PORT_backend}"
REDIS_ADDRESS = "${NOMAD_ADDR_redis}"
CNS_ADDRESS = "https://clubsandsocs.jakefarrell.ie"
}
config {
image = "ghcr.io/novanai/timetable-sync-backend:latest"
ports = ["backend"]
}
service {
name = "nova-timetable-backend"
port = "backend"
check {
type = "http"
path = "/api/healthcheck"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.port=${NOMAD_PORT_backend}",
"traefik.http.routers.nova-timetable-backend.rule=Host(`timetable.redbrick.dcu.ie`) && PathPrefix(`/api`)",
"traefik.http.routers.nova-timetable-backend.entrypoints=web,websecure",
"traefik.http.routers.nova-timetable-backend.tls.certresolver=lets-encrypt",
]
}
}
task "redis" {
driver = "docker"
config {
image = "redis:latest"
ports = ["redis"]
}
}
task "timetable-db" {
driver = "docker"
config {
image = "postgres:17.0-alpine"
ports = ["db"]
volumes = [
"/storage/nomad/nova-timetable/db:/var/lib/postgresql/data"
]
}
template {
data = <<EOH
POSTGRES_USER={{ key "user-projects/nova/db/user" }}
POSTGRES_PASSWORD={{ key "user-projects/nova/db/password" }}
POSTGRES_DB={{ key "user-projects/nova/db/name" }}
EOH
destination = "local/db.env"
env = true
}
}
task "timetable-bot" {
driver = "docker"
config {
image = "ghcr.io/novanai/timetable-sync-bot:latest"
}
template {
data = <<EOH
BOT_TOKEN={{ key "user-projects/nova/bot/token" }}
REDIS_ADDRESS={{ env "NOMAD_ADDR_redis" }}
POSTGRES_USER={{ key "user-projects/nova/db/user" }}
POSTGRES_PASSWORD={{ key "user-projects/nova/db/password" }}
POSTGRES_DB={{ key "user-projects/nova/db/name" }}
POSTGRES_HOST={{ env "NOMAD_IP_db" }}
POSTGRES_PORT={{ env "NOMAD_HOST_PORT_db" }}
CNS_ADDRESS="https://clubsandsocs.jakefarrell.ie"
EOH
destination = "local/.env"
env = true
}
}
}


@ -0,0 +1,36 @@
job "urri-meetups-update" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */6 * * * *"]
prohibit_overlap = true
}
group "urri-meetups-update" {
task "urri-meetups-update" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/script.sh"]
}
template {
data = <<EOH
#!/bin/bash
# stop the urri-meetups job
nomad job stop urri-meetups
sleep 1
# revert the urri-meetups job to the previous version
# this will trigger a new deployment, which will pull the latest image
nomad job revert urri-meetups $(($(nomad job inspect urri-meetups | jq '.Job.Version')-1))
EOH
destination = "local/script.sh"
}
}
}
}


@ -0,0 +1,47 @@
job "urri-meetups" {
datacenters = ["aperture"]
type = "service"
group "urri-meetups" {
count = 1
network {
port "http" {
to = 8000
}
}
service {
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.urri-meetups.rule=Host(`urri-meetups.rb.dcu.ie`)",
"traefik.http.routers.urri-meetups.entrypoints=web,websecure",
"traefik.http.routers.urri-meetups.tls.certresolver=lets-encrypt",
]
}
task "web" {
driver = "docker"
config {
image = "ghcr.io/haefae222/pizza_app:latest"
ports = ["http"]
force_pull = true
}
resources {
cpu = 1000
memory = 800
}
}
}
}


@ -0,0 +1,61 @@
job "cands-room-bookings" {
datacenters = ["aperture"]
type = "service"
meta {
git-sha = ""
}
group "clubsandsocs-room-bookings" {
count = 1
network {
port "http" {
to = 5000
}
}
service {
port = "http"
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.clubsandsocs-room-bookings.rule=Host(`rooms.rb.dcu.ie`)",
"traefik.http.routers.clubsandsocs-room-bookings.entrypoints=web,websecure",
"traefik.http.routers.clubsandsocs-room-bookings.tls.certresolver=lets-encrypt",
]
}
task "web" {
driver = "docker"
config {
image = "ghcr.io/wizzdom/clubsandsocs-room-bookings:latest"
ports = ["http"]
force_pull = true
volumes = [
"local/.env:/app/.env"
]
}
template {
data = <<EOF
UPLOAD_FOLDER=uploads
SECRET_KEY={{ key "user-projects/wizzdom/clubsandsocs-room-bookings/secret" }}
EOF
destination = "local/.env"
}
resources {
cpu = 1000
memory = 800
}
}
}
}

jobs/user-vms/README.md Executable file → Normal file

@ -1,94 +1,89 @@
# User VMs
This directory contains the configuration files for the user VMs.
For the latest docs, see [here](https://docs.redbrick.dcu.ie/services/user-vms/).
User VMs are deployed on [`aperture`](https://docs.redbrick.dcu.ie/hardware/aperture/) using [nomad](https://docs.redbrick.dcu.ie/services/nomad/)'s [QEMU driver](https://developer.hashicorp.com/nomad/docs/drivers/qemu).
Each VM is configured with cloud-init. Those configuration files are served by [`wheatley`](https://docs.redbrick.dcu.ie/hardware/aperture/wheatley/), but they can be served by any HTTP server.
## Setting up Networking on the Host
The host needs to be configured to allow the VMs to communicate with each other. This is done by creating a bridge and adding the VMs to it.
### Create a Bridge
To create a bridge that qemu can use to place the guest (VM) onto the same network as the host, follow the instructions listed [here](https://wiki.archlinux.org/title/Network_bridge#With_iproute2) for `iproute2`, summarised below.
We need to create a bridge interface on the host.
```bash
sudo ip link add name br0 type bridge
sudo ip link set dev br0 up
```
We'll be adding a physical interface to this bridge to allow it to communicate with the external ([UDM](https://docs.redbrick.dcu.ie/hardware/network/mordor/)) network.
```bash
sudo ip link set eno1 master br0
```
You'll need to assign an IP address to the bridge interface. This will be used as the default address for the host. You can do this with DHCP or by assigning a static IP address. The best way to do this is to create a DHCP static lease on the [UDM](https://docs.redbrick.dcu.ie/hardware/network/mordor/) for the bridge interface MAC address.
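The static lease needs the bridge's MAC address. A minimal helper to pull it out of `ip link show` output — the sample line piped in below is illustrative; on the host you would run `ip link show br0 | link_mac`:

```bash
# Print the MAC address from `ip link show` output read on stdin.
link_mac() { awk '/link\/ether/ {print $2}'; }

# Illustrative sample of an `ip link show br0` output line:
printf '    link/ether 52:54:84:ba:49:02 brd ff:ff:ff:ff:ff:ff\n' | link_mac
# -> 52:54:84:ba:49:02
```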
> [!NOTE]
> TODO: Find out why connectivity seems to be lost when the bridge interface receives an address before the physical interface.
> If connectivity is lost, release the addresses from both the bridge and the physical interface (in that order) with `sudo dhclient -v -r <iface>` and then run `sudo dhclient -v <iface>` to assign the bridge interface an address.
### Add the VMs to the Bridge
The configuration of the qemu network options in the job file will create a new tap interface and add it to the bridge and the VM. I advise you, for your own sanity, to never touch the network options; they will only cause you pain.
For others looking, this configuration is specific to *QEMU only*.
```bash
qemu-system-x86_64 ... -netdev bridge,id=hn0 -device virtio-net-pci,netdev=hn0,id=nic1
```
This will assign the VM an address on the external network. The VM will be able to communicate with the host and other VMs in the network.
You must also add `allow br0` to `/etc/qemu/bridge.conf` to allow qemu to add the tap interfaces to the bridge. [Source](https://wiki.qemu.org/Features/HelperNetworking)
The VMs, once connected to the bridge, will be assigned an address via DHCP. You can assign a static IP address to a VM by adding a DHCP static lease on the [UDM](https://docs.redbrick.dcu.ie/hardware/network/mordor/) for the VM's MAC address. You can get the address of a VM by checking the `nomad alloc logs` for that VM and searching for `ens3`.
```bash
nomad job status distro-vm | grep "Node ID" -A 1 | tail -n 1 | cut -d " " -f 1
# <alloc-id>
nomad alloc logs <alloc-id> | grep -E "ens3.*global" | cut -d "|" -f 4 | xargs
# cloud init... ens3: <ip-address> global
```
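The two commands above can be wrapped into a small helper. A sketch, assuming cloud-init's standard `ci-info` network table format (the sample log line below is illustrative):

```bash
# Print the ens3 global address from a cloud-init console log read on stdin.
# Mirrors the grep/cut pipeline above: match the ens3 "global" row of the
# ci-info table, take the 4th |-separated field, trim whitespace with xargs.
vm_ip() { grep -E "ens3.*global" | cut -d "|" -f 4 | xargs; }

# Illustrative ci-info line as cloud-init prints it:
printf 'ci-info: | ens3 | True | 10.10.0.23 | 255.255.255.0 | global | 52:54:84:ba:49:22 |\n' | vm_ip
# -> 10.10.0.23
```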
## Configuring the VMs
The VMs are configured with cloud-init. Their [docs](https://cloudinit.readthedocs.io/en/latest/) are pretty good, so I won't repeat them here. The files can be served by any HTTP server, and the address is placed into the job file in the QEMU options.
```hcl title="Nomad"
...
args = [
...
"virtio-net-pci,netdev=hn0,id=nic1,mac=52:54:84:ba:49:22", # make sure this MAC address is unique!!
"-smbios",
"type=1,serial=ds=nocloud-net;s=http://vm-resources.service.consul:8000/res/",
]
...
```
Here in the args block:
- we define a network device for the VM using the `virtio` driver, passing it an `id` and a random ***unique*** MAC address
- we tell it to use `smbios` type 1 and to grab its `cloud-init` configs from `http://vm-resources.service.consul:8000/res/`
> [!NOTE]
> If you're running multiple VMs on the same network make sure to set different MAC addresses for each VM, otherwise you'll have a bad time.
## Creating a New VM
To create a new VM, you'll need to create a new job file and a cloud-init configuration file. Copy any of the existing job files and modify them to suit your needs. The cloud-init configuration files can likewise be copied and adapted for each user. **Remember to ensure the MAC addresses are unique!**
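As a sketch of the moving parts, staging the seed files for a hypothetical `my-vm` might look like this — the relative `./vm-resources` root stands in for the real resource-server mount, and the `meta-data` values are assumptions:

```bash
#!/usr/bin/env bash
set -euo pipefail

vm=my-vm
root=./vm-resources   # stand-in for the resource server's web root

mkdir -p "${root}/${vm}"

# Minimal NoCloud seed metadata; copy user-data from an existing VM and edit it.
cat > "${root}/${vm}/meta-data" <<EOF
instance-id: ${vm}
local-hostname: ${vm}
EOF

echo "Serve this directory so the job's smbios serial can point at .../res/${vm}/"
```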


@ -0,0 +1,87 @@
job "admin-exams" {
datacenters = ["aperture"]
group "ayden-vm" {
network {
mode = "host"
}
service {
name = "ayden-vm"
}
task "ayden-vm" {
resources {
cpu = 12000
memory = 4096
}
artifact {
source = "http://vm-resources.service.consul:8000/res/base-images/debian-12-genericcloud-amd64-30G.qcow2"
destination = "local/ayden-vm.qcow2"
mode = "file"
}
driver = "qemu"
config {
image_path = "local/ayden-vm.qcow2"
accelerator = "kvm"
drive_interface = "virtio"
args = [
"-netdev",
"bridge,id=hn0",
"-device",
"virtio-net-pci,netdev=hn0,id=nic1,mac=52:54:84:ba:49:20", # mac address must be unique or else you will regret it
"-smbios",
"type=1,serial=ds=nocloud-net;s=http://vm-resources.service.consul:8000/res/ayden-vm/",
]
}
}
}
group "hypnoant-vm" {
network {
mode = "host"
}
service {
name = "hypnoant-vm"
}
task "hypnoant-vm" {
resources {
cpu = 12000
memory = 4096
}
artifact {
source = "http://vm-resources.service.consul:8000/res/base-images/debian-12-genericcloud-amd64-30G.qcow2"
destination = "local/hypnoant-vm.qcow2"
mode = "file"
}
driver = "qemu"
config {
image_path = "local/hypnoant-vm.qcow2"
accelerator = "kvm"
drive_interface = "virtio"
args = [
"-netdev",
"bridge,id=hn0",
"-device",
"virtio-net-pci,netdev=hn0,id=nic1,mac=52:54:84:ba:49:22",
"-smbios",
"type=1,serial=ds=nocloud-net;s=http://vm-resources.service.consul:8000/res/hypnoant-vm/",
]
}
}
}
}


@ -0,0 +1,64 @@
job "bastion-vm-backup" {
datacenters = ["aperture"]
type = "batch"
periodic {
crons = ["0 */3 * * * *"]
prohibit_overlap = true
}
group "vm-backup" {
task "qcow-backup" {
driver = "raw_exec"
config {
command = "/bin/bash"
args = ["local/script.sh"]
}
resources {
cpu = 3000
memory = 1000
}
template {
data = <<EOH
#!/bin/bash
path=/storage/backups/nomad/bastion-vm
file=bastion-vm-$(date +%Y-%m-%d_%H-%M-%S).qcow2
mkdir -p ${path}
host=$(nomad job status -verbose bastion-vm | grep running | tail -n 1 | cut -d " " -f 7)
alloc_id=$(nomad job status -verbose bastion-vm | grep running | tail -n 1 | cut -d " " -f 1)
job_name=$(echo ${NOMAD_JOB_NAME} | cut -d "/" -f 1)
echo "Backing up alloc id: ${alloc_id} on: ${host} to ${path}/${file}..."
ssh -i {{ key "bastion-vm/service/key" }} {{ key "bastion-vm/service/user" }}@${host} "sudo cat /opt/nomad/alloc/${alloc_id}/bastion-vm/local/bastion-vm.qcow2" > ${path}/${file}
find ${path}/bastion-vm-* -ctime +2 -exec rm {} \; || true
size=$(stat -c%s "${path}/${file}")
if [ ${size} -gt 4000000000 ]; then # check the backup is larger than ~4GB
echo "Updating latest symlink to ${file}..."
ln -sf ./${file} ${path}/bastion-vm-latest.qcow2
echo "Backup successful"
exit 0
else
rm $file
curl -H "Content-Type: application/json" -d \
'{"content": "## <@&585512338728419341> `VM` backup for **'"${job_name}"'** has just **FAILED**\nFile name: `'"$file"'`\nDate: `'"$(TZ=Europe/Dublin date)"'`\nTurn off this script with `nomad job stop '"${job_name}"'` \n\n## Remember to restart this backup job when fixed!!!"}' \
{{ key "bastion-vm/webhook/discord" }}
fi
EOH
destination = "local/script.sh"
}
}
}
}


@ -0,0 +1,46 @@
job "bastion-vm" {
datacenters = ["aperture"]
group "bastion-vm" {
network {
mode = "host"
}
service {
name = "bastion-vm"
}
task "bastion-vm" {
resources {
cpu = 12000
memory = 4096
}
artifact {
source = "http://vm-resources.service.consul:8000/bastion/bastion-vm-latest.qcow2"
destination = "local/bastion-vm.qcow2"
mode = "file"
}
driver = "qemu"
config {
image_path = "local/bastion-vm.qcow2"
accelerator = "kvm"
drive_interface = "virtio"
args = [
"-netdev",
"bridge,id=hn0",
"-device",
"virtio-net-pci,netdev=hn0,id=nic1,mac=52:54:84:ba:49:02",
"-smbios",
#"type=1,serial=ds=nocloud-net;s=http://10.10.0.5:8000/bastion-vm/",
"type=1",
]
}
}
}
}


@ -12,29 +12,23 @@ job "distro-vm" {
}
task "distro-vm" {
constraint {
attribute = "${attr.unique.hostname}"
value = "wheatley"
}
resources {
cpu = 12000
memory = 4096
}
artifact {
source = "http://vm-resources.service.consul:8000/res/base-images/debian-12-genericcloud-amd64-30G.qcow2"
destination = "local/distro-vm.qcow2"
mode = "file"
}
driver = "qemu"
config {
image_path = "local/distro-vm.qcow2"
accelerator = "kvm"
drive_interface = "virtio"
args = [


@ -0,0 +1,34 @@
job "vm-resources" {
datacenters = ["aperture"]
type = "service"
group "vm-resources" {
count = 1
network {
port "http" {
static = "8000"
to = "80"
}
}
service {
name = "vm-resources"
port = "http"
}
task "resource-server" {
driver = "docker"
config {
image = "nginx"
ports = ["http"]
volumes = [
"/storage/nomad/vm-resources/:/usr/share/nginx/html/res",
"/storage/backups/nomad/bastion-vm:/usr/share/nginx/html/bastion",
]
}
}
}
}