Meta-Data
.me.tz?
So, despite the fact that the domain was originally acquired in order to process email, this has been perhaps the most poorly executed part. This is not because I don't know how email works; I can administer the crap out of Sendmail or Postfix. The reason it hasn't worked out great is that nobody knows what the heck I'm talking about when I give them my email address. A common conversation is something like:
- contact@john.me.tz? Then what? @gmail.com?
- Nope. That's it. You know how there are .com and .net addresses? Well there are also .me.tz addresses.
- Oh. So, what was it? john@me.tz?
- <sigh> Let me start again...
This was a short-sighted decision based on a not-very-funny joke that very few people will understand. I explained that joke with an easter egg: if you click the domain name or logo in the banner at the top, it should quickly flip the dots through various characters and eventually stop on my name, John Mertz. The domain is a hack which matches my name if used as a Regular Expression. Regular Expressions are a standard set of characters that can be used in some software to search for patterns rather than literal strings. With a Regular Expression, a dot represents any single character of text, so searching for "te.t" would find "test", "text" and "tent". This is incredibly useful for manipulating text, but much less useful when you are explaining for the third time that, despite there being an "r" in my last name, there isn't one in my email address.
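As a quick illustration of the joke (not code the site actually runs), here is how that pattern behaves with PHP's preg_match; the `/i` flag just makes the match case-insensitive:

```php
<?php
// In a regular expression, a dot matches any single character, so the
// pattern "john.me.tz" also matches the name "John Mertz".
var_dump(preg_match('/^john.me.tz$/i', 'John Mertz')); // int(1): the dots stand in for the space and the "r"
var_dump(preg_match('/^john.me.tz$/i', 'john.me.tz')); // int(1): the literal domain matches as well

// The same idea with the "te.t" example from above:
foreach (['test', 'text', 'tent'] as $word) {
    echo $word, ': ', preg_match('/te.t/', $word), "\n"; // prints 1 for each
}
```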
So, what the heck is a .me.tz domain? Fun fact: any domain name with exactly 2 characters after the final dot ends in a ccTLD, or Country Code Top-Level Domain. The "Top-Level Domain" part is the highest level of organization of the internet, the same as .com or .org. This helps computers find each other when they perform a lookup using the Domain Name System (DNS). The "Country Code" part means that rather than representing a specific category of website like .org, or a specific industry like the plethora of new designations such as .accountant and .zookeeper (I don't think that one actually exists, but at the rate ICANN is approving new TLDs, it will soon), it instead represents a country. My fellow Canucks are very familiar with this concept given the .ca top-level domain, and Americans are probably much less familiar with .us. Despite what some might like to think, .com is not the American equivalent of .ca; it is just the generic. All country-code TLDs are 2 characters and all 2-character TLDs are country codes. The .us domain is just highly abused by spammers and malware sites, so no one really uses it on the web.
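To make the DNS part concrete, here is a trivial lookup sketch using PHP's built-in dns_get_record; there is nothing specific to this site here beyond the hostname:

```php
<?php
// Ask the resolver for the A record of the host; behind the scenes the
// lookup is delegated from the .tz country-code TLD down to my nameservers.
$records = dns_get_record('john.me.tz', DNS_A) ?: [];
foreach ($records as $record) {
    echo "{$record['host']} has address {$record['ip']} (TTL {$record['ttl']})\n";
}
```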
The Tech
For my fellow nerds, talking back-end infrastructure is always fun. Here's the current setup:
- OS - Debian Bookworm (12)
- Hosting - OVH VPS, 2 vCore, 8GB RAM, 80GB SSD
- Server - Nginx with HTTP/2 (a minimal config sketch follows this list)
- Nameservers - Mine, Bind9 on this machine and my mail server
- Language - Primarily PHP
- Frameworks/CMS - Custom
- SSL - Let's Encrypt
- VCS - Git
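For the curious, the relevant Nginx server block looks roughly like this. The document root, PHP-FPM socket and certificate paths are assumptions based on stock Debian/Certbot defaults, not a copy of my actual configuration:

```nginx
# Minimal sketch of an HTTP/2 + Let's Encrypt server block.
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name john.me.tz;

    root /var/www/john.me.tz;     # hypothetical document root
    index index.php;

    # Certificates as issued by Certbot (Let's Encrypt)
    ssl_certificate     /etc/letsencrypt/live/john.me.tz/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/john.me.tz/privkey.pem;

    # Hand .php requests to PHP-FPM (Bookworm ships PHP 8.2)
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    }
}
```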
The website has gone through several iterations, and so too has the hardware and software. Some changes have been practical. For instance, the site started out running in a Linux Container (lxc) on Debian Wheezy (7) on my desktop computer, attached to a static IP on my home internet connection. That machine was a heck of a lot more powerful than the VPS, but it lived and died at the whims of TekSavvy's DSL line and the Ottawa power grid. Other changes were not necessary, but I made them just out of curiosity. It started out as a very vanilla Apache installation, but I already administer Apache servers at work and thought I'd expand my skillset by switching to Nginx. Similarly, I used to just use the DNS services provided by my registrar, but having never run my own name servers, I thought I'd learn that as well (this one also saved me a few dollars a year, but that wasn't the point). Early on I added HTTPS support. Despite the fact that there's no real privacy need for it, Let's Encrypt makes that process so easy that it would seem silly not to set it up.
Everything was initially pretty much static HTML with a lot of copy-pasting. Then I split the duplicated stuff into JavaScript files using document.write functions and loaded them on each page where they were needed. That reduced the need to apply changes across multiple pages, but it also meant that processing had to be done on the client side and was dependent on a JavaScript-enabled browser. It did allow me to get the primary functions of the CMS working, including programmatically generating all of the common elements for page blocks given just the variable content strings and images. This was still not ideal because it was: 1) inefficient: it meant sending the complete contents of an article or list page even when only a handful of items might be rendered; and 2) static: JavaScript running in the browser cannot access the server's filesystem to know what content needs to be loaded, so I had to maintain separate files listing all of the directories that needed to be included.
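Moving that work to PHP addresses both problems. The sketch below is not the site's real CMS code (the directory layout and template names are made up); it just shows the general shape: the server can scan the content directory itself and send only the blocks a page actually needs.

```php
<?php
// Hypothetical example: enumerate content directories on the server side
// instead of maintaining a hand-written JavaScript manifest.
$contentDir = __DIR__ . '/content';
$articles   = array_diff(scandir($contentDir), ['.', '..']);

include __DIR__ . '/templates/header.php';   // shared banner/navigation
foreach ($articles as $article) {
    // Only the summary block is rendered for a list page, so the full
    // article text is never sent unless someone actually opens it.
    include "$contentDir/$article/summary.php";
}
include __DIR__ . '/templates/footer.php';
```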
I'm pretty happy with where things are at right now. I will continue to update the software that is currently in use, but I doubt it will change significantly aside from adding features to my PHP code. I may upgrade to a more powerful VPS if that becomes necessary, but that would be more likely to happen because I add on additional services like GitLab or NextCloud; web traffic is unlikely to take me down any time soon. UPDATE: It did happen. At the time of writing the above, the VPS had 1 core and 2GB of RAM. I did end up installing a Git server, moved my email filter to a VM on this same machine, and now run a bunch of other containers and applications here.
The Code
The following discusses the basic mechanics of how pages on the site are generated. There are only a few basic classes of pages and then a few special cases that are still relatively static.