Behind the scenes

Information on the instance

As this is a community project, I thought it would be a good idea to give some information about what sits behind your browser or app. Every once in a while I will update this information. Let me know if you have more insight into a certain subject.

Accounting

Running a Fediverse instance comes with costs for the servers, storage (media and backups) and other things like the domain. Currently mountains.social is still on the free plan for DeepL Translation and SendGrid (for mail). Although it is not required to participate in these costs, various members are chipping in via recurring or one-time donations. To give an overview of the costs and donations, I will update these numbers on a monthly basis.

mountains-social-accounting-2024-04

mountains-social-accounting-2024-03

mountains-social-accounting-2024-02

mountains-social-accounting-2024-01

mountains-social-accounting-2023-12

mountains-social-accounting-2023-11

mountains-social-accounting-2023-10

mountains-social-accounting-2023-09

Mountains.social was opened to the community in December 2022. In September 2023 I started tracking the costs and donations more closely.

A peek into the technical stuff

The heart of the infrastructure is located at the Hetzner datacenter in Falkenstein, Germany. This is where the servers (running as virtual machines) that make up mountains.social are located. The media that is uploaded by the members or comes in via federation, however, is stored at the Backblaze datacenter in Amsterdam, the Netherlands. The third (and important) location is the Linode datacenter in Stockholm, Sweden. This is where the database and file backups (coming from the Hetzner datacenter) are stored.

mountains-social-locations

The philosophy for the technical setup is "keep it simple". The instance must run robustly, but too much unnecessary technical tinkering and tweaking also increases the risk of failures. As long as the instance can run on a simple setup, this will be the approach.

Server farm

Mountains.social is a modest-size Mastodon instance. This means the infrastructure can be kept simple. Nevertheless the farm consists of four servers, each with its own role.

hetzner-mastodon

This server hosts the main (PostgreSQL) database and the Mastodon server software. This is the core of the server farm. The "addons" and "monitor" servers could die without much impact on the instance (see hetzner-addons for the functionality that would not be available).

Hostname: hetzner-mastodon
Model: CPX31
CPU: 4 vCPU @ 2.50 GHz
Memory: 8 GB
Disk: 80 GB (database) + 60 GB (local backups)
IPv4: 142.132.227.141
IPv6: 2a01:4f8:c17:14db::1
Functionalities: Mastodon

hetzner-pixelfed

Like the Mastodon server, this Pixelfed server also uses a PostgreSQL database. This system does not rely on the addons server: all required software is installed locally on the system.

Model: CP21
CPU: 2 vCPU @ 2.30 GHz
Memory: 4 GB
Disk: 40 GB (local)
IPv4: 49.13.114.95
IPv6: 2a01:4f8:c012:1a4::1
Functionalities: Pixelfed

hetzner-addons

As the name says, this server hosts addons, as in "optional". These goodies are located on a separate server so that issues with them do not affect the main instance. The main addon is ElasticSearch, which provides the full-text search. It is known for being memory-hungry and should not be able to interfere with the memory of Mastodon. The wiki you are reading right now also runs on this server (built on Grav). Finally, it provides the remote operating access to all four servers.
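
To illustrate how the full-text search ties into the Mastodon server: enabling it boils down to a few settings in .env.production plus a one-time index build. This is only a sketch; the hostname below is a placeholder, not the actual internal address of hetzner-addons.

    # .env.production on hetzner-mastodon (sketch; the host is a placeholder)
    ES_ENABLED=true
    ES_HOST=addons.internal.example
    ES_PORT=9200

    # one-time build of the search index, run from the Mastodon live directory
    RAILS_ENV=production bin/tootctl search deploy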

Model: CP21
CPU: 2 vCPU @ 2.30 GHz
Memory: 4 GB
Disk: 40 GB (local)
IPv4: 49.12.4.140
IPv6: 2a01:4f8:c17:805e::/64
Functionalities: ElasticSearch, Wiki (via Grav in a Docker container)

hetzner-monitor

This server provides the monitoring of all components on the mountains.social servers: metrics on the database, Sidekiq, web server, etc., as well as their availability. The media and backup buckets at Backblaze and Linode are also monitored from this server. The monitoring is done via Zabbix, which has lots of integrations with servers and software. Notifications in case of alerts are sent via Telegram. This system is not directly reachable from the internet.
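
As a rough sketch, assuming the usual agent-based Zabbix setup: each monitored server runs an agent that reports to this machine over the internal network. The address and hostname below are placeholders, not the actual values.

    # /etc/zabbix/zabbix_agentd.conf on a monitored server (sketch; address is a placeholder)
    Server=10.0.0.10
    ServerActive=10.0.0.10
    Hostname=hetzner-mastodon   # must match the host as configured in the Zabbix frontend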

Model: CP11
CPU: 2 vCPU @ 2.40 GHz
Memory: 2 GB
Disk: 40 GB (local)
IPv4: -
IPv6: -
Functionalities: Zabbix

External availability monitoring is also in place; it is not documented in further detail here.

Media storage

All information and text from a post is saved in the Mastodon database. The included media, however, is saved outside of this database. It could be stored locally on the Mastodon server, but that would drive up costs significantly. Therefore the media for mountains.social is stored in B2 Cloud Storage buckets at Backblaze. These buckets are S3-compatible.
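
For illustration, pointing Mastodon at an S3-compatible bucket happens in .env.production. This is a sketch only; the bucket name, region, endpoint and keys below are placeholders, not the actual values of the mountains.social bucket.

    # .env.production (sketch; bucket, region, endpoint and keys are placeholders)
    S3_ENABLED=true
    S3_BUCKET=example-media-bucket
    S3_REGION=eu-central-003
    S3_ENDPOINT=https://s3.eu-central-003.backblazeb2.com
    AWS_ACCESS_KEY_ID=examplekeyid
    AWS_SECRET_ACCESS_KEY=examplesecret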

To give an impression of the size of the bucket, the graph below shows the history of the last 3 months (mid-January to mid-April 2024). The saw-tooth-like behaviour comes from the daily housekeeping jobs that remove older remote media (remote meaning media that came in via federation).

Backblaze-media
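
The housekeeping itself is essentially Mastodon's own media cleanup run on a schedule. A minimal sketch of such a daily job is shown below; the retention period of 7 days and the paths are assumptions, not necessarily what mountains.social uses.

    # crontab entry on hetzner-mastodon (sketch; retention and paths are assumptions)
    # remove cached remote media older than 7 days, every night at 03:00
    0 3 * * * cd /home/mastodon/live && RAILS_ENV=production bin/tootctl media remove --days 7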

Media delivery

To improve the performance of media delivery, a Content Delivery Network (CDN) has been configured. The CDN in use is Bunny.net. To balance costs and performance, the "Volume Network" is used. This network has nodes in the 10 locations below. The "Standard Network" of Bunny.net has nodes in 100+ locations, but would cost 3 to 4 times as much.

When posting media, it is not uploaded to one of the CDN nodes, but via the Mastodon server in Germany. Only the subsequent delivery of the media to the readers is handled via the CDN.

Europe: πŸ‡©πŸ‡ͺ Frankfurt, πŸ‡«πŸ‡· Paris
North America: πŸ‡ΊπŸ‡Έ Los Angeles, πŸ‡ΊπŸ‡Έ Chicago, πŸ‡ΊπŸ‡Έ Dallas, πŸ‡ΊπŸ‡Έ Miami
Asia: πŸ‡ΈπŸ‡¬ Singapore, πŸ‡―πŸ‡΅ Tokyo, πŸ‡­πŸ‡° Hong Kong
South America: πŸ‡§πŸ‡· SΓ£o Paulo


cdn
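
In Mastodon terms, routing the delivery through the CDN is a single setting: media URLs are rewritten to the CDN hostname, while uploads keep going through the instance itself. A sketch, with a placeholder for the actual Bunny.net pull zone hostname:

    # .env.production (sketch; the hostname is a placeholder for the Bunny.net pull zone)
    S3_ALIAS_HOST=example-media.b-cdn.net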

Backups

Backups are vital when running a service that a lot of people rely on. Sure, mountains.social is perhaps not the most important service on the internet, but it would nevertheless be a shame if people lost posts, memories and connections to other people. Therefore backups on various levels are scheduled, run and monitored.

Server Backups

The servers run as virtual machines (VMs), which makes it easy to make a backup at server level. For every server a daily backup runs, which includes all static files as well as the database. As the database is constantly changing (and therefore so are its files on the operating system), such a backup can't be used to recover the complete system. For static files like configuration files and binaries, however, it is perfect. Just before maintenance activities (depending on the type of maintenance), a separate manual backup is made; in case something goes wrong, this keeps the data loss to a minimum.
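
For the manual backup before maintenance, a snapshot of the VM does the job. Assuming the hcloud CLI is used (it can just as well be done in the Hetzner Cloud Console), this looks roughly like the following; the server name and description are only illustrative.

    # manual snapshot before maintenance (sketch; use of the hcloud CLI is an assumption)
    hcloud server create-image --type snapshot --description "pre-maintenance" hetzner-mastodon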

Database Backups

As mentioned above, the server backups can't be used to recover the database in case of failure, as its files are inconsistent in the server backup. Therefore a daily database backup is scheduled, which is dumped to a local filesystem. Afterwards this database backup, together with the ~/live directory (where all the Mastodon software and configuration is located) and further relevant configuration files, is sent to the S3 backup bucket at Linode. The tool used here is Duplicati, which is installed locally on the Mastodon server.
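
In essence, the nightly job is a plain PostgreSQL dump followed by the Duplicati run. A minimal sketch of the dump step, where the paths and database name are assumptions based on a standard Mastodon setup:

    # nightly database dump on hetzner-mastodon (sketch; paths and database name are assumptions)
    pg_dump --format=custom --file=/var/backups/mastodon/mastodon_production.dump mastodon_production
    # Duplicati then ships this dump, the ~/live directory and the relevant
    # configuration files to the S3 backup bucket at Linode.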

Media backups

Currently no backups are made of the B2 Cloud Storage bucket.

It is on the to-do list to also implement a backup of the media storage in Amsterdam. Where this will be located has not been decided yet, but it will certainly be within the European Union or Switzerland (the reason being the better privacy laws).
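
One way this could eventually look (purely a sketch, nothing of this is in place yet) is a periodic one-way sync of the B2 bucket to a second S3-compatible bucket at an EU provider, for example with rclone. The remote and bucket names below are made up.

    # possible future media backup (sketch only; remotes and bucket names are made up)
    rclone sync b2-media:example-media-bucket eu-backup:example-media-backup --fast-list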