gurfin / dn42 - The design

Created Wed, 12 Mar 2025 01:32:00 +0100 Modified Fri, 14 Mar 2025 16:45:53 +0000

Ever wanted to try being your own ISP? Ever wanted to set up a transit AS while hosting some services on IP addresses you own? Enter “dn42” (https://dn42.eu/), an overlay network where you can build software- or hardware-based routers and do BGP peering with hundreds of other ASes.

For a couple of months now I have been itching to set up a dn42 network to learn more about software-based routing and networking in the cloud. So, in the beginning of 2025, I decided to pull the trigger on the project and started designing what my network would look like. Keep in mind that I am a complete novice when it comes to software-based routing, so weigh this post accordingly.

The criteria

The main goal of the dn42 network is to learn, while causing as few headaches as possible. Given this, I had a few criteria that I wanted my network to meet:

  1. The entire infrastructure should be ephemeral.
    I should be able to completely erase any node in the network with the flick of a wrist. Of course, the same applies to creating nodes.
  2. All nodes should be managed by some sort of central management system.
    I want to centrally control what nodes are deployed and how they are configured.
  3. The dn42 network should be accessible from an L3VPN inside my MPLS cloud.
    To provide reachability into the dn42 network I will use a dedicated L3VPN inside my MPLS cloud. I can then use PAT to allow any one of my networks to access the dn42 network.

The cloud

I wanted to use cloud-based hosting for this. After searching for a bit I landed on using Azure for hosting the webpage https://dn42.gurfin.se, while using Hetzner and Cleura to host the dn42 nodes themselves.

I would use Ansible along with GitLab's CI/CD pipelines to deploy and configure everything.
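As a rough sketch, a minimal .gitlab-ci.yml for this could look something like the snippet below. The file names, stage names and branch rule are just placeholders for illustration, not my actual pipeline:

```yaml
# Minimal sketch of a GitLab CI pipeline that runs Ansible.
# File names, stages and variables are hypothetical placeholders.
stages:
  - deploy

deploy_dn42:
  stage: deploy
  image: python:3.12-slim          # any image with Python works for installing Ansible
  before_script:
    - pip install ansible          # pull Ansible into the CI job
  script:
    # Run the playbook against the node inventory kept in the repo
    - ansible-playbook -i inventory/hosts.yml site.yml
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # only deploy from the main branch
```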

The design

dn42 diagram

I want to interact with my dn42 network using a git-based workflow. This gives me a lot of benefits, such as custom pipelines, version control, easy collaboration and more.

The overall design boils down to deploying and configuring everything automatically, with Ansible as the main tool handling the infrastructure.
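To give an idea of what that means in practice, the Ansible inventory could be as simple as a YAML file grouping the nodes per cloud provider. The hostnames and addresses below are made up for the example:

```yaml
# inventory/hosts.yml - sketch of a per-provider node inventory (names and IPs are made up)
all:
  children:
    hetzner:
      hosts:
        dn42-hel1:
          ansible_host: 203.0.113.10
    cleura:
      hosts:
        dn42-sto1:
          ansible_host: 203.0.113.20
  vars:
    ansible_user: ubuntu
```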

The dn42 nodes

The dn42 nodes all run Ubuntu Server, with bird2 for routing and WireGuard and strongSwan for overlay tunneling. Each node has at least 4 GB of RAM and at least a single core. These servers can easily be redeployed with more allocated hardware should the requirements grow in the future.
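A node playbook along these lines could then install the routing and tunneling stack and push the bird2 configuration. The package set comes from the post itself; the template path and handler are assumptions of mine, not the actual playbook:

```yaml
# site.yml - sketch of node provisioning (templates and handler names are hypothetical)
- hosts: all
  become: true
  tasks:
    - name: Install routing and tunneling packages
      ansible.builtin.apt:
        name:
          - bird2
          - wireguard
          - strongswan
        state: present
        update_cache: true

    - name: Deploy bird2 configuration
      ansible.builtin.template:
        src: templates/bird.conf.j2     # hypothetical template kept in the repo
        dest: /etc/bird/bird.conf
      notify: reload bird

  handlers:
    - name: reload bird
      ansible.builtin.service:
        name: bird
        state: reloaded
```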

Monitoring

As for monitoring, I have so far only planned for cloud-based ICMP monitoring of the nodes. I hope to do this using UptimeRobot, but in the future I may explore other monitoring options to get actual metrics from the dn42 nodes.