Reaching out to home computers
By Toshihito Kikuchi
Kinesis Network transforms underutilized computing resources into scalable, on-demand compute services. We expect most of these resources to be home computers or on-premise corporate servers that don’t have public IP addresses. To achieve our goal, it’s essential to make these computers accessible from the Internet, while also providing a unified, automated way to join the network. Asking the owners of these devices to modify router or other network settings is not an option. In this article, I’m going to briefly explain how we deal with computers that don’t have public IP addresses. By the way, we call these computers “Global Nodes”, in contrast to “Datacenter Nodes”, which are assigned public IP addresses.
Whatever approach we take, we need a globally accessible server that works as a proxy to reach global nodes. In an earlier version of our product, we chose SSH and implemented a custom channel to forward each TCP connection. It worked well for a while, but we discontinued this approach because of scalability concerns and high maintenance costs.
The next technology we chose was WireGuard. We originally spotlighted it to solve another networking challenge, and then realized it could replace SSH for reaching Global Nodes. WireGuard is, at its core, a VPN. The basic idea is to install WireGuard on both the proxy side and the global node side, and to redirect traffic arriving at the proxy to the node through the WireGuard network. In this setup, we call the proxy role “AppProxy”, in contrast to “NodeProxy”, which proxies a node’s admin channels (explained later).
A picture is worth a thousand words. Here’s the entire topology of a global node hosting an App with a single port.

In this diagram, you can see two outer boxes, Global Node and AppProxy. Both of them are Kinesis nodes, where Dynamo runs and manages Kinesis Apps. Traffic comes from the right side. A node accepts two types of traffic: normal app traffic, such as HTTP for Nginx, and admin channels, which host live Docker log streams via gRPC and exec sessions via WebSocket. Admin channels are usually consumed by the Portal. An AppProxy opens publicly accessible ports for these services, and on top of them, we run a Load Balancer for App ports and a NodeProxy for admin channels.
To establish a WireGuard network, we need to install WireGuard on both the Global Node and the AppProxy. Instead of installing WireGuard directly on these hosts, we decided to “kinesify” WireGuard and install it as a Kinesis app [1]. This design drastically simplifies our landscape because we can reuse most of the code that sets up WireGuard. We call the WireGuard instance on an AppProxy the “Frontend Gateway”, and the one on a Global Node the “Backend Gateway”. As the diagram shows, we use 10.200.100.1 and 10.200.100.2 for all WireGuard networks. Using the same subnet everywhere simplifies our code. There is no specific reason why I chose this subnet; maybe I should have chosen a more exotic one like 31.41.59.0/24. For the Docker bridge networks, Dynamo selects an unused subnet among 192.168.0.0/24, 192.168.1.0/24, and so on. This means a global node cannot run more than 255 apps (plus one network for admin channels). I think that’s a reasonable limit, and although the number of network bits is currently hardcoded, we can raise the limit by changing it.
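The gateway pair boils down to a standard WireGuard peer configuration. Here is a minimal sketch; the keys, the endpoint, and the listen port are placeholders, not our actual values:

```ini
# Frontend Gateway (on AppProxy) -- placeholder keys and port
[Interface]
Address = 10.200.100.1/24
ListenPort = 51820            ; in practice this port is assigned dynamically
PrivateKey = <frontend-private-key>

[Peer]
PublicKey = <backend-public-key>
AllowedIPs = 10.200.100.2/32  ; only the backend gateway's address
```

```ini
# Backend Gateway (on the Global Node) -- placeholder keys and endpoint
[Interface]
Address = 10.200.100.2/24
PrivateKey = <backend-private-key>

[Peer]
PublicKey = <frontend-public-key>
Endpoint = <appproxy-public-ip>:51820
AllowedIPs = 10.200.100.1/32
PersistentKeepalive = 25      ; keeps the NAT mapping open so AppProxy can reach us
```

Note that only the backend needs an `Endpoint`: since the Global Node sits behind NAT, it must initiate the handshake, and `PersistentKeepalive` keeps the return path alive.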
Once a WireGuard network is established, a frontend gateway can reach a backend gateway via its WireGuard IP address, even though the Global Node doesn’t have a global IP address.
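For example, reachability can be checked from the AppProxy side with standard tools (the interface name `wg0` is an assumption here, not necessarily what our gateway app uses):

```shell
# From the AppProxy: confirm the tunnel has completed a recent handshake,
# then reach the backend gateway over its WireGuard address
wg show wg0 latest-handshakes
ping -c 1 10.200.100.2
```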
The next step is to redirect traffic arriving at the AppProxy to the Global Node. We simply configure DNAT to redirect all traffic hitting the AppProxy to the Global Node’s address on the WireGuard network. To do this, we leverage a Kinesis App feature that lets us plant arbitrary files on a node and execute arbitrary commands on startup. Here are the actual commands on the AppProxy:
iptables -t nat -A PREROUTING -p tcp -j DNAT --to-destination 10.200.100.2
iptables -A FORWARD -p tcp -d 10.200.100.2 -j ACCEPT
iptables -t nat -A POSTROUTING -p tcp -d 10.200.100.2 -j MASQUERADE
For App ports, we configure one more DNAT rule to redirect traffic reaching the Global Node to the actual user app, which is on the same Docker bridge network. We create a custom network per App, so traffic to each App is completely isolated on a global node.
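On the Global Node side, this second hop looks roughly like the rules below; the bridge address 192.168.0.2 and the ports are hypothetical examples, not our actual values:

```shell
# Hypothetical example: forward the App's exposed port (here 8080) arriving
# over WireGuard to the App container on its Docker bridge network (192.168.0.2:80)
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.0.2:80
iptables -A FORWARD -p tcp -d 192.168.0.2 -j ACCEPT
iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.2 -j MASQUERADE
```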
Since the admin channels are hosted by the Dynamo Admin Server, which runs on the host, we cannot use DNAT unless we make the host network accessible from a Docker container, which is not recommended for security reasons. Therefore, we decided to include SSH in the gateway app and have the Dynamo Admin Server listen on the admin channel ports inside a backend gateway via SSH remote port forwarding.
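Conceptually, what the Dynamo Admin Server does over SSH is plain remote port forwarding; the port numbers and host name below are illustrative, not our actual values:

```shell
# Illustrative: expose the host-side admin listeners inside the backend gateway
# container. Each -R makes sshd in the gateway listen on a port and tunnel
# connections back to the Dynamo Admin Server on the host.
ssh -N \
  -R 50051:localhost:50051 \
  -R 50052:localhost:50052 \
  admin@backend-gateway
```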
That’s the overview of our global nodes. How does the network set all of this up? Here is the flow that establishes admin channels when a global node joins the network:
Backend selects an AppProxy node to pair with
Backend installs a frontend gateway app on the AppProxy
Dynamo fetches dynamically assigned ports (WG’s listener, admin channels) and sends them back to the backend
Backend registers exposed admin channels with NodeProxy
Backend installs a backend gateway app on the global node with the config to connect to the AppProxy
Throughout the flow, the Dynamo Admin Server keeps asking Dynamo for an SSH endpoint. Once one is available, it automatically connects and listens on ports inside a backend gateway.
When the network receives a request to run an App on a global node, it repeats almost the same steps to install the frontend/backend gateways on both nodes.
[1] Kinesified Gateway App: https://github.com/kinesis-network/gateway-container