Virtualization and Containers + Decentralized Networks

Virtualization and containers have had a profound impact on computing – setting off innovation and scalability at a pace unforeseeable just a few years prior. These innovations ushered in cloud computing with its ability to rent servers by the hour. This was soon followed by microservice and serverless architectures with compute power measured in seconds and microseconds. Having highly scalable and concurrent compute resources available with low cost on-demand pricing has given birth to many innovative business models and made exponential growth achievable – all without massive retooling or significant capital outlay.

Decentralization is at a similar stage to where virtualization and containers were a decade or more ago. The fundamental principles, enabling components, and economic models are in place and in heavy use by a growing number of innovators and pioneers. Use cases have moved from paper outlines to production realities, and the ecosystem is growing rapidly across all layers of the technical stack.

The combination of these two technologies will be one of those cases where the whole is greater than the sum of its parts. Using proven technology for deploying and scaling server infrastructure in combination with a revolutionary network design allows the latter (the network) to accrue the benefits of the former – namely tremendous agility, scale, and cost efficiency.

In short, virtualization and containers will bring to decentralized networks effects similar to those observed previously in cloud computing – improvements of several orders of magnitude in flexibility and scale.

Virtualized and Containerized Subnodes

In a virtualized and decentralized network, subnodes are virtualized within a full node. This virtualization can be enabled via a containerized architecture to provide high-grade performance along with increased optionality for decentralized application developers. This combination of performance and flexibility is similar to traditional centralized cloud and microservice systems. Subnodes are divided into several main components, each encapsulated via a dockerized Linux OS – allowing each node to be hosted in an OS-agnostic manner.

Some Background on Containers, LXC, and Docker

Before we get too deep into this type of decentralized architecture, let’s start with some background on containers and some of the underlying technical components – namely LXC and Docker.

Containers

A container is a standard unit of software that packages up code and all its dependencies so an application runs quickly and reliably from one computing environment to another. A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

Container images become containers at runtime. The goal of containerized software is that it will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences between, for instance, development and staging environments.
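To make the image-versus-container distinction concrete, here is a minimal, hypothetical image definition for a small Python service (the file names, port, and commands are illustrative, not taken from any real project). The image bundles the runtime, libraries, code, and settings; running it produces a container.

```dockerfile
# Hypothetical image for a small service. Everything the application
# needs is packaged here: runtime, system libraries, code, and settings.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV SERVICE_PORT=8080
CMD ["python", "service.py"]
```

Building with `docker build` produces the image; `docker run` turns that image into a live container, identical on a laptop, a staging server, or production hardware.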

Containers also help improve operational efficiency and deployment agility. The use of containers along with an effective container management system gives devops teams the ability to build, deploy, and reprovision servers, applications, microservices, and even tasks.

Linux Container (LXC)

LXC (LinuX Container) is an operating system–level virtualization method that provides a secure way to isolate one or more processes from other processes running on a single Linux system. By using containers, resources can be isolated, services restricted, and processes provisioned to have a private view of the operating system with their own process ID space, file system structure, and network interfaces. Multiple containers can share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory, and I/O. As a result, applications, workers, and other processes can be set up to run as multiple lightweight isolated Linux instances on a single host.
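For illustration, the per-container resource constraints described above can be expressed in an LXC container's configuration file. The container name, interface, and limit values below are arbitrary examples, and the exact cgroup keys depend on whether the host uses cgroup v1 or v2:

```
# Illustrative LXC config (cgroup v2 hosts)
lxc.uts.name = mycontainer

# Constrain the container to 2 CPUs and 1 GiB of memory
lxc.cgroup2.cpuset.cpus = 0-1
lxc.cgroup2.memory.max = 1G

# Give the container its own network namespace via a veth pair
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
```

Each container configured this way shares the host kernel but sees only its own slice of CPU, memory, and network resources.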

Docker and Docker Engine

Docker is built on top of LXC, enabling image management and deployment services. Note that Docker is not a replacement for LXC, as LXC refers to capabilities of the Linux kernel (specifically namespaces and control groups) which provide the ability to sandbox processes from one another as well as control their resource allocations. On top of this low-level foundation of kernel features, Docker offers high-level tooling that includes portable deployment across machines, application-centric usability, automatic build, versioning, component reuse, sharing, a tool ecosystem, and more.

Docker Engine is a server-side runtime that consists of a long-running daemon process (called dockerd) along with APIs which specify interfaces to communicate with and instruct the Docker daemon. Through Docker Engine, servers can be configured on a very granular level, instantiating Docker images on a persistent or ephemeral basis to segment and manage server resources (or, as in the case of the SKALE Network, node resources).

Virtualization and Containers within the SKALE Network

The SKALE Network is one of the first networks to make use of virtualization within its node architecture. By combining virtualization and containerization with a unique pooled validation model – one that employs random node selection and frequent node rotation to enhance network security – the SKALE Network is showing that security, scale, and economic viability do not have to be at odds.

The SKALE Network is a configurable network of elastic sidechains that supports high-throughput and low-latency transactions without the high transaction costs found in public mainnets. The network readily ties into the Ethereum mainnet and offers expanded storage capabilities along with embedded connectivity and interchain messaging. All of this is performed using a pooled transaction validation and security model that is efficient, scalable, and collusion-resistant.

Virtualized Subnodes

Each elastic sidechain is composed of a collective of randomly appointed nodes which run the SKALE daemon and perform the SKALE consensus process. Nodes in the SKALE Network are not restricted to a single chain but rather can work across multiple sidechains via the use of virtualized subnodes.
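The random-appointment idea can be sketched in a few lines of Python. This is a minimal illustration only – the pool size, committee size, and seeding scheme below are hypothetical, not SKALE's actual parameters or selection algorithm:

```python
import random

def select_committee(node_pool, committee_size, seed):
    """Pick a random set of nodes to run one sidechain.

    Seeding from a shared, unpredictable value (e.g. on-chain
    randomness) lets every participant derive the same committee
    while keeping the choice hard for an attacker to anticipate.
    """
    rng = random.Random(seed)
    return rng.sample(node_pool, committee_size)

pool = [f"node-{i}" for i in range(100)]      # hypothetical registered nodes
committee = select_committee(pool, 16, seed=42)
print(len(committee), len(set(committee)))    # 16 distinct nodes drawn from the pool
```

Frequent rotation then amounts to re-running the selection with a fresh seed, so no sidechain keeps the same validator set for long.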

In other words, each node is virtualized and is able to participate as a validator for an independent number of sidechains via this subnode architecture. This multiplex capability is made possible via a containerized architecture deployed within each node in the Network.

Validator Nodes Consist of a Node Core and up to 128 Virtualized Subnodes

How Containers Are Used in the SKALE Network

Container images are used to stand up SKALE validator nodes as well as segment them into up to 128 subnodes. Validators first provision one or more servers – in a manner that meets the operational specs for nodes in the network – and then register them with the SKALE Manager. (See the SKALE Validator FAQ for details on the server requirements.)

The SKALE Manager is a set of smart contracts running on the Ethereum mainnet that control the operation of the nodes in the network. This point bears repeating because it captures the core difference between centralized and decentralized computing: in a centralized cloud network, computing resources are managed by a central hub, whereas in the SKALE Network, node resources are managed by smart contracts that live on the Ethereum mainnet. This transparent and deterministic approach gives the network an extreme degree of autonomy and independence.

In the case of new validator nodes, the SKALE Manager will provision the servers provided by the validator by installing Docker Engine along with the SKALE Admin plugin. This plugin has been specifically designed to take input from the SKALE Manager to perform the operations as dictated by a particular smart contract.

Throughout this document, the term ‘SKALE Manager’ refers to one or more of these Ethereum mainnet smart contracts that address a particular function or operation in the provisioning and operation of the network's node pool.

Benefits of Containerization

Much has been written about the benefits of containers in cloud computing and within serverless and microservice-based architectures. Most of these same benefits also apply here, albeit in some cases with a twist given the decentralized nature of the system.

Operational Agility

A large benefit of using Docker containers is that they make node setup and operation easy and relatively foolproof. Node operators in the SKALE Network simply need to provision the appropriate hardware, install Docker Engine and the SKALE Admin plugin, and then register the node with the SKALE Network. The SKALE Network in turn, via the SKALE Manager, will set up the nodes, provision the subnodes, assign and reassign sidechains, collect node-provided metrics, and more. The operational agility provided by containers reduces the amount of devops resources node operators need to deploy.

Network Resilience

Related to operational agility is network resilience. Containers make it relatively easy to address server outages by providing a quick change-over mechanism: a failed server is deprovisioned and a new set of virtualized subnodes is activated on a replacement server. Because subnodes are built from standard images, new containers can be launched and brought online quickly so that traffic can be redirected to working nodes.
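The change-over logic reduces to swapping container assignments from a failed server to a healthy standby. A minimal sketch, in which the node names and health check are hypothetical stand-ins:

```python
def replace_failed_nodes(chain_nodes, pool, is_healthy):
    """Return a new validator list for a chain, swapping any failed
    member for a healthy standby from the wider node pool."""
    standbys = [n for n in pool if n not in chain_nodes and is_healthy(n)]
    result = []
    for n in chain_nodes:
        if is_healthy(n):
            result.append(n)
        else:
            # Launch the standard subnode image on the standby and
            # redirect the failed node's traffic to it.
            result.append(standbys.pop(0))
    return result

chain = ["node-1", "node-2", "node-3"]
pool = ["node-1", "node-2", "node-3", "node-4", "node-5"]
healthy = lambda n: n != "node-2"            # pretend node-2's server failed
print(replace_failed_nodes(chain, pool, healthy))
```

Because every subnode starts from the same standard image, the replacement needs no bespoke setup – the new container simply syncs state and joins the chain.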

Dynamic Chain Sizing

The use of Docker gives the SKALE Network the ability to dynamically allocate resources within the node to map resource usage with sidechain size. User level resources include the elastic blockchain size, file storage capacity, and transaction throughput. Under the hood, containers manage memory, CPU usage, I/O usage, and other system/kernel-level resources. Effective resource management lets each sidechain operate at the appropriate service levels while maintaining a secure and fault-tolerant operating environment.
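Dynamic sizing can be sketched as allocating each sidechain a fraction of a node's resources according to its size tier. The tiers, fractions, and node specs below are hypothetical examples, not SKALE's actual schedule:

```python
# Hypothetical fraction of a node's resources per chain size tier.
CHAIN_FRACTION = {"small": 1 / 128, "medium": 1 / 16, "large": 1.0}

NODE_CPUS = 8        # hypothetical node hardware spec
NODE_MEM_GB = 32

def subnode_limits(size):
    """Compute container resource limits for one subnode of a given
    chain size - e.g. values that could be passed to Docker's
    --cpus and --memory flags."""
    f = CHAIN_FRACTION[size]
    return {"cpus": NODE_CPUS * f, "mem_gb": NODE_MEM_GB * f}

print(subnode_limits("medium"))
```

Under this scheme, resizing a sidechain is just a matter of tearing down its subnode containers and relaunching them with a different resource fraction, which the container runtime enforces at the kernel level.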

Transparency

The use of Docker containers also provides significant transparency in the context of nodes and subnodes. Open-source software is a fundamental precept in decentralized computing given that it is critical to know the components that make up a solution and be able to validate each of them. This is no different with respect to the composition of the Docker images for the nodes and subnodes in the SKALE Network. Packaging these components as Docker images means that any user, developer, validator, or other interested party can easily inspect each image to ascertain and verify its contents. It also lets automated software packages run tests on the images to satisfy any compositional and/or security concerns.
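At bottom, image verification is a comparison of content digests: an image or layer is identified by the SHA-256 hash of its bytes, so anyone can check that what they run matches what was published. A minimal sketch with Python's hashlib (the archive bytes here are a stand-in, not a real image):

```python
import hashlib

def verify_image_archive(data: bytes, expected_digest: str) -> bool:
    """Check that an image archive's sha256 content digest matches
    the published value - the same principle container runtimes use
    to verify pulled image layers."""
    actual = "sha256:" + hashlib.sha256(data).hexdigest()
    return actual == expected_digest

archive = b"...image tarball bytes..."                      # stand-in archive
digest = "sha256:" + hashlib.sha256(archive).hexdigest()    # published digest

print(verify_image_archive(archive, digest))      # matching content
print(verify_image_archive(b"tampered", digest))  # altered content fails
```

Because the digest is derived purely from content, a tampered image cannot masquerade as the published one, which is what makes automated audits of node and subnode images practical.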

Composability

Composability is a design principle for complex systems that concerns how components inter-relate and how loosely or tightly coupled they are to one another. As an example, the highly composable design of the EVM makes it possible for the SKALE Network to swap out the Proof of Work consensus algorithm and replace it with a Proof of Stake algorithm. The same composability means this algorithm could later be substituted with a different one to meet the needs of a particular sidechain.

Adaptability

Related to composability is adaptability – the ability of the SKALE Network to add new features, components, and capabilities. For example, chain users might want to run their own extensions to the EVM or swap in a new consensus protocol, or the network might build in support for popular decentralized file storage, zero-knowledge proof protocols, or cross-chain messaging approaches, to name just a few examples. The use of Docker and a containerized architecture lets the SKALE Network more easily include these types of enhancements and make them a standard part of the network.

This post is the first of a two-part series. The second and final portion of the article will address Containers and the SKALE EVM, Network Security via Random Node Selection and Frequent Node Rotation, and more.