Virtualization technology, whereby multiple operating systems can run on shared hardware, is extremely well understood, if somewhat inefficient in its use of resources. The original virtualization architecture was based on running a number of virtual machines (VMs). Every VM has to run its own instance of an operating system, resulting in duplicated responsibilities. Such an infrastructure is also hard to manage, because every server is an independent virtual machine.
Container technologies, like Docker (often orchestrated with Kubernetes), aim for the same isolation as virtual machines while eliminating the duplication of effort between them. Instead of loading an entire operating system for each app, containers share the kernel of the host OS while bundling app-specific libraries and programs. By adjusting the container and its image, it is possible to fine-tune the specific libraries and configuration your app will use. This results in performance gains without the overhead of running an entire OS per application.
The container-based approach has its downsides. Software has to be adapted for use in containers (containerized), and this can get tricky, especially with legacy codebases. Containers also expose many more configuration options for resource allocation and interoperability, so it is quite easy to misconfigure them.
The next logical step in the progression from VMs to containers is unikernels, which push the concepts behind containers even further. A unikernel is effectively a set of pre-built binary libraries linked directly with the application; it does not handle resource allocation itself. A hypervisor handles direct hardware interaction, and all application-specific system calls are pushed as close to the app as possible. Lynx views unikernels as able to deliver the security strengths of VM-level partitioning with the speed and footprint benefits attributed to containers.
More on unikernel technology & our focus
Unikernels are not new. This piece written by Ericsson in 2016 illustrates the distinction between the different architectural approaches. There are, however, several issues associated with unikernels that have limited their applications until now. These include:
Debugging. Since a unikernel runs no OS whatsoever, the usual approach of connecting directly to its shell and investigating does not work.
Producing unikernel images is complicated and requires deep knowledge of the subject.
Current application frameworks have to be adapted, and documentation on their use in unikernels has to be produced.
The lack of a safety-certifiable/certified unikernel for mission-critical applications.
To provide unikernel technology that is ready for deployment in mission-critical applications, a number of areas need to be addressed:
POSIX and ARINC compliance. The industry is shifting to more modular software architectures, which drives a need to align with popular APIs like POSIX (and, in the US market, the FACE standard driven by the US Army).
Size. The larger an operating system is, the more time and cost is associated with taking the technology through a certification process like DO-178. Source Lines of Code (SLOC) is commonly used as an approximate measure of software complexity; for the most stringent levels of that standard, it takes an average of 2-4 hours to take a single line of source code through the process.
Scheduling. Several open source unikernels support a preemptive fair scheduling algorithm between tasks, meaning processor time is distributed equally among system users or groups. For mission-critical systems, we see a need for non-preemptive scheduling, where the CPU remains allocated to a process until it terminates or switches to the waiting state.
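As a rough illustration of the SLOC point above, the certification effort scales linearly with code size at the quoted 2-4 hours per line. The sketch below uses hypothetical code sizes (the 50,000- and 5,000-line figures are invented for illustration, not measurements of any real kernel):

```python
# Back-of-the-envelope DO-178 certification effort, using the
# 2-4 hours-per-SLOC range quoted above. Code sizes are hypothetical.
HOURS_PER_SLOC = (2, 4)  # (low, high) hours per source line

def cert_effort_hours(sloc):
    """Return the (low, high) estimated certification effort in hours."""
    low, high = HOURS_PER_SLOC
    return sloc * low, sloc * high

# A hypothetical 50,000-SLOC OS vs a 5,000-SLOC unikernel image:
print(cert_effort_hours(50_000))  # (100000, 200000)
print(cert_effort_hours(5_000))   # (10000, 20000)
```

An order-of-magnitude reduction in code size translates directly into an order-of-magnitude reduction in certification hours, which is why footprint matters so much here.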
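The scheduling distinction above can be sketched in a few lines. This is an illustrative simulation, not the API of any particular unikernel or RTOS; the task burst times are invented. It contrasts non-preemptive run-to-completion scheduling, where each task keeps the CPU until it finishes, with a preemptive round-robin (fair-share) scheduler over the same workload:

```python
def non_preemptive_fifo(bursts):
    """Each task runs to completion before the next one starts."""
    t, finish = 0, []
    for b in bursts:
        t += b  # task holds the CPU for its whole burst
        finish.append(t)
    return finish

def round_robin(bursts, quantum):
    """Preemptive fair sharing: each runnable task gets `quantum` ms in turn."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    t = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                run = min(quantum, r)  # preempt after one quantum
                t += run
                remaining[i] -= run
                if remaining[i] == 0:
                    finish[i] = t
    return finish

bursts = [30, 10, 20]  # hypothetical task burst times (ms)
print(non_preemptive_fifo(bursts))       # [30, 40, 60]
print(round_robin(bursts, quantum=10))   # [60, 20, 50]
```

Note that the first task finishes at 30 ms under run-to-completion but at 60 ms under round-robin: fair sharing improves average responsiveness at the cost of the predictable, uninterrupted execution that mission-critical workloads need.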
This is where our focus lies, and we are excited to announce more details about our journey in this area very soon.