Nebula Vizor Engines


The Nebula System is equipped with three hypervisor engines, each tailored to a distinct type of workload. By routing each workload to the engine best suited to it, the Nebula System maximizes the host machine's computational power while minimizing resource consumption, providing a robust and scalable environment for a wide range of applications.


Vizor Container Engine


The LXD-based Vizor Container Engine is a versatile container technology used to build Nexus cloud services. Although categorized as a container technology, it behaves as a hybrid between a virtual machine and a container: it creates guest machines without requiring CPU virtualization technology (such as Intel VT or AMD-V), so it runs like a container while providing isolation and performance comparable to a virtual machine. Within Nexus it is integrated with the Instances-cn module, which handles the creation of Instances-cn machines.

One notable feature of the Vizor Container Engine is its ability to run confined Docker containers inside an Instances-cn container, as shown in the sketch below. For stronger isolation, it can also create virtual machines through the Vizor QEMU Virtual Machine Engine; in Nexus this is exposed as the Instances-vm interface, which is technically a virtual machine but behaves like a managed container. In addition, the Vizor Container Engine can automatically make Nvidia GPU devices available through nested GPU technology.
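
Under the hood this maps onto standard LXD primitives. The sketch below is illustrative rather than the Nexus API: it assumes direct access to the host's LXD daemon through the open-source pylxd Python client, and the instance name and image alias are placeholder values. The security.nesting flag is the LXD setting that permits confined Docker containers to run inside the instance.

    from pylxd import Client

    # Connect to the local LXD daemon over its Unix socket.
    client = Client()

    # Create a system container; no Intel VT / AMD-V is needed for this.
    # The instance name and image alias are illustrative values.
    instance = client.instances.create(
        {
            "name": "demo-cn",
            "source": {
                "type": "image",
                "mode": "pull",
                "server": "https://images.linuxcontainers.org",
                "protocol": "simplestreams",
                "alias": "ubuntu/22.04",
            },
            # Nesting is what allows Docker to run inside the container.
            "config": {"security.nesting": "true"},
        },
        wait=True,
    )
    instance.start(wait=True)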


Vizor QEMU Virtual Machine Engine


The QEMU-based Vizor Virtual Machine Engine is a flexible virtualization technology that can emulate CPU, RAM, and network interfaces, making it adaptable to various computing architectures. Currently, this engine is utilized by the Nexus Instances-vm interface to emulate x64 architectures for virtual machine creation. Development is also in progress to add support for ARM architecture, which will soon enter Beta testing. Note that to create an Instances-vm, the host system must support CPU virtualization technology, such as Intel VT or AMD-V.
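
Internally the engine drives a qemu-system process. The following is a minimal sketch of an equivalent manual invocation, assuming qemu-system-x86_64 is installed on the host; the disk path and sizing are illustrative, and in practice the Nexus Instances-vm interface assembles this configuration for you.

    import os
    import subprocess

    # Hypothetical disk image path; Nexus provisions this automatically.
    DISK = "/var/lib/nexus/images/demo-vm.qcow2"

    # Instances-vm requires hardware virtualization; /dev/kvm only exists
    # when the host CPU and kernel expose Intel VT or AMD-V support.
    if not os.path.exists("/dev/kvm"):
        raise SystemExit("CPU virtualization (Intel VT / AMD-V) not available")

    subprocess.run(
        [
            "qemu-system-x86_64",
            "-enable-kvm",      # use KVM acceleration instead of pure emulation
            "-cpu", "host",     # expose the host CPU model to the guest
            "-m", "4096",       # 4 GiB of guest RAM
            "-smp", "2",        # two virtual CPUs
            "-drive", f"file={DISK},format=qcow2",
            "-nic", "user",     # simple user-mode network interface
        ],
        check=True,
    )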


Vizor KVM Virtual Machine Engine


The KVM-based Vizor Virtual Machine Engine is a Type 1 hypervisor technology that leverages the host's resources to run multiple virtual machine guests, each capable of running unmodified Linux or Windows images. Each virtual machine has its own private virtualized hardware: a network card, disk, graphics adapter, and more. The host system must support CPU virtualization technology, such as Intel VT or AMD-V, for this engine to function. In Nexus, this engine is used by the Instances-xvm interface.

The Vizor KVM Virtual Machine Engine also supports the following technologies:

  • PXE Boot:
    Allows operating systems to be booted over the network (illustrated in the sketch after this list).

  • TPM 2.0:
    Provides support for operating systems that require a TPM-backed secure boot environment (also shown in the sketch below).

  • Template Driver:
    Facilitates the use of pre-configured templates for creating and managing virtual machines and linked clones.

  • Nvidia vGPU:
    Enables Nvidia vGPU devices to be used for AI and graphics-intensive computing tasks.
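
As a rough illustration of the first two items, this is how the underlying QEMU/KVM layer can be asked to PXE-boot a guest with an emulated TPM 2.0 device attached. It is a sketch only, not the Nexus interface, and it assumes a swtpm process is already serving the (illustrative) socket path shown.

    import subprocess

    # Hypothetical socket path; a swtpm process must already be serving it:
    #   swtpm socket --tpm2 --tpmstate dir=/tmp/demo-tpm \
    #       --ctrl type=unixio,path=/tmp/demo-tpm/swtpm.sock
    TPM_SOCK = "/tmp/demo-tpm/swtpm.sock"

    subprocess.run(
        [
            "qemu-system-x86_64",
            "-enable-kvm", "-cpu", "host", "-m", "4096",
            "-boot", "n",        # PXE: try the network boot ROM first
            "-nic", "user",
            # TPM 2.0: attach the swtpm emulator through a character device.
            "-chardev", f"socket,id=chrtpm,path={TPM_SOCK}",
            "-tpmdev", "emulator,id=tpm0,chardev=chrtpm",
            "-device", "tpm-tis,tpmdev=tpm0",
        ],
        check=True,
    )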

Vizor KVM Drivers


The default KVM driver for this Vizor engine configures a Standard VGA display device, an Intel PRO/1000 1 Gbit network device, and a SATA disk interface for the guest machine. This setup offers broad guest compatibility and good performance for typical workloads.
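
For orientation, the default device set corresponds to a QEMU invocation along the following lines. This is a sketch with a hypothetical disk path; Nexus generates the actual configuration.

    import subprocess

    # Hypothetical disk image path; Nexus generates the real configuration.
    DISK = "/var/lib/nexus/images/demo-xvm.qcow2"

    subprocess.run(
        [
            "qemu-system-x86_64",
            "-enable-kvm", "-cpu", "host", "-m", "4096",
            "-vga", "std",                 # Standard VGA display device
            "-nic", "user,model=e1000",    # Intel PRO/1000 1 Gbit network device
            # SATA disk: an AHCI controller with the image as a SATA drive.
            "-device", "ich9-ahci,id=ahci",
            "-drive", f"id=d0,file={DISK},format=qcow2,if=none",
            "-device", "ide-hd,drive=d0,bus=ahci.0",
        ],
        check=True,
    )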

Alternatively, the Vizor KVM Virtual Machine Engine supports Virtio paravirtualized drivers, which provide more capable device options. With Virtio, the engine sets up a Virtio graphics device with a virtual GPU, a Virtio network device with speeds of up to 10 Gbit/s, and a Virtio disk device with enhanced I/O capabilities. To use these devices, the guest machine must have Virtio drivers installed. If Virtio drivers are selected during virtual machine provisioning, an additional ISO containing the Virtio drivers is mounted for Windows guest machines; the drivers can then be installed during the Windows setup process. Linux guest machines typically ship with Virtio support built in and need no additional installation.
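
The Virtio variant swaps those devices for their paravirtualized counterparts. Again a sketch under the same assumptions, with hypothetical paths for the disk image and the virtio-win driver ISO.

    import subprocess

    # Hypothetical paths; Nexus mounts the driver ISO automatically when
    # Virtio is selected during provisioning.
    DISK = "/var/lib/nexus/images/demo-xvm.qcow2"
    VIRTIO_ISO = "/var/lib/nexus/iso/virtio-win.iso"

    subprocess.run(
        [
            "qemu-system-x86_64",
            "-enable-kvm", "-cpu", "host", "-m", "4096",
            "-device", "virtio-vga",               # Virtio graphics / virtual GPU
            "-nic", "user,model=virtio-net-pci",   # paravirtualized network device
            "-drive", f"file={DISK},format=qcow2,if=virtio",  # paravirtualized disk
            # The driver ISO is only needed for Windows guests; Linux kernels
            # ship with Virtio support built in.
            "-cdrom", VIRTIO_ISO,
        ],
        check=True,
    )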

We utilize xcware specifically for our external CAD/CAE workforce needs. Our vGPU Workstations outperform our previously used VMware Horizon on the same hardware. More importantly, it is now easier to onboard and scalable for every project.

— Mark K.
IT Manager @ Bielomatik

I rely on xcware for crafting and implementing solutions for my clients due to its scalability and quick setup time for projects. 8 out of 10 customers remain with the initial xcware project setup, streamlining my delivery process.

— Thomas B.
Cloud Solutions Architect

We have successfully migrated 500+ servers and desktops from VMware to xcware. We extend our gratitude to the xcware Consulting Team for delivering exceptional work.

— Franco O.
IT Manager @ SportSA

We were pleasantly surprised by how effortlessly we could construct our Big Data platform and extend it to various production lines across the globe.

— Simone C.
Big Data Engineer @ UBX

As a developer specializing in native cloud solutions, I am delighted that xcware is available for free for developers like me. This allows me to enhance my cloud skills and expand my expertise.

— Sindra L.
Cloud Engineer

My favorite is the Flow-fx engine and the API. With Nexus Flow-fx, you can automate everything, and I mean everything! I manage 150+ Linux servers, fully automated.

— Mirco W.
Linux Administrator @ S&P
