Nebula Images
The Nebula Images are a collection of Docker, Stack, Container, Virtual Machine, and ISO images that are installed on each Sky Node during the deployment process. The table below lists all of these images along with their descriptions.
| Image | Description |
| --- | --- |
| cn-amazon-2023 | Container: Amazon 2023 |
| cn-centos-9 | Container: CentOS 9 |
| cn-debian-12 | Container: Debian 12 |
| cn-fedora-39 | Container: Fedora 39 |
| cn-rocky-9 | Container: Rocky 9 |
| cn-ubuntu-22.04 | Container: Ubuntu 22.04 |
| api.img | Docker: Cloud Core API Service |
| chrome.img | Docker: Cloud Core Sky Chrome Browser |
| cockroach.img | Docker: Cloud Core Magna-sqld Service |
| httpd-2.4.img | Docker: Cloud Core Apache Httpd Service |
| mariadb-11.img | Docker: Cloud Core Magna-sqlr Service |
| massivedb.img | Docker: Cloud Core Magna-db Service |
| minio.img | Docker: Cloud Core S3 Service |
| mqtrigger.img | Docker: Cloud Core Trigger Service |
| mysql8.img | Docker: Cloud Core Magna-sqlr Service |
| nginx-1.25.img | Docker: Cloud Core Nginx Service |
| percona8-1.25.img | Docker: Cloud Core Magna-sqlr Service |
| rabbitmq.img | Docker: Cloud Core Event/MQ Service |
| redpanda.img | Docker: Cloud Core Stream Service |
| registry.img | Docker: Cloud Core Registry Service |
| sshgate.img | Docker: Cloud Core Gate/Notification Service |
| wireguard.img | Docker: Cloud Core VPN Service |
| lambda-nodejs14.img | Docker: Cloud Core Serverless Node 14 Service |
| lambda-nodejs16.img | Docker: Cloud Core Serverless Node 16 Service |
| lambda-nodejs18.img | Docker: Cloud Core Serverless Node 18 Service |
| lambda-nodejs20.img | Docker: Cloud Core Serverless Node 20 Service |
| lambda-python39.img | Docker: Cloud Core Serverless Python 3.9 Service |
| lambda-python310.img | Docker: Cloud Core Serverless Python 3.10 Service |
| lambda-python311.img | Docker: Cloud Core Serverless Python 3.11 Service |
| lambda-python312.img | Docker: Cloud Core Serverless Python 3.12 Service |
| lambda-ruby32.img | Docker: Cloud Core Serverless Ruby 3.2 Service |
| debian-12.iso | ISO: Debian 12 netboot ISO Image |
| rocky-9.iso | ISO: Rocky 9 netboot ISO Image |
| virtio-drivers.iso | ISO: Virtio Windows drivers ISO Image |
| istack-codespace | Container: Cloud Core Code Server Service |
| istack-desktop | Container: Cloud Core Code Desktops Service |
| istack-docker-machine | Container: Cloud Core Code Docker/Magna-node Service |
| istack-gate | Container: Cloud Core Code Gateway/Load Balancer Service |
| istack-git | Container: Cloud Core Code Git Service |
| istack-h2o | Container: Cloud Core Data Bright Service |
| istack-solaris | Container: Cloud Core Data Spark Service |
| istack-superset | Container: Cloud Core Data Insight Service |
| vm-debian-12 | Virtual Machine: Debian 12 |
| vm-rocky-9 | Virtual Machine: Rocky 9 |
| vm-ubuntu-22.04 | Virtual Machine: Ubuntu 22.04 |
Warning
In the following sections, we describe the manual procedures for exporting and importing Instances-cn, Instances-vm, and Instances-xvm machines. These methods should only be used in special circumstances, as a last resort. We highly recommend using the Cloning and Migrating processes provided by Nexus instead.
Exporting Container Images (Instances-cn)
After developing a solution in a container, you may want to export it as an image to reuse on a different Sky Node or to automate your deployment process. You can achieve this by following these steps:
- Open a terminal session as the Nexus Administrator on the Sky Node where the container is located.
- Next, enter the following commands (a worked example follows these steps):

sudo vizor publish CONTAINER_ID --alias TEMP_IMAGE_NAME
sudo vizor image export TEMP_IMAGE_NAME ./FINAL_IMAGE_NAME
sudo vizor image rm TEMP_IMAGE_NAME
ls

Tip: To list all available commands, enter sudo vizor --help.

You should see the FINAL_IMAGE_NAME.tar.gz file in your current directory.
- To download the image file, simply use the Workspaces File Browser.
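As a concrete sketch of the steps above (the container ID web01 and the image names web01-temp and cn-web01 are hypothetical placeholders), the export sequence could look like this:

sudo vizor publish web01 --alias web01-temp
sudo vizor image export web01-temp ./cn-web01
sudo vizor image rm web01-temp
ls   # cn-web01.tar.gz should now be listed in the current directory

Naming the exported file with the cn- prefix up front saves a rename later, since the import procedure below expects images named cn-*.tar.gz.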
Importing Container Images (Instances-cn)
The easiest way to import a container image is to upload it to the target Sky Node through the Sky Node Images dialog. Alternatively, you can upload the image using the Workspaces File Explorer and then follow these steps:
- Open a terminal session as the Nexus Administrator on the target Sky Node.
- Upload the image using the Workspaces File Explorer and ensure that the image name starts with cn- and ends with .tar.gz, for example "cn-mycustom-image.tar.gz".
- Next, enter the following commands:

sudo vizor image import /path/to/cn-mycustom-image.tar.gz
sudo touch /var/node/store/cn-mycustom-image.tar.gz
sudo rm -f /path/to/cn-mycustom-image.tar.gz
This process will import the image into the registry, create a symbolic file for the container, and then delete the imported image to conserve space. Afterward, you can provision a new instance from this image.
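For illustration, assuming the image was uploaded to /home/admin/cn-mycustom-image.tar.gz (a hypothetical path), the full import sequence plus a quick sanity check might look like this:

sudo vizor image import /home/admin/cn-mycustom-image.tar.gz
sudo touch /var/node/store/cn-mycustom-image.tar.gz
sudo rm -f /home/admin/cn-mycustom-image.tar.gz
ls /var/node/store/ | grep cn-mycustom-image   # the marker file should be listed

Note that the file name passed to touch matches the uploaded image name; this sketch assumes the marker in /var/node/store is what makes the image selectable when provisioning a new Instances-cn machine.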
Important
Although the Vizor Container engine is based on LXD/QEMU technology, you can only import images that were exported from Nebula Images. While you can import LXD images, they will not be compatible with the Workspaces service and can only be accessed through a terminal on a Sky Node.
Exporting Virtual Machine Images (Instances-vm)
After developing a solution in a virtual machine, you may want to export it as an image to reuse on a different Sky Node or to automate your deployment process. You can achieve this by following these steps:
- Open a terminal session as the Nexus Administrator on the Sky Node where the virtual machine is located.
- Next, enter the following commands (a worked example follows these steps):

sudo vizor publish VIRTUAL_MACHINE_ID --alias TEMP_IMAGE_NAME
sudo vizor image export TEMP_IMAGE_NAME ./FINAL_IMAGE_NAME
sudo vizor image rm TEMP_IMAGE_NAME
ls

Tip: To list all available commands, enter sudo vizor --help.

You should see the FINAL_IMAGE_NAME.tar.gz file in your current directory.
- To download the image file, simply use the Workspaces File Browser.
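The procedure mirrors the container export above; as a hypothetical example (the virtual machine ID db01 and the image names are placeholders), only the vm- prefix on the exported file changes:

sudo vizor publish db01 --alias db01-temp
sudo vizor image export db01-temp ./vm-db01
sudo vizor image rm db01-temp
ls   # vm-db01.tar.gz should now be listed in the current directory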
Importing Virtual Machine Images (Instances-vm)
The easiest way to import a virtual machine image is to upload it to the target Sky Node through the Sky Node Images dialog. Alternatively, you can upload the image using the Workspaces File Explorer and then follow these steps:
- Open a terminal session as the Nexus Administrator on the target Sky Node.
- Upload the image using the Workspaces File Explorer and ensure that the image name starts with vm- and ends with .tar.gz, for example "vm-mycustom-image.tar.gz".
- Next, enter the following commands:

sudo vizor image import /path/to/vm-mycustom-image.tar.gz
sudo touch /var/node/store/vm-mycustom-image.tar.gz
sudo rm -f /path/to/vm-mycustom-image.tar.gz
This process will import the image into the registry, create a symbolic file for the virtual machine, and then delete the imported image to conserve space. Afterward, you can provision a new instance from this image.
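As with container images, a concrete run (using the hypothetical upload path /home/admin/vm-mycustom-image.tar.gz) would be:

sudo vizor image import /home/admin/vm-mycustom-image.tar.gz
sudo touch /var/node/store/vm-mycustom-image.tar.gz
sudo rm -f /home/admin/vm-mycustom-image.tar.gz

The vm- prefix is required here, just as cn- is for container images.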
Important
Although the Vizor Virtual Machine engine is based on LXD technology, you can only import images that were exported from Nebula Images. While you can import LXD images, they will not be compatible with the Workspaces service and can only be accessed through a terminal on a Sky Node.
Exporting X Virtual Machine Images (Instances-xvm)
After developing a solution in an X virtual machine, you may want to export it to reuse it on, or move it to, a different Sky Node, or to automate your deployment process. You can achieve this by following these steps:
- Open a terminal session as the Nexus Administrator on the Sky Node where the X virtual machine is located.
- Next, enter the following commands (a worked example follows these steps):

sudo vizorx destroy VIRTUAL_MACHINE_ID   # This will stop the machine
sudo vizorx dumpxml VIRTUAL_MACHINE_ID > /path/to/folder/VIRTUAL_MACHINE_ID.xml
sudo cp /var/node/vm/VIRTUAL_MACHINE_ID/VIRTUAL_MACHINE_ID-1.disk /path/to/folder/VIRTUAL_MACHINE_ID-1.disk

Tip: To list all available commands, enter sudo vizorx --help.

You can now transfer those two files to another Sky Node.
- To download the files, simply use the Workspaces File Browser.
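For example, with a hypothetical X virtual machine ID of xvm101 and /home/admin/export as a destination folder writable by the Nexus Administrator, the export would be:

sudo vizorx destroy xvm101   # stops the machine
sudo vizorx dumpxml xvm101 > /home/admin/export/xvm101.xml
sudo cp /var/node/vm/xvm101/xvm101-1.disk /home/admin/export/xvm101-1.disk

Both the .xml definition and the -1.disk file are needed on the target Sky Node.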
Importing X Virtual Machine Images (Instances-xvm)
Typically, you would use the migration process to move your X virtual machine. However, there may be situations where you need to manually upload the files using the Workspaces File Explorer. In such cases, follow these steps:
- Open a terminal session as the Nexus Administrator on the target Sky Node.
- Upload both the disk and XML files to a location on the Sky Node using the Workspaces File Explorer.
- Next, enter the following commands:

sudo cp /path/to/VIRTUAL_MACHINE_ID/VIRTUAL_MACHINE_ID-1.disk /var/node/vm/VIRTUAL_MACHINE_ID/VIRTUAL_MACHINE_ID-1.disk
sudo vizorx define /path/to/VIRTUAL_MACHINE_ID.xml
sudo vizorx start VIRTUAL_MACHINE_ID
This process defines the X virtual machine within the system. However, if the machine was not previously registered with the same ID in the Nexus database, it will not appear in the Nexus inventory. In such cases, you need to create a new X virtual machine, configure it without installing an operating system, and then move only the disk file to the new machine's location, renaming it to match the new ID. Finally, skip the previously mentioned "sudo vizorx define" step.
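A minimal sketch of that recovery path, assuming OLD_ID is the ID from the exported files and NEW_ID is the ID Nexus assigned to the freshly created X virtual machine (both are placeholders), would be:

sudo vizorx destroy NEW_ID   # stop the new machine if it is running
sudo cp /path/to/OLD_ID-1.disk /var/node/vm/NEW_ID/NEW_ID-1.disk   # the disk takes the new ID's name
sudo vizorx start NEW_ID

Because the replacement machine was created through Nexus, it is already defined, which is why the sudo vizorx define step is skipped.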
Publishing Custom Docker Images
If you have built Docker images and want to make them available in the registry, you can use the Flow-fx module and the Builder API in Nexus to build, publish, and deploy pipelines. However, if you want to pull a Docker image from a public provider like Docker Hub and publish it to the Sky Node Registry, follow these steps:
- Create a Magna-node instance and start a Terminal session.
- Enter the following commands to pull the image from Docker Hub and push it to the Sky Node registry (see the example after these steps):

docker pull image_name:tag
docker tag image_name:tag sky.docker:575/image_name:tag
docker push sky.docker:575/image_name:tag
If you have multiple Sky Nodes set up with Sky Link, you can replace sky.docker:575 with sky.central.docker:575 to make it accessible across all Sky Nodes.
- To verify that your image is in the registry, you can use the following command:

curl -k -s -X GET sky.docker:575/v2/_catalog | jq '.repositories[]' | sort | xargs -I _ curl -s -k -X GET sky.docker:575/v2/_/tags/list
- Remove the Magna-node if it is no longer required.
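As a concrete example (nginx:1.25 is simply an arbitrary public image used for illustration), pulling from Docker Hub, pushing to the Sky Node registry, and checking the catalog would look like this:

docker pull nginx:1.25
docker tag nginx:1.25 sky.docker:575/nginx:1.25
docker push sky.docker:575/nginx:1.25
curl -k -s -X GET sky.docker:575/v2/_catalog | jq '.repositories[]'   # "nginx" should appear in the list

On a Sky Link setup, substitute sky.central.docker:575 in the tag and push commands to make the image available on all Sky Nodes.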
Uploading ISO Images
The simplest method to upload ISO images is through the Sky Node Images dialog to the target Sky Node. Alternatively, you can use the Workspaces File Explorer on the target Sky Node to upload the ISO image directly into the /var/node/iso folder.
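If you take the Workspaces File Explorer route, a quick check from a terminal on the target Sky Node confirms the upload landed where Nexus expects it (the file name debian-12-custom.iso is hypothetical):

ls -lh /var/node/iso/   # the uploaded ISO, e.g. debian-12-custom.iso, should be listed here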