
Deploying Secure DevOps Workflows in Controlled Environments

by Lynx

Modern DevOps pipelines in controlled environments must run entirely on isolated networks with no direct internet access. Handling environment changes, building pipelines consistently, and storing packages and artifacts in such a workflow adds challenges beyond those of a typical development environment. In this blog post we dive into practical details regarding:

  • Building Docker development containers for reproducible development environments.
  • Running GitLab CI/CD pipelines in a self-managed instance.
  • Managing dependencies and artifacts via Nexus or Artifactory inside an air-gapped enclave.

Dev Containers

Containerized Dev Environments (Docker + Dev Containers)

Using Docker ensures all developers build and test code in the same environment. The containers produced here can also be used within your CI/CD pipeline, minimizing the environment-related errors you may otherwise encounter.

A useful extension for Visual Studio Code (VS Code) is Dev Containers (Developing inside a Container). A common pattern is to ensure the development container includes the compiler, library dependencies, and developer tools required to compile, test, package, and deploy a software product. It is created from a Dockerfile, which builds the container image, and a devcontainer.json file, which defines the container's configuration (mounts, users, VS Code extensions, etc.). Make sure the base image you derive from has the proper networking configuration and security certificates to communicate with your intranet.

A simple development container's Dockerfile might look like the following: 

Dockerfile 

//////////////////////CODE BLOCK START////////////////////////////////

# Insert the name of your security-configured baseline image below
FROM secure-baseline:latest

ARG UID=1000
ARG USER=container-user

# Set up the dev container user with sudo access via the wheel group
RUN useradd -u ${UID} -m ${USER} && \
    usermod -aG wheel ${USER} && \
    sed -i 's/# %wheel/%wheel/' /etc/sudoers

USER ${USER}

//////////////////////CODE BLOCK END////////////////////////////////

The accompanying devcontainer.json file might look like:

devcontainer.json 

//////////////////////CODE BLOCK START////////////////////////////////

// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/cpp
{
    "name": "Dev Container",
    "build": {
        "dockerfile": "${localWorkspaceFolder}/.devcontainer/Dockerfile",
        "args": {
            "UID": "1000",
            "USER": "${localEnv:USER}"
        }
    }
}

//////////////////////CODE BLOCK END////////////////////////////////
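If your enclave also mirrors the open-source devcontainer CLI (the @devcontainers/cli npm package; an assumption, as it is not part of the setup above), you can build and start the container outside of VS Code, which is handy for smoke-testing the definition:

//////////////////////CODE BLOCK START////////////////////////////////

# Build and start the dev container defined in .devcontainer/
devcontainer up --workspace-folder .

# Run a command inside the dev container, e.g. verify the toolchain
devcontainer exec --workspace-folder . gcc --version

//////////////////////CODE BLOCK END////////////////////////////////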

Air-Gap Considerations:

In controlled networks, build machines cannot pull base images from Docker Hub or use a standard package-manager setup such as apt-get with the public Ubuntu mirrors. Instead, you must maintain privately deployed registries and mirrors.
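As a minimal sketch, pointing a build machine at internal infrastructure might look like the following; the hostnames (registry.internal.example, mirror.internal.example) are placeholders for your own deployments:

//////////////////////CODE BLOCK START////////////////////////////////

# Route Docker Hub pulls through an internal registry mirror
cat > /etc/docker/daemon.json <<'EOF'
{
    "registry-mirrors": ["https://registry.internal.example"]
}
EOF
systemctl restart docker

# Replace the public Ubuntu archives with an internal apt mirror
cat > /etc/apt/sources.list <<'EOF'
deb https://mirror.internal.example/ubuntu jammy main universe
deb https://mirror.internal.example/ubuntu jammy-updates main universe
deb https://mirror.internal.example/ubuntu jammy-security main universe
EOF
apt-get update

//////////////////////CODE BLOCK END////////////////////////////////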

 

Podman 

An alternative to Docker is Podman, an open-source container engine developed by Red Hat. Its main standout from Docker is that it is daemonless, whereas Docker requires a daemon running in the background. Daemons typically run with root privileges, providing an attack vector for malicious actors. With Podman, users can run rootless containers, which simplifies access while increasing the security of the container system.

Many commands you may be familiar with in Docker are still supported by Podman. You can also add the following line to your `~/.bashrc` file to keep your existing Docker-based tooling working during the migration to Podman:

`alias docker=podman`

Podman also introduces the concept of "pods": groups of containers, similar to how Kubernetes operates, that let users and administrators manage related containers together. This supports the principle of least functionality and keeps individual container images small. Building this functionality into the Podman command-line tool also reduces the number of dependencies introduced into your environment.
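As a quick sketch, creating and populating a pod looks like this (the image names are placeholders for images hosted in your internal registry):

//////////////////////CODE BLOCK START////////////////////////////////

# Create a pod that publishes port 8080 on the host
podman pod create --name dev-pod -p 8080:8080

# Run two rootless containers that share the pod's network namespace
podman run -d --pod dev-pod --name web registry.internal.example/web-app:latest
podman run -d --pod dev-pod --name cache registry.internal.example/redis:7

# Inspect pods and the containers they contain
podman pod ps
podman ps --pod

//////////////////////CODE BLOCK END////////////////////////////////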

 

GitLab CI/CD in a Controlled Network

GitLab Self-Managed can run entirely offline on premises. GitLab's docs refer to these setups as "offline" or "air-gapped" environments (Offline environments | GitLab Docs).

Inside the controlled network, GitLab runners execute jobs with limited privileges. Running containers in privileged mode is not advised, as it effectively disables all of the container's security mechanisms and exposes the host machine to the user. Be sure to enable the GitLab runner's job cleanup feature flag (FF_ENABLE_JOB_CLEANUP) so that all build directories are deleted once a job finishes.
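A minimal runner configuration sketch follows; the URL, token, and image are placeholders for your own instance. It pins privileged mode off and enables the cleanup feature flag:

//////////////////////CODE BLOCK START////////////////////////////////

# Append a hardened runner entry to the GitLab Runner configuration
# (URL, token, and image are placeholders)
cat >> /etc/gitlab-runner/config.toml <<'EOF'
[[runners]]
  name = "secure-docker-runner"
  url = "https://gitlab.internal.example"
  token = "REDACTED"
  executor = "docker"
  environment = ["FF_ENABLE_JOB_CLEANUP=true"]
  [runners.docker]
    image = "secure-baseline:latest"
    privileged = false
EOF
gitlab-runner restart

//////////////////////CODE BLOCK END////////////////////////////////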

In your GitLab Admin settings, ensure sign-in restrictions and default visibility are locked down. Default project visibility should be set to Private so that no repository is inadvertently public. Use encrypted vault programs to store CI secrets rather than keeping them in plaintext.

 

Internal Artifact Management (Nexus or Artifactory) 

With no internet access, you must mirror (or cache) trusted package repositories locally. Both Sonatype Nexus Repository and JFrog Artifactory excel here. They support proxy repos (which cache remote data) and hosted repos for your own artifacts. In a controlled setup, you typically use them as the sole source of truth.

Sonatype recommends running Nexus under a non-root OS account (with minimal privileges), or in a container to isolate it (Securing Nexus Repository). Only allow the machine's IP to reach approved external services (if any), and disable webhooks and remote repo creation if they are unnecessary. Keep Nexus's audit log enabled to track all config changes, and periodically mine the request logs for unusual activity. Enforce role-based access control (RBAC) and grant users only the privileges they require; create specific roles for CI/CD runners, QA testers, and security analysts.
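Pointing client tooling at the internal Nexus instance might look like the following sketch (the hostname, port, and repository names are placeholders):

//////////////////////CODE BLOCK START////////////////////////////////

# Point pip at the internal PyPI proxy repository
pip config set global.index-url \
    https://nexus.internal.example/repository/pypi-proxy/simple

# Point npm at the internal npm proxy repository
npm config set registry https://nexus.internal.example/repository/npm-proxy/

# Authenticate Docker against the internal Docker registry
docker login nexus.internal.example:8443

//////////////////////CODE BLOCK END////////////////////////////////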

On the Artifactory side, JFrog's guidance is similar. You can use a tarball of dependencies, transferred via a secure USB, to populate your remote repositories. It is also recommended to create separate repositories for the software artifacts that development teams produce.
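A hypothetical upload with the JFrog CLI (jf), assuming a hosted repository named team-artifacts has been created for the team:

//////////////////////CODE BLOCK START////////////////////////////////

# Register the internal Artifactory instance with the JFrog CLI
# (URL is a placeholder; credentials are requested interactively)
jf config add internal --artifactory-url=https://artifactory.internal.example/artifactory

# Upload a team's build output to its dedicated repository
jf rt upload "build/output/*.tar.gz" team-artifacts/releases/1.0.0/

//////////////////////CODE BLOCK END////////////////////////////////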

These repositories provide a self-managed, secure option for storing RPMs, Debian packages, OS and Docker images, and more. GitLab CI/CD can fetch and upload dependencies and artifacts as required for specific jobs.
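For example, a CI job step might fetch a toolchain and publish a release to a Nexus raw hosted repository with curl; the repository layout and credential variables here are placeholders:

//////////////////////CODE BLOCK START////////////////////////////////

# Fetch a dependency from the internal repository
curl -u "$NEXUS_USER:$NEXUS_PASS" -O \
    https://nexus.internal.example/repository/raw-hosted/deps/toolchain.tar.gz

# Upload the artifact produced by this CI job
curl -u "$NEXUS_USER:$NEXUS_PASS" --upload-file build/app-1.0.0.tar.gz \
    https://nexus.internal.example/repository/raw-hosted/releases/app-1.0.0.tar.gz

//////////////////////CODE BLOCK END////////////////////////////////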

 

Best Practices and Trade-Offs 

  • Dependency Management: In a closed network, plan how new dependencies enter the system. Common approaches include whitelisting libraries via a ticketed process or fully mirroring public repos. Both Nexus and Artifactory can host private copies of Maven Central, the npm registry, Docker Hub, etc. A fully mirrored repo (cloning every path and all metadata) gives predictability but is storage-heavy. A proxy repo conserves space (only caching what's needed) but requires careful configuration and may still need external access.
  • Version Control & CI: A self-hosted GitLab means you control the upgrade cadence. Upgrading in an air-gapped environment requires staging servers. It is highly recommended to back up GitLab's database and repositories frequently in case of corruption (see the backup sketch after this list).
  • Audit & Logging: Record everything. GitLab's audit logs, Nexus's audit logs, OS-level syslogs, and network logs should all be shipped to a central log management system if possible.
  • Access Control: Enforce least privilege everywhere. For Nexus/Artifactory, map roles to functional teams. Use SSH keys when cloning from GitLab and set up scoped access tokens for your CI/CD runners.
  • Trade-offs: Everything has a bit more friction in an air-gapped environment. Continuously update mirrored package repositories, GitLab instances, and other software to patch security vulnerabilities, and plan regular maintenance windows to patch servers and images.
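The backup sketch referenced above, for an Omnibus GitLab installation (paths follow the Omnibus defaults; the offsite destination is a placeholder):

//////////////////////CODE BLOCK START////////////////////////////////

# Back up repositories, the database, and uploads
# (written to /var/opt/gitlab/backups by default on Omnibus installs)
sudo gitlab-backup create

# Secrets are NOT included in the backup; copy them separately
sudo cp /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab.rb /secure/offsite/location/

//////////////////////CODE BLOCK END////////////////////////////////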

 

Conclusion 

Building a secure DevOps pipeline and workflow in a controlled environment requires careful planning and airtight workflows. By containerizing development, running a self-hosted GitLab with no internet dependency, and serving all packages via Nexus or Artifactory, you can achieve a modern build/test/deploy pipeline that meets security requirements. The examples shown above illustrate one approach. In practice, every organization must adapt these patterns to its own security policies and network topology. The principles remain consistent:

  • Isolate Aggressively
  • Cache Dependencies
  • Log and Audit Continuously

Done right, even controlled projects can benefit from DevOps speed and automation, with confidence that no data has slipped beyond the perimeter. 

Connect with a DevSecOps expert over a call and see how we can help.   

 

Transform Your DevOps for Controlled Environments 

From air-gapped CI/CD pipelines to secure artifact management and containerized development, operating in controlled environments requires isolated tools and an integrated, security-first approach. 
 
Discover expert strategies and proven architectures to modernize your DevOps workflows without sacrificing control, performance, or compliance. 
