Level Up Your PostgreSQL Container Authentication and Transport

[Disclaimer: This article is intended for educational purposes only. The information disclosed is meant to be set up in a lab environment or with express written permission from the relevant owners. Any legal issues resulting from the use of this information are solely the responsibility of the reader. Any loss of data or damage caused by using any of the information within this article is the sole responsibility of the reader.]

The purpose of this article is to shed light on strengthening authentication for a PostgreSQL database and why it is important. We will also cover setting up an example database within a Docker container for ease of setup and system isolation. Containers are a versatile tool that let you build, test, and experiment in isolated environments without affecting your main system, making them ideal for development and deployment. While we will go over some database, container, and cryptographic concepts, this will not be a comprehensive overview of each. I encourage anyone curious about containers to take a deeper dive, as they are a boon for building, testing, and scaling applications.

Before we start rolling our sleeves up, I would first like to discuss the primary driver for writing this post. All too often, we find that the storage of clear text passwords and/or API keys in configuration files pervades the lion’s share of guides, documentation, and tutorials for [insert tech here :)]. Your favorite Large Language Model (LLM) was trained on much of this information, so we can infer a strong bias toward putting passwords in configuration files. Don’t take my word for it, check it out for yourself. Many information security professionals consistently call for secure coding practices, and rightfully so. How do we move the needle in the direction of a more secure future? First, we could stop storing credentials in clear text (Beaten dead horse? Yes. Less true? Hell no). Second, our documentation, guides, and tutorials could default to a more secure setup. For this particular use case we will be focusing on a more secure setup for PostgreSQL, but the sentiment applies to any technology that can leverage a more secure means of authentication. This may sound like “pie in the sky” wishful thinking (in some ways it is), but we can be certain it’s an area of improvement across most technologies.

Securing databases like PostgreSQL is essential because they store critical data that, if compromised, could lead to data breaches, financial losses, and damage to an organization’s reputation. The key to improving the security of any system is layering improvements. For PostgreSQL, we will be using the TLS protocol to strengthen our authentication and transport layers, as well as providing a means of getting rid of those insecurely stored credentials.

The following is a step-by-step process to configure a PostgreSQL container that forces mutual TLS authentication for access, highlighting relevant Docker features including its networking and isolation. By the end of this setup, only trusted, certificate-based connections will be able to access the PostgreSQL database. This approach reduces the attack surface by limiting the points of entry that attackers can exploit, focusing access only through secure, authenticated channels, and eliminating unnecessary exposure to external networks. By minimizing accessible components, it lowers the risk of vulnerabilities being exploited. Utilizing more than just passwords for authentication means that only verified users and systems can access the database, while data is transmitted and received in a more secure, layered way. Mutual TLS authentication helps thwart unauthorized access and man-in-the-middle attacks, increasing data confidentiality and integrity.

Enough of the blathering, let’s get to it.

Requirements

Before beginning, ensure you have Docker and Docker Compose (Optional) installed on your system. We will provide examples of using both Docker and Docker Compose to create and configure the PostgreSQL container. Familiarity with Docker concepts and certificate management is helpful, but we’ll cover each configuration step in detail to make the setup accessible to users at all levels.

If you need help installing Docker on your system, follow the official Docker installation guide for detailed, step-by-step instructions: Docker Installation Guide. This resource provides platform-specific setup instructions to ensure a secure and successful installation.

If you need help installing Docker Compose, refer to the official Docker Compose installation guide here: Docker Compose Installation Guide. This resource provides clear, platform-specific instructions to ensure a secure installation.

This tutorial will be demonstrated using a minimal Ubuntu 22.04 server VM that has only the packages necessary to build Docker containers installed. If you are also using a minimal install, you will want to install vim or your favorite text editor for modifying files.

Before we build our PostgreSQL container, we need to set up certificates for a crucial part of our security: enabling mutual TLS (mTLS) authentication. With mTLS, both the client and server must present valid certificates to establish a trusted, encrypted connection, ensuring secure and authenticated data exchange.

Why Certificates Are Needed for PostgreSQL

In traditional setups, database connections often rely on username and password authentication. However, passwords can be compromised, and as our infrastructure grows, managing and securing them becomes a challenge. With TLS certificates, we add a robust layer of security through encrypted and verified connections, which protect sensitive data and eliminate the need to store or transmit passwords in plain text.

By using mutual TLS authentication, we ensure that:

  1. Server Authentication: The client can trust it’s connecting to the genuine PostgreSQL server and not a malicious impersonator.
  2. Client Authentication: The server can verify the client’s identity based on the client certificate, adding an extra layer of security.
  3. Encrypted Connections: All data exchanged between the client and server is encrypted, preventing unauthorized access or tampering during transmission.

Why We Will Create Certificates Before Building the Container

In this example, the certificates will be in place before building the container so they can be securely embedded in the PostgreSQL configuration during the build process. The server will use these certificates to enable TLS and enforce authentication policies. If we wait until after the container is built, adding these certificates securely would require extra steps and software within the vanilla PostgreSQL container image, which adds more complexity and is prone to error.

Generating Certificates with OpenSSL

For our example, we’ll use OpenSSL to generate self-signed certificates. This method is suitable for demonstration purposes and for developing an understanding of how PKI works in general. However, in a production environment, certificates should ideally be issued by a trusted Certificate Authority (CA). The following are just a few example CA options:

  • Internal CA: Often used in private networks where organizations manage their own trusted certificate authority.
  • Public CA: For production environments exposed to the internet, certificates from a public CA (e.g., DigiCert, GlobalSign) provide verified trust for external users.
  • Let’s Encrypt: Provides free, automated certificates for public-facing services, though it requires periodic renewal.

With these certificates, our PostgreSQL server will be set up to accept only authorized and encrypted connections, offering a more secure environment for sensitive data.

Once the certificates are generated, we can embed them in our container during the build process. Let’s get to generating our certificates, shall we?

Step-by-Step Certificate Generation

  1. Generate the Certificate Authority (CA) Private Key and Certificate: First, create a CA certificate that will sign both the PostgreSQL server and client certificates. This CA certificate will serve as the trusted root for our setup.
openssl req -new -x509 -days 365 -newkey rsa:4096 -nodes \
    -keyout openssl_created_ca.key \
    -out openssl_created_ca.crt \
    -subj "/CN=PostgreSQL CA"
  • Explanation:
    • -new -x509: Creates a new certificate in X.509 format.
    • -days 365: Sets the certificate validity to 365 days.
    • -newkey rsa:4096 -nodes: Generates a new 4096-bit RSA key without a passphrase. (We could add a passphrase to increase security for the CA key)
    • -keyout openssl_created_ca.key: Saves the CA private key as openssl_created_ca.key.
    • -out openssl_created_ca.crt: Saves the CA certificate as openssl_created_ca.crt.
    • -subj "/CN=PostgreSQL CA": Sets the certificate subject.

NOTE: The names of the files that get generated, like openssl_created_ca.key and openssl_created_ca.crt, are arbitrary, so they can be as descriptive or abbreviated as you like within your system’s limits.

Now that we’ve generated the CA key, let’s restrict its permissions for security:

chmod 400 openssl_created_ca.key    # Set the CA private key to read-only

The chmod command in Unix/Linux is used to change the permissions of files and directories. File permissions control who can read, write, and execute files and directories, and they are represented by numbers. Here’s how the permission values work:

Breakdown of Permission Values

Each type of permission has an associated numeric value:

  • 4 = Read: Allows reading or viewing the file’s contents.
  • 2 = Write: Allows modifying or writing to the file.
  • 1 = Execute: Allows executing the file (running it as a program or script).

If a permission is not granted, the value is 0 (no permission).

Combining Permissions

Permissions are assigned separately for:

  1. Owner: The user who owns the file.
  2. Group: Users in the file’s group.
  3. Others (or Everyone): All other users.

For each category (owner, group, others), the final permission value is the sum of the individual permissions. This means you can combine read, write, and execute permissions by adding their values together.

Examples of Combined Permissions

Using the sum of these values, you can set specific combinations of permissions:

  • 0: No permissions.
  • 1: Execute only.
  • 2: Write only.
  • 3: Write and execute (2 + 1).
  • 4: Read only.
  • 5: Read and execute (4 + 1).
  • 6: Read and write (4 + 2).
  • 7: Read, write, and execute (4 + 2 + 1).

Applying Permissions with chmod

When using chmod, you specify permissions for the owner, group, and others in a three-digit format (e.g., chmod 750 filename). Each digit represents the permissions for one of these categories.

Example: chmod 750 filename

For the command chmod 750 filename, the permissions are set as follows:

  • 7 (Owner): 4 + 2 + 1 = Read, Write, and Execute.
  • 5 (Group): 4 + 1 = Read and Execute.
  • 0 (Others): No permissions.

This means:

  • The owner can read, write, and execute the file.
  • The group can read and execute the file.
  • Others have no access.

Common Permission Settings

  • 644: Read and write for owner, read-only for group and others. Common for files.
  • 755: Read, write, and execute for owner; read and execute for group and others. Common for executable files and scripts.
  • 700: Read, write, and execute for owner; no permissions for group and others. Good for sensitive files.

Therefore, our value of 400 is: Read for owner; no permissions for group and others. You can confirm the result with ls, as shown below, before we move on to creating the certificate signing request.
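
A quick check of the CA key file (the owner, group, size, and timestamp will differ on your system):

ls -l openssl_created_ca.key    # Expect a mode of -r-------- (read-only for the owner, nothing for group/others)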

Generate the PostgreSQL Server’s Private Key and Certificate Signing Request (CSR)

To create our server certificate, we need to generate a private key and a certificate signing request (CSR) for the PostgreSQL server. The CSR will later be signed by our CA.
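
The command below reads additional settings from a file named postgresql_db.cnf, which this walkthrough assumes already exists alongside your keys. Its exact contents depend on your environment; the following is only a hypothetical minimal version, written as a shell heredoc, in which the CN and subjectAltName values are illustrative assumptions (they should match the hostnames clients will use to reach the server, such as the container name used later in this article):

cat > postgresql_db.cnf <<'EOF'
# Hypothetical minimal OpenSSL config for the server certificate (adjust to your environment)
[ req ]
prompt             = no
distinguished_name = req_distinguished_name

[ req_distinguished_name ]
CN = postgresql_container

[ v3_ca ]
# Extensions applied during signing via "-extensions v3_ca"
subjectAltName = DNS:postgresql_container, DNS:localhost, IP:127.0.0.1
EOF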

openssl req -new -newkey rsa:4096 -nodes \
    -keyout postresql_server.key \
    -out postresql_server.csr \
    -config postgresql_db.cnf

Explanation:

  • -new: Creates a new CSR.
  • -newkey rsa:4096 -nodes: Generates a new 4096-bit RSA key without a passphrase. (If we want to add another layer of security, we could set a passphrase)
  • -keyout postresql_server.key: Saves the server’s private key as postresql_server.key.
  • -out postresql_server.csr: Saves the CSR as postresql_server.csr.
  • -config postgresql_db.cnf: Specifies the configuration file for additional certificate settings.

As with the CA key, let’s set the permissions for the server private key:

chmod 600 postresql_server.key

Sign the PostgreSQL Server’s Certificate with the CA

The next step is to use the CA to sign the PostgreSQL server’s CSR, creating a trusted server certificate that clients can verify.

openssl x509 -req -days 365 -in postresql_server.csr \
    -CA openssl_created_ca.crt \
    -CAkey openssl_created_ca.key \
    -CAcreateserial \
    -out postresql_server.crt \
    -extfile postgresql_db.cnf -extensions v3_ca

Explanation:

  • -req: Specifies we are signing a CSR.
  • -CA openssl_created_ca.crt -CAkey openssl_created_ca.key: Uses the CA certificate and key to sign the CSR.
  • -CAcreateserial: Creates a serial number file for the CA.
  • -out postresql_server.crt: Outputs the signed server certificate.
  • -extfile postgresql_db.cnf -extensions v3_ca: Adds extensions to the certificate based on the configuration file.
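
It can also be helpful to confirm that the freshly signed server certificate chains back to our CA. A quick check, using the file names from this article:

openssl verify -CAfile openssl_created_ca.crt postresql_server.crt    # Expected output: postresql_server.crt: OK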

Generate the PostgreSQL Client’s Private Key and CSR

Next we’ll generate a private key and CSR for the client (in this example, pgsql_admin). This CSR will be signed by our CA to create a client certificate.

openssl req -new -newkey rsa:4096 -nodes \
    -keyout postresql_client.key \
    -out postresql_client.csr \
    -subj "/CN=pgsql_admin"
  • Explanation:
    • -new: Creates a new CSR.
    • -newkey rsa:4096 -nodes: Generates a new 4096-bit RSA key without a passphrase.
    • -keyout postresql_client.key: Saves the client’s private key as postresql_client.key.
    • -out postresql_client.csr: Saves the CSR as postresql_client.csr.
    • -subj "/CN=pgsql_admin": Sets the certificate subject for the client.

Now set permissions for the client private key:

chmod 400 postresql_client.key    # Set client key permissions to read-only

Sign the PostgreSQL Client’s Certificate with the CA

The next step is to sign the client CSR using the CA to create a trusted client certificate that the PostgreSQL server will recognize.

openssl x509 -req -days 365 -in postresql_client.csr \
    -CA openssl_created_ca.crt \
    -CAkey openssl_created_ca.key \
    -CAcreateserial \
    -out postresql_client.crt

Explanation

  • openssl x509: Invokes OpenSSL to work with X.509 certificates, which is the standard format for SSL/TLS certificates.
  • -req: Specifies that we’re working from an existing Certificate Signing Request (CSR), which contains the client’s public key and identity details.
  • -days 365: Sets the validity period of the certificate to 365 days (1 year) from the date of creation.
  • -in postresql_client.csr: Provides the input file (postresql_client.csr), which we generated for the PostgreSQL client.
  • -CA openssl_created_ca.crt: Specifies the CA certificate (openssl_created_ca.crt) that we used to sign the CSR, making the resulting certificate trusted by this CA.
  • -CAkey openssl_created_ca.key: Specifies the CA’s private key (openssl_created_ca.key) we used to sign and validate the certificate.
  • -CAcreateserial: Creates a serial number file if it doesn’t exist. Serial numbers uniquely identify each certificate issued by the CA.
  • -out postresql_client.crt: Specifies the output file, which will be the signed client certificate (postresql_client.crt). This certificate is now trusted by the CA and can be used for secure connections.
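
One detail worth calling out: with PostgreSQL’s cert authentication method (which we configure later in this article), the Common Name in the client certificate is compared against the requested database user name, so the CN here (pgsql_admin) should match the superuser we create when running the container. You can double-check the subject with:

openssl x509 -in postresql_client.crt -noout -subject    # Expect something like: subject=CN = pgsql_admin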

Clean Up CSR Files

Our last certificate action is not strictly necessary, but it helps tidy up a bit. Once the CSRs are signed, they’re no longer needed. Deleting them helps keep the setup clean and minimizes potential security risks.

rm postresql_server.csr postresql_client.csr    # Remove CSR files

Explanation

  • rm: The rm command removes files or directories.
  • postresql_server.csr postresql_client.csr: These are the CSR (Certificate Signing Request) files we created for the server and client. Since the CSRs have already been signed by the CA to generate trusted certificates, they are no longer needed.

Removing the CSRs helps keep the setup clean by getting rid of unnecessary files. Additionally, it minimizes any security risk associated with leaving sensitive certificate request files in the directory.

Now that we’ve generated the necessary certificates to be used to setup our PostgreSQL container, let’s dive into the Docker side of things.

The first thing we need to do is create some directories to work in.

mkdir -p postgresql/postgresql_container

Explanation:

  • mkdir: The command to create directories in Linux.
  • -p: Ensures that all necessary parent directories are created. If any of the specified directories already exist, it prevents errors.
  • postgresql/postgresql_container: Specifies the directory path to be created.

In this case, the command will create a directory structure where:

  • postgresql is the top-level directory.
  • postgresql_container is a subdirectory inside postgresql.
cd postgresql
  • Explanation:
    • cd: The command used to change directories.
    • postgresql: The target directory to switch to.

This command moves you into the postgresql directory, making it your current working directory. From here, you can work on files or run commands within this directory structure.

Together, these commands create a nested directory structure (postgresql/postgresql_container) and navigate into the postgresql directory. This setup is useful for organizing project files, such as configuration files or scripts related to your PostgreSQL container.

The first file we will be configuring is the Dockerfile for the PostgreSQL container, which we will put in the postgresql_container directory. The reason for this will make more sense when we configure Docker Compose. Think of the postgresql_container directory as the container directory, the location we will put all of the configuration and support files the container needs.

Using your favorite text editor, create the Dockerfile. The following example uses vi:

vi postgresql_container/Dockerfile

Explanation

  • vi: Launches the vi editor, a command-line text editor available on most Unix-like systems. It allows you to create, edit, and save text files directly from the terminal. On a side note, vi is a great editor to know, as it is installed by default on nearly every *nix distribution.
  • postgresql_container/Dockerfile:
    • postgresql_container: Specifies the directory where the file is located (or where it will be created if it doesn’t exist).
    • Dockerfile: The name of the file being opened or created. In Docker projects, the Dockerfile contains instructions for building a Docker image, such as specifying the base image, copying files, and running commands.
    • While in vi, press ‘i’ to enter insert mode and type your text.
    • Once complete, press Esc, then use ‘:wq!’ to save/write and quit.
FROM postgres:latest

# Create certificate directory
RUN mkdir /ssl

# Copy relevant files to the container
COPY postresql_server.crt /ssl/postresql_server.crt
COPY postresql_server.key /ssl/postresql_server.key
COPY openssl_created_ca.crt /ssl/openssl_created_ca.crt
COPY postgresql_db_initialization_script.sql /docker-entrypoint-initdb.d/postgresql_db_initialization_script.sql
COPY container_initialization_script.sh /docker-entrypoint-initdb.d/container_initialization_script.sh


# Set ownership and permissions on certificate files
RUN chown postgres:postgres -R /ssl /docker-entrypoint-initdb.d
RUN chmod 700 -R /ssl 
RUN chmod 600 /ssl/postresql_server.*

# Switch to the postgres user
USER postgres

Explanation:

  • FROM postgres:latest: Specifies the base image for the Docker container, using the latest version of the official PostgreSQL image. This image comes pre-configured with PostgreSQL, so you only need to add additional files and configurations.
  • RUN mkdir /ssl: Creates a new directory, /ssl, within the container to store SSL/TLS certificates. This directory will hold the server’s certificate, private key, and CA certificate, which are necessary for enabling secure connections.
  • COPY: Each COPY command copies a file from the host (the machine building the Docker image) into the container.
    • postresql_server.crt /ssl/postresql_server.crt: Copies the server’s SSL certificate to /ssl inside the container.
    • postresql_server.key /ssl/postresql_server.key: Copies the server’s private key to /ssl.
    • openssl_created_ca.crt /ssl/openssl_created_ca.crt: Copies the CA certificate (used to verify client certificates) to /ssl.
    • postgresql_db_initialization_script.sql /docker-entrypoint-initdb.d/postgresql_db_initialization_script.sql: Copies an SQL script to /docker-entrypoint-initdb.d, a directory where PostgreSQL will automatically execute .sql files during initial database setup.
    • container_initialization_script.sh /docker-entrypoint-initdb.d/container_initialization_script.sh: Copies a shell script to /docker-entrypoint-initdb.d for further custom initialization steps.
  • RUN chown postgres:postgres -R /ssl /docker-entrypoint-initdb.d: Changes the ownership of the /ssl directory and /docker-entrypoint-initdb.d directory (and all their contents) to the postgres user. This ensures only the PostgreSQL process (running as the postgres user) has control over these directories.
  • RUN chmod 700 -R /ssl: Sets directory permissions to 700, allowing only the postgres user to read, write, and execute files within /ssl.
  • RUN chmod 600 /ssl/postresql_server.*: Sets strict permissions on the server’s private key and certificate files, allowing only the postgres user to read and write them, which is important for security.
  • USER postgres: Sets the user context to postgres, ensuring that any following commands are executed by the postgres user instead of root. This is a best practice for security, especially in production environments, to prevent unauthorized access or accidental modifications by privileged users.

This Dockerfile prepares the PostgreSQL container by:

  1. Copying SSL/TLS certificates and CA files into the container to enable secure connections.
  2. Adding initialization scripts to set up the database with any initial data or configuration.
  3. Setting strict ownership and permissions on sensitive files to restrict access to the postgres user.
  4. Ensuring the container runs as the non-root postgres user by default.

This setup allows PostgreSQL to start with SSL enabled, using the provided certificates for encrypted, authenticated connections.

Now that we’ve created the Dockerfile, let’s create the files referenced in it:

vi postgresql_container/postgresql_db_initialization_script.sql

The file postgresql_container/postgresql_db_initialization_script.sql will be configured with a single SQL command:

CREATE DATABASE testdb;

Explanation

  • CREATE DATABASE: This SQL command creates a new database within the PostgreSQL server.
  • testdb: This is the name of the database being created. You can replace testdb with any name you prefer.

This file could contain any supported SQL code; for this example, we are just creating a single database.

Purpose

When this SQL script is placed in the /docker-entrypoint-initdb.d directory, PostgreSQL’s entrypoint script (provided by the official Docker image) will automatically execute it during the initial setup of the container. This automatic execution occurs only if the PostgreSQL data directory is empty, meaning this script will run the first time the container starts with a fresh data directory.

Use Case

  • This script is useful for initializing a database (testdb) and/or other database-specific configurations that may be required by applications connecting to the PostgreSQL server.
  • By using this script, you automate the database setup, eliminating the need to manually create the database after the container starts.

In summary, this SQL script ensures that a testdb database is created automatically upon the container’s first startup, simplifying initial setup and making the environment ready for immediate use.

The container_initialization_script.sh script is designed to enable and configure SSL/TLS for PostgreSQL and modify authentication settings. Let’s go through each line to understand its purpose and how it contributes to a secure, SSL-enabled PostgreSQL setup.

#!/bin/bash
set -e

whoami

echo "ssl = on" >> /var/lib/postgresql/data/postgresql.conf
echo "ssl_cert_file = '/ssl/postresql_server.crt'" >> /var/lib/postgresql/data/postgresql.conf
echo "ssl_key_file = '/ssl/postresql_server.key'" >> /var/lib/postgresql/data/postgresql.conf
echo "ssl_ca_file = '/ssl/openssl_created_ca.crt'" >> /var/lib/postgresql/data/postgresql.conf
echo "ssl_prefer_server_ciphers = on" >> /var/lib/postgresql/data/postgresql.conf
echo "ssl_ciphers = 'HIGH:MEDIUM:+AES256:!aNULL'" >> /var/lib/postgresql/data/postgresql.conf
echo "hostssl all all 0.0.0.0/0 cert" >> /var/lib/postgresql/data/pg_hba.conf
sed -i 's/^host\ all\ all\ all\ scram-sha-256//g' /var/lib/postgresql/data/pg_hba.conf
sed -i 's/^local[[:space:]]\+all[[:space:]]\+all[[:space:]]\+trust$/local\ all\ all\ scram-sha-256/g' /var/lib/postgresql/data/pg_hba.conf

Explanation:

#!/bin/bash
set -e
  • #!/bin/bash: Specifies the script should be run using the bash shell.
  • set -e: Configures the script to exit immediately if any command fails. This is a safeguard to prevent the script from continuing if an error occurs, ensuring consistent configuration.
whoami
  • whoami prints the name of the current user executing the script. This is mainly a diagnostic line to verify which user is running the script during container initialization, typically postgres for a PostgreSQL Docker container.
echo "ssl = on" >> /var/lib/postgresql/data/postgresql.conf
  • This command enables SSL in PostgreSQL by appending ssl = on to postgresql.conf, the main PostgreSQL configuration file.
  • Enabling SSL ensures that PostgreSQL requires encrypted connections, enhancing data privacy and security.
echo "ssl_cert_file = '/ssl/postresql_server.crt'" >> /var/lib/postgresql/data/postgresql.conf
echo "ssl_key_file = '/ssl/postresql_server.key'" >> /var/lib/postgresql/data/postgresql.conf
echo "ssl_ca_file = '/ssl/openssl_created_ca.crt'" >> /var/lib/postgresql/data/postgresql.conf
  • These lines specify the paths to the SSL certificate files that PostgreSQL will use:
    • ssl_cert_file: Sets the path to the server’s SSL certificate (postresql_server.crt).
  • ssl_key_file: Sets the path to the server’s private key (postresql_server.key).
  • ssl_ca_file: Sets the path to the CA certificate (openssl_created_ca.crt), allowing PostgreSQL to verify client certificates.
  • These configurations are necessary for setting up mutual TLS (mTLS), where both client and server authenticate each other using certificates.
echo "ssl_prefer_server_ciphers = on" >> /var/lib/postgresql/data/postgresql.conf
echo "ssl_ciphers = 'HIGH:MEDIUM:+AES256:!aNULL'" >> /var/lib/postgresql/data/postgresql.conf
  • ssl_prefer_server_ciphers: Configures PostgreSQL to prioritize the server’s preferred ciphers for SSL connections, which helps enforce a specific encryption standard.
  • ssl_ciphers: Defines the cipher suites (encryption algorithms) that PostgreSQL can use for SSL connections.
  • 'HIGH:MEDIUM:+AES256:!aNULL': Specifies strong ciphers for SSL, prioritizing AES-256 and excluding weak ciphers (like those without authentication, indicated by !aNULL).
echo "hostssl all all 0.0.0.0/0 cert" >> /var/lib/postgresql/data/pg_hba.conf
  • Adds an entry to the pg_hba.conf file to enforce SSL-based client authentication.
    • hostssl: Restricts this rule to SSL (encrypted) connections.
    • all all 0.0.0.0/0 cert:
      • all all: Allows any database and any user to connect, provided they use SSL.
      • 0.0.0.0/0: Applies this rule to all IP addresses.
      • cert: Requires clients to present a valid certificate signed by the CA to connect.

This rule effectively requires all remote connections to use SSL and authenticate with a client certificate.

sed -i 's/^host\ all\ all\ all\ scram-sha-256//g' /var/lib/postgresql/data/pg_hba.conf

This sed command removes any line from pg_hba.conf that matches host all all all scram-sha-256.

Purpose: This rule usually allows password-based connections using scram-sha-256, a secure hashing method. By removing it, we ensure that only SSL-based certificate authentication is used for remote connections.

sed -i 's/^local[[:space:]]\+all[[:space:]]\+all[[:space:]]\+trust$/local all all scram-sha-256/g' /var/lib/postgresql/data/pg_hba.conf

This command modifies pg_hba.conf to replace any local all all trust rule with local all all scram-sha-256. The new local all all scram-sha-256 rule:

  • This rule applies to local Unix socket connections (not remote), where clients authenticate using passwords hashed with scram-sha-256, a secure hashing standard.

Purpose: This change ensures that local connections still require authentication, but use a secure hashing mechanism instead of the trust method, which allows unauthenticated local connections.

This container_initialization_script.sh script configures PostgreSQL to:

  1. Enable SSL for secure, encrypted connections.
  2. Use specific SSL certificates and ciphers for strong security.
  3. Enforce client certificate authentication for remote connections, allowing only SSL connections with valid client certificates.
  4. Require secure password authentication for local connections via scram-sha-256.

By placing this script in /docker-entrypoint-initdb.d/, PostgreSQL’s entrypoint will execute it automatically on container startup, ensuring the database is configured for secure, SSL-enabled communication from the first run.
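
Once the container is built and running (later in this walkthrough), you can spot-check that these pg_hba.conf changes were applied. One way, filtering out comments and blank lines (the container name matches the one we use later):

docker exec postgresql_container grep -Ev '^[[:space:]]*#|^[[:space:]]*$' /var/lib/postgresql/data/pg_hba.conf
# The output should include the "hostssl all all 0.0.0.0/0 cert" line appended by the script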

Next let’s copy the relevant server certificates into the container directory. The following command assumes that the server certificates exist in the user’s home directory.

cp -va ~/openssl_created_ca.crt ~/postresql_server.crt ~/postresql_server.key postgresql_container/

Explanation

  • cp: The cp command is used to copy files or directories from one location to another.
  • -va: These are options for the cp command:
    • -v (verbose): Outputs detailed information about the copying process, listing each file as it is copied.
    • -a (archive): Ensures that all file attributes (like permissions, timestamps, and symbolic links) are preserved during the copy. It’s especially useful when copying files for applications that require specific permissions, such as certificates.
  • ~/openssl_created_ca.crt ~/postresql_server.crt ~/postresql_server.key:
    • Specifies the files to copy, each located in the home directory (~).
      • openssl_created_ca.crt: The Certificate Authority (CA) certificate.
      • postresql_server.crt: The PostgreSQL server’s SSL certificate.
      • postresql_server.key: The PostgreSQL server’s private key.
  • postgresql_container/:
    • This is the target directory for the copied files. The command places the files inside the postgresql_container directory, which may serve as a setup or configuration directory for a PostgreSQL container.

Let’s go through each of these Docker commands in detail:

Command 1: Build the Docker Image

docker build -t postgresql_container ./postgresql_container

Explanation:

  • docker build: Builds a Docker image from a Dockerfile and its context (the specified directory).
  • -t postgresql_container: Tags the image with the name postgresql_container. Tagging makes it easy to refer to this image by name when running containers based on it.
  • ./postgresql_container: Specifies the build context directory, which is where Docker will look for the Dockerfile and other required files. Here, ./postgresql_container is a directory in the current location containing the Dockerfile and related configuration for this PostgreSQL container.

This command creates a new Docker image called postgresql_container, incorporating all configurations defined in the Dockerfile located in ./postgresql_container.

Command 2: Create a Docker Network

docker network create docker_net

Explanation:

  • docker network create: Creates a new, user-defined Docker network, which enables isolated communication between containers.
  • docker_net: The name of the network being created. This is an arbitrary name that can be used when running containers to specify which network they should connect to.

This network allows containers to communicate with each other securely and independently from the default bridge network, which improves organization and isolation within a multi-container setup.
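
If you want to confirm the network was created, or later see which containers are attached to it, Docker provides a couple of quick commands:

docker network ls                     # docker_net should appear in the list
docker network inspect docker_net     # Shows network details, including attached containers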

Command 3: Run the PostgreSQL Container

docker run -d --name postgresql_container --network docker_net -e "POSTGRES_USER=pgsql_admin" -e "POSTGRES_PASSWORD=$(openssl rand -base64 100)" postgresql_container

Explanation:

  • docker run: Starts a new container based on a specified Docker image.
  • -d: Runs the container in detached mode, allowing it to run in the background.
  • --name postgresql_container: Assigns the container a specific name (postgresql_container), which makes it easier to reference and manage this container.
  • --network docker_net: Connects the container to the previously created Docker network (docker_net). Containers on this network can communicate directly using container names instead of IP addresses.
  • -e "POSTGRES_USER=pgsql_admin": Sets an environment variable (POSTGRES_USER) within the container. Here, it defines pgsql_admin as the PostgreSQL superuser.
  • -e "POSTGRES_PASSWORD=$(openssl rand -base64 100)": Sets the POSTGRES_PASSWORD environment variable, which specifies the password for pgsql_admin. The command $(openssl rand -base64 100) generates a random 100-character password encoded in base64 for enhanced security.
  • postgresql_container: The name of the Docker image used to create this container, which was previously built in Command 1.

This command starts a PostgreSQL container with a secure, randomly generated superuser password, connects it to a user-defined network (docker_net), and names it postgresql_container. By using environment variables to configure the PostgreSQL user and password, it provides flexibility and security in setting up the database server.
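
Because the superuser password is generated at run time and never written to disk on the host, the only copy lives in the container’s environment. If you need it later (for example, for the local password login shown further down), one way to retrieve it is:

docker exec postgresql_container printenv POSTGRES_PASSWORD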

Our container should now be up and running. Here are some commands we can use to verify that it is:

docker ps -a

Explanation:

  • docker ps: Lists all running containers. The ps command in Docker is similar to the Unix ps command, which lists running processes.
  • -a (all): Expands the command to list all containers, not just the currently running ones. This includes containers that are stopped, exited, or paused.

Use Case

Using docker ps -a is helpful for:

  • Viewing container statuses: You can see which containers have stopped, exited with errors, or completed successfully.
  • Identifying container IDs and names: The output includes details like container IDs, names, image names, command history, creation time, and current status.
  • Troubleshooting: You can quickly see if a container failed to start, exited unexpectedly, or encountered errors.

This command is essential for managing Docker containers by providing a complete view of all containers on the system.

Here is an example of the output from docker ps -a:

docker_admin@minimumdockervm:~/projects/git/postgresql$ docker ps -a
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS      NAMES
14bd66ddca27   postgresql_container   "docker-entrypoint.s…"   47 seconds ago   Up 47 seconds   5432/tcp   postgresql_container

It’s always a good idea to check the logs and make sure everything built as planned.

docker logs postgresql_container

Explanation:

  • docker logs: Fetches and displays the logs of a specified container. These logs include any output written to standard output (stdout) or standard error (stderr) by processes running in the container.
  • postgresql_container: This is the name (or ID) of the container whose logs you want to view. In this case, postgresql_container refers to the container running PostgreSQL, as named in the docker run command.

Here is an example of some of the output you might see from this command:

docker_admin@minimumdockervm:~/projects/git/postgresql$ docker logs postgresql_container
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default "max_connections" ... 100
selecting default "shared_buffers" ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
syncing data to disk ... ok

Use Case

Using docker logs postgresql_container is useful for:

  • Troubleshooting and Debugging: Viewing log output can help you diagnose startup issues, configuration errors, or runtime problems in the container.
  • Monitoring Container Behavior: Logs show the output from applications and processes running inside the container, allowing you to track operations, errors, and other significant events.

By default, this command shows the entire log history for the container, but you can add flags (like --tail for recent lines or -f to follow live logs) for more specific needs.
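
For example, to watch only the most recent log lines live:

docker logs --tail 20 -f postgresql_container    # Press Ctrl+C to stop following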

docker exec -it postgresql_container bash

Explanation:

  • docker exec: Runs a command in an already running container. It’s commonly used to start an interactive session or run specific commands within the container.
  • -it: Combines two options:
    • -i: Keeps the input stream open, allowing you to interact with the container.
    • -t: Allocates a pseudo-TTY, providing a terminal-like interface.
  • postgresql_container: The name (or ID) of the container in which the command will run. Here, postgresql_container refers to the container running PostgreSQL.
  • bash: The command to run inside the container. In this case, bash opens an interactive shell session within the container, allowing you to execute commands as if you were in a regular Linux environment.

Logging into the container should look something like the following:

docker_admin@minimumdockervm:~/projects/git/postgresql$ docker exec -it postgresql_container bash
postgres@14bd66ddca27:/$ 

Use Case

Running docker exec -it postgresql_container bash is helpful for:

  • Inspecting and Managing the Container: You can view and modify files, check configurations, or diagnose issues within the container.
  • Running Database Commands: In a PostgreSQL container, this command allows you to directly access PostgreSQL tools and configurations.
  • Debugging: This command gives you a live, interactive session within the container, useful for debugging and checking real-time changes.

This command effectively lets you “enter” the running container and work inside it, making it ideal for direct interaction with the containerized environment.
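
You can also run one-off commands without opening a shell. For instance, to confirm the certificate files were copied into the image with the ownership and permissions set in the Dockerfile:

docker exec postgresql_container ls -l /ssl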

Once inside the PostgreSQL container, you can check the environment variables by running:

env

Explanation

  • env: This command lists all environment variables currently set in the container’s environment. Environment variables provide configuration details that can affect the behavior of the container and the applications running within it.

Why Check Environment Variables?

Inspecting the environment variables inside the PostgreSQL container allows you to:

  1. Verify PostgreSQL Configuration: The official PostgreSQL Docker image often uses environment variables to configure important settings like the superuser credentials. Common variables include:
    • POSTGRES_USER: Defines the PostgreSQL superuser name.
    • POSTGRES_PASSWORD: Specifies the password for the database superuser. In our case, a long, randomly generated base64 string.
    • POSTGRES_DB: Sets the name of the default database created upon startup.
  2. Confirm SSL/TLS and Other Security Settings: If you passed custom environment variables for SSL configuration, these should also appear. Checking them helps verify that the container has the correct settings to ensure secure connections.
  3. Troubleshoot Issues: If the container isn’t behaving as expected, checking the environment variables can reveal misconfigurations, like missing or incorrect values that could prevent PostgreSQL from starting properly.

Using env provides a quick overview of the container’s current configuration, allowing you to confirm that essential variables were set correctly during container startup. This verification step is especially important for security-sensitive configurations, like database credentials and network settings.
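
If you only care about the PostgreSQL-related variables, you can filter the output:

env | grep POSTGRES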

psql

If you run psql with no parameters in the PostgreSQL container, it attempts to log in as the postgres user, and the login fails because the database server was set up with a superuser account named pgsql_admin, not postgres. By default, when running psql without specifying a username, it tries to connect as the postgres user. It’s important to distinguish between the system account (in this case the container’s admin account) and the database superuser, as they are completely disparate user accounts. The postgres account is the default system account built into the container image. The pgsql_admin account is the superuser we specified in the docker run command.

Let’s see what happens when we log in with the user we configured, pgsql_admin, by using the following command and the POSTGRES_PASSWORD environment variable value:

psql -U pgsql_admin

Enter the POSTGRES_PASSWORD Value: When prompted, use the password to log in to the database server.

The output should look a little something like this.

postgres@438b2f9166b9:/$ psql -U pgsql_admin
Password for user pgsql_admin: 
psql (17.0 (Debian 17.0-1.pgdg120+1))
Type "help" for help.

pgsql_admin=# 

Once in the server, you can run:

\l

Explanation

  • \l: This command displays a list of all databases on the PostgreSQL server, along with some basic information about each database.
  • Under the hood, it queries the pg_database system catalog, providing a convenient way to view database details.

Output Details

When you run \l, the output includes:

  • Name: The name of each database.
  • Owner: The role or user who owns the database.
  • Encoding: The character encoding used by the database (e.g., UTF8).
  • Collate and Ctype: The collation and character type settings, which determine how text is sorted and compared in the database.
  • Access Privileges: Lists privileges granted to users or roles for accessing each database.

The output from the \l command should look similar to the following (notice the testdb that we configured within the postgresql_db_initialization_script.sql script):

pgsql_admin=# \l
                                                          List of databases
    Name     |    Owner    | Encoding | Locale Provider |  Collate   |   Ctype    | Locale | ICU Rules |      Access privileges      
-------------+-------------+----------+-----------------+------------+------------+--------+-----------+-----------------------------
 pgsql_admin | pgsql_admin | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | 
 postgres    | pgsql_admin | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | 
 template0   | pgsql_admin | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | =c/pgsql_admin             +
             |             |          |                 |            |            |        |           | pgsql_admin=CTc/pgsql_admin
 template1   | pgsql_admin | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | =c/pgsql_admin             +
             |             |          |                 |            |            |        |           | pgsql_admin=CTc/pgsql_admin
 testdb      | pgsql_admin | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | 
(5 rows)

pgsql_admin=# 
\q

Explanation

  • \q: Typing \q at the psql prompt immediately closes the psql session and returns you to the shell or command line.

We can test whether our mutual TLS authentication works by running the following command:

psql -U pgsql_admin -h localhost

This command is used to connect to the PostgreSQL server as the pgsql_admin user, specifying localhost as the host. Let’s break down each part of this command and understand why it’s different from psql -U pgsql_admin.

Explanation

  • psql: Launches the PostgreSQL interactive terminal.
  • -U pgsql_admin: Specifies the PostgreSQL username as pgsql_admin.
  • -h localhost: Sets the host to localhost, which tells psql to connect over TCP/IP instead of a local Unix domain socket.

Difference from psql -U pgsql_admin

  • Connection Method:
    • psql -U pgsql_admin (without -h localhost) connects via a local Unix domain socket (the default method when no host is specified).
    • psql -U pgsql_admin -h localhost connects over TCP/IP, even though it’s on the same machine, by specifying localhost as the host.
  • Authentication Rules:
    • PostgreSQL treats Unix domain socket and TCP/IP connections differently, based on rules set in the pg_hba.conf file. For example, local connections over a socket might use peer or scram-sha-256 authentication, while TCP/IP connections often require password-based or SSL authentication.

Using -h localhost is useful when you want to test or enforce specific network-based authentication methods, or when local socket connections are not allowed by your configuration. You should receive output that looks similar to this when connecting.

postgres@724511a61292:/$ psql -U pgsql_admin -h localhost
psql (17.0 (Debian 17.0-1.pgdg120+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off, ALPN: postgresql)
Type "help" for help.

pgsql_admin=# 

This output shows a successful connection to a PostgreSQL server over a secure SSL/TLS connection. Here’s a breakdown of each part:

Output Breakdown

  1. psql (17.0 (Debian 17.0-1.pgdg120+1)):
    • Indicates the psql client version, which is 17.0 in this case, provided by the PostgreSQL Debian package (pgdg120+1 refers to the PostgreSQL repository version).
  2. SSL connection:
    • Confirms that the connection to the PostgreSQL server is secured with SSL (TLS), meaning the data sent between the client and server is encrypted.
  3. Connection Details:
    • protocol: TLSv1.3: The version of the TLS (Transport Layer Security) protocol being used. TLS 1.3 is the latest version and offers better security and performance compared to previous versions.
    • cipher: TLS_AES_256_GCM_SHA384: Specifies the encryption algorithm (cipher) used for this connection:
      • TLS_AES_256_GCM_SHA384: AES-256 in Galois/Counter Mode (GCM) with SHA-384 hashing, providing strong encryption and integrity.
    • compression: off: Compression is disabled for the connection, meaning data isn’t compressed before encryption.
    • ALPN: postgresql: Application-Layer Protocol Negotiation (ALPN) indicates the protocol in use within the TLS session. Here, it specifies postgresql, identifying the application protocol over the encrypted channel.
  4. Type "help" for help.:
    • This prompt from psql indicates that you can type help to see a list of commands available within the PostgreSQL interactive terminal.
  5. pgsql_admin=#:
    • This is the psql prompt for the PostgreSQL user pgsql_admin. The =# symbol indicates that pgsql_admin has superuser privileges.

The output confirms that pgsql_admin has successfully connected to the PostgreSQL server over a secure SSL/TLS connection with strong encryption, using localhost as the host. This secure connection protects data in transit, ensuring privacy and integrity for all database operations.

The output shows us that our TLS authentication is working over TCP/IP. Speaking of networking, this is a good time to review the different networking options we have with Docker.

Docker networking allows containers to communicate with each other, with the host, and with external networks. Docker offers several network modes, each with unique use cases and characteristics:

1. Bridge Network (default for standalone containers)

  • Overview: Containers on the same bridge network can communicate with each other, but are isolated from other networks, including the host.
  • Use Case: Best for standalone containers needing to communicate with each other in a contained, private environment.
  • Example: Containers can be connected to a bridge network with docker network create <network-name> and docker run --network <network-name> ....

2. Host Network

  • Overview: The container shares the network stack with the host, meaning there’s no network isolation between the container and the host.
  • Use Case: Ideal for scenarios requiring high network performance or when you need the container to listen on the same ports as the host.
  • Example: Use --network host when starting a container. Note: This mode is only available on Linux.

3. None Network

  • Overview: The container has its own network namespace but is not connected to any network. There’s no external network access unless explicitly configured.
  • Use Case: Useful for security-focused applications that don’t require network connectivity.
  • Example: Start a container with --network none to fully isolate it from the network.

4. Overlay Network (for Swarm or Kubernetes)

  • Overview: Allows containers to communicate across multiple Docker hosts by creating an internal, distributed network.
  • Use Case: Primarily used in Docker Swarm or Kubernetes for orchestrating services that need to communicate across nodes.
  • Example: Created with docker network create --driver overlay <network-name> and used with Swarm services.

5. Macvlan Network

  • Overview: Assigns a unique MAC address to each container, making each container appear as a physical device on the local network.
  • Use Case: Useful for applications that need direct network access, such as when containers need to appear as separate devices on a physical network.
  • Example: Created with docker network create --driver macvlan ....

Summary

  • Bridge: Default, isolated, internal networking between containers.
  • Host: Shares host’s network stack, high performance, no isolation.
  • None: Fully isolated, no network access.
  • Overlay: Distributed network across multiple Docker hosts, for orchestrated environments.
  • Macvlan: Direct physical network access with unique MAC addresses per container.

Each network mode serves specific needs, from isolation and security to distributed communication and high performance.

Given our docker build and run commands, we know that we are using the default network mode, which is a bridge network. Bridge networking is convenient, as we can use our host’s networking to communicate with networks outside the container environment. Since we haven’t exposed any ports, ingress communication from outside the host is not currently possible. However, if we were to build another container and connect it to docker_net, it would be able to reach port 5432/tcp on the PostgreSQL container, as sketched below.
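
As a rough sketch of what that would look like, the command below starts a throwaway client container on docker_net, mounts the client certificate, key, and CA certificate generated earlier (assumed to be in the current directory), and connects with libpq’s certificate options. Treat it as illustrative rather than definitive: depending on your setup you may need to adjust the paths and ensure the mounted key’s ownership and permissions are acceptable to the client, since libpq refuses a private key owned by a different user or with loose permissions (you may need to chown/chmod a copy of the key, or run the client container with a matching --user).

docker run --rm -it --network docker_net \
    -v "$PWD/postresql_client.crt:/certs/client.crt:ro" \
    -v "$PWD/postresql_client.key:/certs/client.key:ro" \
    -v "$PWD/openssl_created_ca.crt:/certs/ca.crt:ro" \
    postgres:latest \
    psql "host=postgresql_container user=pgsql_admin dbname=testdb sslmode=verify-ca sslcert=/certs/client.crt sslkey=/certs/client.key sslrootcert=/certs/ca.crt"

If the connection succeeds, you should see an SSL connection banner similar to the one in the localhost test above, without being prompted for a password.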

Before moving on to our Docker Compose overview, let’s quickly go over how to stop and clear out the Docker container, network, and images, reclaiming the system resources they were consuming.

Stop All Running Containers

docker stop $(docker ps -q)

Explanation:

  • docker stop: Stops one or more running containers. The command requires a container ID or name to specify which container(s) to stop.
  • $(docker ps -q): This part is a subshell that runs docker ps -q to list the container IDs of all running containers:
    • docker ps: Lists all running containers.
    • -q (quiet): Outputs only the container IDs, without additional details.

This command stops all currently running containers by passing their IDs to docker stop. It’s particularly useful when you want to quickly shut down all running containers without specifying each one individually.

This is what the output of stopping all running containers looks like:

docker_admin@minimumdockervm:~/projects/git/postgresql$ docker stop $(docker ps -q)
14bd66ddca27

Clean Up Docker Resources

docker system prune -a --volumes -f

Explanation:

  • docker system prune: Removes unused Docker data, including stopped containers, unused images, networks, and other resources.
  • -a (all): Removes all unused images, not just dangling ones (images not associated with any container). This helps free up disk space by removing all unreferenced images.
  • --volumes: Removes unused Docker volumes as well. Volumes are persistent storage for containers, so this option deletes volumes that are no longer in use by any container.
  • -f (force): Runs the command without prompting for confirmation. This flag is helpful if you want to automate the cleanup process.

This command thoroughly cleans up Docker resources, reclaiming disk space by removing all unused images, stopped containers, unused networks, and volumes. Be cautious with this command, as it deletes data that may not be recoverable, especially if you still need the images or volumes.

Here is what the output of this command looks like:

docker_admin@minimumdockervm:~/projects/git/postgresql$ docker system prune -a --volumes -f
Deleted Containers:
14bd66ddca2703525c8e81cab6c1ed7e7155d889684da4341154d6e883fb0614

Deleted Networks:
docker_net

Deleted Volumes:
4087ab7504419f08b689a98a9625c30249bdf1feb2b107da8e3cdf1d036aa9ae

Deleted Images:
untagged: postgresql_container:latest
deleted: sha256:eed823c91f0bcf2f476cd276f08605a42e9d96d9ad7781b422d0a1818ccfa8db

Deleted build cache objects:
n9ps0l7srrcar02x1azyfi7ky
9kil7fmu2h0eeeip60b1dingi
ca5ywod6c7gh22lgmcfgcwred
n1dgr0sznqh3b5n1oztlbwhno
xak1me8da6wbaga65isbfv6k1
5zuj3jj5ok2lc0bmhk7zz5ipp
w71fmpytpqi9nhu9vpw23191g
aqyp72nm647tc6iw0lclhs5lr
v0q70ufv18p7lflb43si5f90v
tooerx7vp60rfw7lxkw5ytkf5
g5w7wcgtq3svpyjw6je75lgna
dij0o6c45tdfbs0c2s7sbs35c
u7gxz85k8erhqsetvq9glshka
mtyez6ywr9eafwvnsia18592u
wdhmlm9xwm31fodygzt6xz5f5
j6sjm7zet6dfov1g684li41ju
ttb7rjs322jf57i92psmsr4ek
mgp2s60tkjk82ogbq6nxv17li
09fohn6v2od0hafuat2drno2l
vfpvics3hqdju81g69091b10t
qiixy9106iyxlv6nt8stlacp0
odhpmmu5t0mwodzolgvknzyqw
kyzf0av5oln3n3uf169o3113n
rhlyv7z3lys98om6af6o9s32r
xx1hy68zzxb1s2nvgllkdd7hk
lzo350437v9g8vjj84uu3fmxq

Total reclaimed space: 72.35MB
docker_admin@minimumdockervm:~/projects/git/postgresql$ 

Now that we’ve witnessed the lifecycle of our leveled up PostgreSQL container, let’s discuss Docker Compose and why we might want to use it.

Docker Compose is a tool that simplifies the management of multi-container Docker applications. It allows you to define and manage multiple containers, networks, and volumes in a single YAML file (docker-compose.yml). This file specifies how each container should be built, configured, and connected, making it easy to set up complex applications that require multiple services, like databases, web servers, and caches, all working together.

Why Use Docker Compose?

  1. Simplifies Multi-Container Setup:
    • Docker Compose enables you to define all your containers and configurations in one file, making it easy to spin up complex, multi-container applications with a single command (docker-compose up). This is especially useful for development and testing environments.
  2. Idempotency:
    • Docker Compose is idempotent, meaning running docker-compose up multiple times will have the same result: it will only start containers that aren’t already running, recreate those that need changes, and ensure the entire setup is consistent with the docker-compose.yml file. This characteristic makes Compose ideal for infrastructure-as-code practices, where you can reliably and predictably manage container configurations and states.
  3. Easy Configuration of Networks and Volumes:
    • Compose simplifies the configuration of Docker networks and volumes. Containers defined in the same Compose file can easily communicate using container names, and you can specify shared volumes, making it straightforward to set up persistent data storage and inter-container networking.
  4. Environment Variable Management:
    • Docker Compose supports .env files, which can be used to manage environment-specific configurations, such as API keys, passwords, and hostnames, in a centralized way. Keep in mind that a plain-text .env file holding secrets runs counter to the spirit of this article, so lock it down and keep it out of version control (see the sketch after this list).
  5. Version Control:
    • The docker-compose.yml file can be added to version control (e.g., Git), making it easy to track changes to the infrastructure configuration over time. This also facilitates collaboration within development teams, as everyone can access the same configuration file and spin up the same environment.
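
If you do choose to use a .env file, here is a minimal, hedged sketch of creating one safely; the variable name matches the compose file used later in this article, and everything else is illustrative:

# Create a .env file next to docker-compose.yml; Compose reads it automatically
printf 'POSTGRES_PASSWORD=%s\n' "$(openssl rand -base64 32 | tr -d '\n')" > .env

# Lock it down and keep it out of version control
chmod 600 .env
echo ".env" >> .gitignore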

Idempotent Alternatives to Docker Compose

Here are some genuinely idempotent tools, including HashiCorp’s Terraform, which is indeed a key player in this space.

  1. Docker Swarm:
    • Docker’s native clustering solution, Swarm allows you to manage multi-container applications across multiple hosts with a Swarm configuration file similar to Docker Compose. Swarm reconciles the declared state to ensure containers are running as specified, providing idempotency within a Docker-native ecosystem.
  2. Kubernetes:
    • Kubernetes is the leading container orchestration platform for managing multi-container applications at scale. It manages the desired state of applications by ensuring each container is deployed, scaled, and updated according to the specifications in its YAML configuration files. Kubernetes continuously monitors and reconciles the state of the cluster, making it inherently idempotent, which is ideal for production environments.
  3. HashiCorp Terraform:
    • Terraform, a popular infrastructure-as-code tool, enables you to define and manage cloud infrastructure using configuration files. Terraform’s idempotency is achieved by maintaining a state file that tracks resource configurations and changes. By comparing the current infrastructure state with the desired state, Terraform can make only the necessary adjustments, making it highly reliable for managing Docker resources alongside cloud infrastructure (e.g., networks, virtual machines, and container registries).
    • Although primarily focused on infrastructure, Terraform has provider modules for Docker, enabling it to manage Docker containers, networks, and volumes in a declarative, idempotent way. This allows Docker Compose-like functionality within a broader infrastructure management context.

For production-grade, idempotent container management, Docker Swarm, Kubernetes, and Terraform offer reliable alternatives to Docker Compose, each with strengths for specific scenarios:

  • Swarm and Kubernetes are container-native solutions that manage container lifecycles effectively.
  • Terraform is an infrastructure-as-code solution with idempotent Docker support, making it suitable for managing both container and cloud resources within a unified framework.

Each tool has strengths depending on your environment’s complexity, scalability needs, and infrastructure components.

Now that we’ve got a good idea of how Docker Compose can help us, let’s set up the same container we used prior, but this time we will let Docker Compose do the heavy lifting.

The first thing we need to configure is the docker-compose.yml file. Create a new file using your preferred text editor.

services:
  postgresql_container:
    build: ./postgresql_container/
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U pgsql_admin"]
      interval: 10s
      timeout: 5s
      retries: 5

    restart: always
    environment:
      - POSTGRES_USER=pgsql_admin
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    networks:
      - docker_net

networks:
  docker_net:
    driver: bridge

Explanation:

This docker-compose.yml file defines a Docker Compose configuration for managing a PostgreSQL container with a health check, environment variables, and networking setup. Here’s a breakdown of each part:

services

Defines the application’s services. In this case, there’s only one service, postgresql_container, which is set up to run a PostgreSQL instance.

postgresql_container
  • build: ./postgresql_container/:
    • Specifies the build context for the service. Docker Compose will use the Dockerfile located in the ./postgresql_container/ directory to build the image for postgresql_container.
  • healthcheck:
    • Configures a health check to monitor the container’s health and ensure PostgreSQL is ready to accept connections.
    • test: Runs pg_isready -U pgsql_admin as the health check command, which checks if the PostgreSQL service is accepting connections for the specified user (pgsql_admin).
    • interval: 10s: Runs the health check every 10 seconds.
    • timeout: 5s: Allows each health check 5 seconds to respond before considering it a failure.
    • retries: 5: Considers the container “unhealthy” after five consecutive failed checks.
  • restart: always:
    • Configures the container to automatically restart if it stops for any reason, including failures. This ensures the PostgreSQL container stays running consistently, even after reboots or errors.
  • environment:
    • Defines environment variables for configuring the PostgreSQL server.
    • POSTGRES_USER=pgsql_admin: Sets the PostgreSQL superuser to pgsql_admin.
    • POSTGRES_PASSWORD=${POSTGRES_PASSWORD}: Sets the password for pgsql_admin using an environment variable from the host. ${POSTGRES_PASSWORD} will pull the value of POSTGRES_PASSWORD from the environment when running docker-compose up, allowing for secure configuration without hardcoding sensitive values in the YAML file.
  • networks:
    • Attaches the postgresql_container service to the specified network (docker_net), allowing it to communicate with other containers on the same network.

networks

Defines custom networks for the services. Here, it creates a single network:

docker_net
  • driver: bridge:
    • Specifies the network type as a bridge network, which provides private, isolated networking between containers on the same network. Containers on docker_net can communicate with each other using their service names.
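
Before bringing anything up, it can be useful to have Compose render the fully resolved configuration (with environment variables interpolated), and, once the service is running, to query the health state the healthcheck reports. These are standard Docker/Compose commands; the container name shown is the one Compose generates later in this walkthrough:

# Render the compose file with variables substituted
docker-compose -f docker-compose.yml config

# After startup, report the health state produced by the healthcheck
docker inspect --format '{{.State.Health.Status}}' postgresql_postgresql_container_1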

Since we’ve already covered all of the other files needed to bring up the container, there is just one additional script we need to add, because we are randomly generating a string for the POSTGRES_PASSWORD key-value pair. For this example we named it pre-docker-compose_script.sh. Here is the code for this script:

#!/bin/bash

gen_pgpass_envar() {
        #Generate random password for initial Postgresql
        export POSTGRES_PASSWORD=$(openssl rand -base64 100 | tr -d '\n')
        if [ -z "$POSTGRES_PASSWORD" ]; then
                echo "Error: POSTGRES_PASSWORD is not set." >&2
                exit 1
        fi
}


# Main function
main() {
    gen_pgpass_envar

}

# Call the main function
main

This bash script generates a random password for PostgreSQL, assigns it to the POSTGRES_PASSWORD environment variable, and checks if it was successfully set. Here’s a breakdown of each part:

#!/bin/bash

Explanation: Specifies the script should be run using the bash shell.

gen_pgpass_envar() {
    # Generate random password for initial PostgreSQL
    export POSTGRES_PASSWORD=$(openssl rand -base64 100 | tr -d '\n')
    if [ -z "$POSTGRES_PASSWORD" ]; then
        echo "Error: POSTGRES_PASSWORD is not set." >&2
        exit 1
    fi
}

Explanation:

gen_pgpass_envar():

  • Defines a function named gen_pgpass_envar.
  • export POSTGRES_PASSWORD=$(openssl rand -base64 100 | tr -d '\n'):
    • Generates 100 random bytes with openssl rand -base64 100 and encodes them in base64, producing a password roughly 136 characters long.
    • tr -d '\n' removes any newline characters to ensure the password is a single continuous string.
    • export sets POSTGRES_PASSWORD as an environment variable, making it available globally for any processes started by this shell.
  • if [ -z "$POSTGRES_PASSWORD" ]:
    • Checks if POSTGRES_PASSWORD is empty. This could happen if there was an error during password generation.
  • echo "Error: POSTGRES_PASSWORD is not set." >&2:
    • If POSTGRES_PASSWORD is empty, outputs an error message to stderr.
  • exit 1:
    • Exits the script with status 1, indicating an error if the password generation failed.
# Main function
main() {
    gen_pgpass_envar
}

Explanation:

  • Defines the main function, which serves as the main execution flow for the script.
  • Calls gen_pgpass_envar, generating and exporting the POSTGRES_PASSWORD variable.
# Call the main function
main

Explanation:

  • Calls the main function to start the script. This is a common approach to keep script execution organized, making it easier to expand the script by adding additional functions if needed.

This script defines and calls a function to generate a secure, random password for PostgreSQL, assigns it to the POSTGRES_PASSWORD environment variable, and verifies that it was set correctly. If password generation fails, the script outputs an error and exits. This approach makes sure POSTGRES_PASSWORD is always securely generated before proceeding with other operations in a larger script. Also, don’t forget to set your permissions for this script which should be:

chmod 700 pre-docker-compose_script.sh
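
If you want to confirm the variable is populated after sourcing the script, without echoing the secret itself, a quick sanity check is to print only its length (136 characters for 100 base64-encoded random bytes):

source ./pre-docker-compose_script.sh
echo "POSTGRES_PASSWORD length: ${#POSTGRES_PASSWORD}"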

In order for our environment variable to be set properly, we need to create a script that performs all of the tasks needed to build and bring up our PostgreSQL container leveraging Docker Compose. We have named this file up_docker_compose.sh. Once again, crank up your favorite text editor and add the following code:

#!/bin/bash

source ./pre-docker-compose_script.sh
docker-compose -f docker-compose.yml up --build -d
Explanation:
  1. #!/bin/bash
    • Specifies that the script should be executed using the bash shell.
  2. source ./pre-docker-compose_script.sh
    • source: Runs an external script file in the current shell environment, rather than in a subshell.
    • ./pre-docker-compose_script.sh: Specifies the script file to be sourced, which is located in the current directory (./). Sourcing this file means any environment variables, functions, or commands defined in pre-docker-compose_script.sh are loaded and available in this script.
    • Purpose: Commonly used to set up environment variables or run prerequisite commands before the main commands in the current script.
  3. docker-compose -f docker-compose.yml up --build -d
    • docker-compose: Invokes Docker Compose, a tool for managing multi-container applications.
    • -f docker-compose.yml: Specifies the Docker Compose file to use. Here, docker-compose.yml in the current directory is specified explicitly.
    • up: Starts the services defined in the Docker Compose file.
    • --build: Builds images before starting the containers, ensuring the latest code and configuration changes are applied to the images.
    • -d: Runs the containers in detached mode, meaning they’ll run in the background, freeing up the terminal.

This script:

  1. Sources pre-docker-compose_script.sh to load any necessary environment variables, configurations, or functions needed for the Docker Compose setup.
  2. Runs docker-compose up with options to build images and run the containers in detached mode, based on the specified docker-compose.yml file.

This approach ensures that any required setup defined in pre-docker-compose_script.sh is applied before launching the Docker environment.

Once again, don’t forget to set the file permissions:

chmod 700 up_docker_compose.sh

Since we have an up, we must have a down. Here is the script for bringing the container down and clearing up resources. We’ve appropriately named it down_docker_compose.sh:

#!/bin/bash

docker-compose down
docker system prune -a --volumes -f

for vol in $(docker volume ls | grep -vE "^DRIVER " | awk '{print $2}'); do
	if [ -n "$vol" ]; then
		printf "Docker volumes to remove.\n"
		docker volume rm ${vol}
	else
		printf "No docker volumes to remove.\n"
	fi

done

Explanation:

docker-compose down

Explanation: Stops and removes all containers, networks, and images created by docker-compose up as specified in the docker-compose.yml file. This command gracefully shuts down the running Docker Compose environment.

docker system prune -a --volumes -f

Explanation:

  • docker system prune: Cleans up unused Docker resources.
  • -a: Removes all unused images, not just dangling ones.
  • --volumes: Deletes all unused volumes (volumes that are not actively attached to a container).
  • -f: Forces the prune operation without prompting for confirmation.

This command frees up disk space by removing all unused containers, images, networks, and volumes, ensuring a clean Docker environment.

for vol in $(docker volume ls | grep -vE "^DRIVER " | awk '{print $2}'); do
  • Explanation:
    • This loop iterates over all Docker volumes.
    • docker volume ls: Lists all Docker volumes on the system.
    • grep -vE "^DRIVER ": Filters out the header line (DRIVER VOLUME NAME) from the docker volume ls output.
    • awk '{print $2}': Extracts the second column (volume names) from the output.

The result is a list of volume names, which are then assigned one by one to the vol variable within the loop.

	if [ -n "$vol" ]; then

Explanation:

  • [ -n "$vol" ]: Checks if the vol variable is non-empty. The quotes matter: without them, the test is always true when vol is empty.
  • Purpose: Ensures that the command only tries to delete a volume if one exists (avoiding errors from empty values).
		printf "Docker volumes to remove.\n"
		docker volume rm ${vol}

Explanation:

  • printf "Docker volumes to remove.\n": Prints a message indicating that Docker volumes are being removed.
  • docker volume rm ${vol}: Deletes the Docker volume specified by vol.

This removes each unused Docker volume identified in the loop.

	else
		printf "No docker volumes to remove.\n"
	fi

Explanation:

  • else printf "No docker volumes to remove.\n": If no volume names were found, this message is printed, indicating there are no volumes to remove.

Script overview:

  1. Shuts down all Docker Compose containers.
  2. Removes all unused Docker resources, including images, containers, networks, and volumes, with docker system prune.
  3. Iterates over all remaining Docker volumes and removes them one by one, printing a message for each volume removed or if no volumes are left.

The script ensures that the Docker environment is completely cleaned, freeing up disk space and removing any stale resources. We will need to set the permissions on this script as well:

chmod 700 down_docker_compose.sh 
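
When you eventually run the down script, a quick way to confirm the environment really is empty is to list what remains; the container and volume listings should come back empty, and the network listing should only show Docker's built-in bridge, host, and none networks:

docker ps -a        # no containers expected
docker volume ls    # no volumes expected
docker network ls   # only the default bridge, host, and none networks remain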

Once all of our files have been created, configured and have had their permissions set, we are ready to crank this bad boy up leveraging Docker Compose.

./up_docker_compose.sh 

Explanation:

  • ./: Specifies the current working directory as the location of the script. This ensures that up_docker_compose.sh is executed from the directory where it resides. The ./ prefix tells the shell to look in the current directory rather than in the system’s PATH.
  • up_docker_compose.sh: The name of the script file being executed. This script contains commands to generate the necessary environment variables as well as use Docker Compose to bring up our PostgreSQL container.

Here is an example of the output from running the up_docker_compose.sh script:

./up_docker_compose.sh 
Creating network "postgresql_docker_net" with driver "bridge"
Building postgresql_container
[+] Building 7.1s (15/15) FINISHED                                                                                                                                               docker:default
 => [internal] load build definition from Dockerfile           
...
Creating postgresql_postgresql_container_1 ... done

Let’s take a look at the differences when running:

docker_admin@minimumdockervm:~/projects/git/postgresql$ docker ps -a
CONTAINER ID   IMAGE                             COMMAND                  CREATED         STATUS                            PORTS      NAMES
1cc758e23ecc   postgresql_postgresql_container   "docker-entrypoint.s…"   5 seconds ago   Up 4 seconds (health: starting)   5432/tcp   postgresql_postgresql_container_1

Notice how Docker Compose names the container by combining the parent directory name, the service name we configured in docker-compose.yml, and an integer suffix indicating the instance number, which becomes helpful when running multiple instances of the same service.
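
Compose can also report its services with their generated names, state, and health, and follow their logs, which is often quicker than scanning docker ps output. Run these from the directory containing docker-compose.yml:

docker-compose ps                             # service names, state, and health
docker-compose logs -f postgresql_container   # follow the service's logs (Ctrl-C to stop)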

We can perform the same verification steps we used in our first example using just the Docker commands. Here is an example of some of our previous testing and verification:

docker_admin@minimumdockervm:~/projects/git/postgresql$ docker exec -it postgresql_postgresql_container_1 bash
postgres@1cc758e23ecc:/$ psql -U pgsql_admin -h localhost
psql (17.0 (Debian 17.0-1.pgdg120+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off, ALPN: postgresql)
Type "help" for help.

pgsql_admin=# \l
                                                          List of databases
    Name     |    Owner    | Encoding | Locale Provider |  Collate   |   Ctype    | Locale | ICU Rules |      Access privileges      
-------------+-------------+----------+-----------------+------------+------------+--------+-----------+-----------------------------
 pgsql_admin | pgsql_admin | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | 
 postgres    | pgsql_admin | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | 
 template0   | pgsql_admin | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | =c/pgsql_admin             +
             |             |          |                 |            |            |        |           | pgsql_admin=CTc/pgsql_admin
 template1   | pgsql_admin | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | =c/pgsql_admin             +
             |             |          |                 |            |            |        |           | pgsql_admin=CTc/pgsql_admin
 testdb      | pgsql_admin | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | 
(5 rows)

pgsql_admin=# \q
postgres@1cc758e23ecc:/$

Conclusion:

I hope this guide has shown the importance and benefits of securing PostgreSQL connections in Docker containers through mutual TLS authentication. By implementing certificate-based verification and encrypted connections, we’ve reduced the attack surface, enhanced data confidentiality, and increased the expectation that only trusted users and systems can access the database. With the convenience of Docker and Docker Compose, setting up secure and robust environments has never been easier. Whether for testing, development, or production, these steps bring PostgreSQL authentication and transport to the next level.

Helpful links:
https://docs.docker.com/guides/databases/
https://docs.docker.com/compose/
https://www.postgresql.org/docs/current/ssl-tcp.html
https://docs.openssl.org/master/man1/#copyright
https://www.gnu.org/software/bash/manual/bash.html

How-to build a resilient Raspberry Pi 3 Kali Box

[Disclaimer: The following article is intended for educational purposes only.  The information disclosed is meant to be setup in a lab environment or with expressed written permission from relevant owners.  Any legal issues resulting from the use of this information, is solely the responsibility of the reader.  Any loss of data or damage caused by using any of the information within this article is the sole responsibility of the reader.]

This article is intended to show one of the many options for providing an inexpensive and flexible on-premise penetration testing system.  The intended audience for this article is penetration testers, internal red team testers, security engineers or anyone with the desire to learn new things.  Some of the content may be a bit elementary to some of you, however, keep in mind that we all started somewhere and if this gives just one person the spark to get interested in the information security space, the time was completely worth it.  At times, some of the concepts many of us take for granted will be explained in greater detail to provide a more complete picture.  The following is a step-by-step instruction for building, configuring and deploying a system that will create and maintain a continual connection to a specified SSH server.  It is important to note that there are many different solutions available for providing a penetration test system, such as a virtual image (OVA/OVF), laptop/desktop, live usb/cd…  However, it is this author’s opinion that many of the complications associated with having a customer or third party deploy one of the aforementioned solutions can be avoided by using a pre-configured small form factor system such as the Raspberry Pi.  There are many small form factor options, however the author has chosen the Raspberry Pi 3 system for its cost, large user community and versatility.  The only assistance on-site personnel would need to provide for deployment is power and network connectivity.  While there are many how-to articles regarding the steps required to build a penetration testing system, this article will focus on not only the build of the machine but the detailed practical configurations, applications and potential pitfalls related to setting up such a system.  Finally, additional feature development will be discussed for future functionality and resilience.

Before getting too far into the weeds so to speak, it would be appropriate to discuss why one would even need a dedicated on-site system for penetration testing.  First, sending an on-premise system to a customer or geographically remote site in lieu of sending a security professional can drastically reduce the costs associated with travel expenses and time.  For the cost of a moderately priced dinner bill ($50-60), a Raspberry Pi and relevant accessories can be purchased.  Travel expenses alone for a one week trip start around $1,500 and increase based on distance traveled etc.  There may be instances where a customer requires an on-site visit, for instance government agencies or highly secure environments that do not permit remote connectivity; in these instances the proposed solution would not be a good fit.  For organizations interested in keeping penetration testing costs down that do not have remote access restrictions, an on-premise system is the way to go.

Second, having an on-premise machine provides flexibility for both customer and tester.  Once a dedicated system is put in place, logistics regarding physical access are for the most part eliminated.  The system detailed within this article will be configured to phone home rather than require the customer to provide remote access such as VPN, removing the need for additional user accounts and configurations.  This does not circumvent controls in place by the customer, as the appropriate connectivity will still need to be permitted.  As mentioned above, the only requirements to be fulfilled by on-site personnel will be power and appropriate network access including outbound access to the relevant server if agreeable.

This project will consist of building a Raspberry Pi 3 system running Kali Linux 2.0.  The method used for the phone home feature includes creating a reverse SSH tunnel to and from a relevant SSH server.   The final product will be a headless system, requiring only power and appropriate network connectivity.

The following is a list of items used for this project:

Raspberry Pi 3 with Case and Power Supply
32GB MicroSD Card
Kali Linux image for Raspberry Pi 2/3
SD Card Reader
Mac/*nix/Windows System (to perform imaging and access)
SSH Server
Optional:
2 Dynamic DNS hostnames
1 for the SSH Server
1 for the Kali Linux machine

NOTE:  A Mac was used to perform the setup and configuration and some of the commands displayed may reflect as such.

STEP 1 Image Micro SD Card:

Assuming all of the required items have been obtained, the first step in the build process is to image the microSD card.  The SD card used was speed class 10, which is highly recommended by the Kali team.  The minimum storage capacity for Kali Linux is 8GB, but the card used for this demonstration was 32GB.  Before imaging your new SD card with the Raspberry Pi Kali Linux img file, it is always a good idea to verify the checksums to determine file integrity.  This can be accomplished by running the following (* substituted for version):

shasum kali-*.img.xz

Command explanation:

shasum = Built-in *nix utility for calculating checksums for files and strings.  The default checksum hash is SHA1, which happens to be the format used on the Kali ARM image download page.

kali-*.img.xz = the file(s) to calculate checksums for.

Compare the output with the checksum specified by the Kali team for the image you downloaded:

If the checksums do not match, the integrity of the file should not be trusted.  Try downloading the file again.  Rinse and repeat as necessary until you can confirm the integrity of the compressed image file.
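
As a convenience, shasum can perform the comparison for you if you feed it the published hash in "hash  filename" form; the hash and filename below are placeholders, not real values:

# Substitute the published SHA1 and your actual filename (two spaces between them)
echo "<published_sha1>  kali-<version>-rpi.img.xz" | shasum -c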

Now that the compressed image file integrity has been verified, the img file must be extracted.  To extract the image file from the .xz file, use an appropriate compression utility (tar, unxz, 7zip).  Using OSX 10.11.4 (El Capitan), the native gunzip or The Unarchiver utility works splendidly for .xz files.

OSX extract command:
gunzip kali-*.img.xz
Linux extract command:
unxz kali-*.img.xz

If you have not already done so, insert your microSD card into the card reader.  The next step is determining the microSD card storage device id within the setup system.

For OSX, run the following command to output your storage devices:
diskutil list
For linux, run this command to output your storage devices:
fdisk -l
VERY IMPORTANT: MAKE SURE YOU ARE IMAGING THE MICROSD STORAGE DEVICE.  OTHERWISE, YOU MAY END UP IMAGING THE WRONG DEVICE AND CAUSING SIGNIFICANT DATA LOSS.

Make sure you unmount the device as dd will fail due to the device being in use.  Here are the commands for unmounting a storage device:
OSX:
sudo diskutil unmountDisk /dev/disk?s?
Note:  Using just the device id was not sufficient.  Use the more specific partition within the device.
Linux:
sudo umount /<mount point> and/or /dev/<device id>

 

The following is an example of the command used to copy the Kali image to the microSD card.
Both OSX and Linux (parameters in <> are specific to each setup; on Linux, use an uppercase block size, bs=1M, and the appropriate device path such as /dev/sdX):
sudo dd if=kali-<version>-rpi2.img of=/dev/disk<sdcardid> bs=1m

While the image is being copied (may take over 30 mins) to the microSD card, put together your Raspberry Pi device.  Install the Pi board inside the case according to the manufacturer’s instructions.  Purchasing a case is highly recommended, as the components and board can be easily damaged.
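
Since the copy can run for a long time with no output, it helps to watch its progress.  Recent versions of GNU dd on Linux support a status flag, and on OSX pressing Ctrl-T in the terminal sends SIGINFO to dd, which prints how many bytes have been copied so far.  The placeholders are the same as above:

# Linux (GNU dd) - note the uppercase block size
sudo dd if=kali-<version>-rpi2.img of=/dev/<sdcardid> bs=1M status=progress
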
Once the dd command is complete, mount the SD card to verify the file system was successfully loaded.  This can be done by running a mount command or physically removing the card from the card reader and re-inserting it.  Here is an example of the command to run for mounting the storage device:

OSX:
sudo diskutil mountDisk /dev/disk?s?
Linux:
sudo mount /dev/<device id> /<mount point>

 

Given the file system is intact, the next step of this process is to unmount the microSD card and physically move it from the card reader to the Raspberry Pi device.  Use the previously mentioned commands to unmount the imaged storage device.

 

STEP 2 Harden the System:

At this point, the machine is ready to be configured.  There are a couple options regarding the initial setup of the new machine:

  1. Physically connect to an existing Ethernet network with working DHCP service.
  2. Connect a keyboard, monitor and mouse and log in locally.

In the case of this build, we want to use this system as a headless unit, meaning the only physical connections required are power and Ethernet.  Since the Raspberry Pi 3 has a built-in wireless interface, once the initial setup is complete, the only physical connection required would be power.  Even then, you could completely eliminate all wires with wireless connectivity, battery power and/or a consistent form of wireless power (solar, wind, geothermal…really?).  While going completely wireless could be useful for some applications, this particular use case will use power and Ethernet.

For this project, the newly built Raspberry Pi 3 was connected to an Ethernet network with a DHCP server providing an automatically assigned IP address after the unit is powered on.  If you have access to your DHCP server service, you could determine the IP address that was automatically assigned at startup.  Or you could use your network forensics skills to find the new system.  Maybe another post going over some basic network forensics is in order but for now, let’s just assume we have access to the DHCP server and we see a new lease for a machine with the name of kali (Kali’s default hostname) and an IPv4 address of 10.1.1.1.  Since the SSH server is automatically installed and running for this particular Kali image, the only thing left to do is connect to this machine using your favorite SSH client (native *nix client, Cygwin, putty…) to begin configuration.  The following is the command to connect to your running Kali linux instance:

*nix:
ssh root@10.1.1.1
The default password for the Kali linux distributions is toor.  Once successfully logged in, configuration and hardening is in order.  The following sections will be broken out into the different tasks to help harden the OS.  Since the device is intended to be installed within a remote location (relative to the security professional/tester), physical access makes securing said device a challenge.  For the truly paranoid, an attempt will be made to try to discuss the major physical attack vectors.

Full disk encryption:
The first method of hardening any system includes encrypting the entire disk and associated partitions.  Unfortunately, for our particular use case, full disk encryption is not necessarily the most advantageous, since it would require authentication prior to OS boot.  Perhaps, an interesting project would be to develop a full disk encryption authentication function that would not require user input.  While full disk encryption is a good measure to take for system hardening we are going to skip this measure for practicality reasons.

Reset root password:
While passwords are an antiquated form of authentication, they are still very embedded in even the most modern systems to date.  In light of this fact, the root password should be changed to a long, complex and pseudo random string.  I say pseudo random for all you physicists out there.  For our purposes, pseudo random strings will suffice.  One of the methods I use to add a little entropy to pseudo-randomly-generated passwords is to change and/or add several characters throughout the generated string.  It may sound a bit paranoid, but refrain from using online password generators as these strings are coming from an untrusted entity and could potentially be captured/stored/predictable.  Use a password generator script or program that can be verified for malicious code.  I don’t want to get too far into the weeds regarding password theories, just know that it is very important to have a long (the longer the better) and highly unpredictable string as your password.  I have run into situations where a system accepted an extremely long, complex password, but problems arose when applications using said password failed due to input limitations.  In conjunction with a password manager such as KeePass, secure copy and paste functions will allow you to limit the attack surface regarding passwords.  Yes, for a limited time your password is contained within memory, however, the practicality and convenience of using a copy and paste function supersedes the risks.  Think about it: if an attacker has access to your memory, is the use of a password manager copy and paste function any less secure than typing a more than likely weaker password housed in your brain and input through your keyboard?  The following is the command to change your password in the vast majority of unix based systems:
passwd

The passwd command used without specifying a username will change the password for the user account running the command.

For completeness, it is important to note that you typically wouldn’t want to run as root on any system, however, an exception is made for penetration testing as many of the tools required also need super admin privileges.  The following is the Kali Linux Root User Policy (http://docs.kali.org/policy/kali-linux-root-user-policy ):

Most Linux distributions, quite sensibly, encourage the use of a non-privileged account while running the system and use a utility like sudo when and if escalation of privileges is needed. This is sound security advice: this provides an extra layer of protection between the user and any potentially disruptive or destructive operating system commands or operations. This is especially true for multiple user systems, where user privilege separation is a requirement — misbehavior by one user can disrupt or destroy the work of many users.

Kali Linux, however, as a security and auditing platform, contains many tools which can only run with root privileges. Further, Kali Linux’s nature makes its use in a multi-user environment highly unlikely.

For these reasons, the default Kali user is “root”, and no non-privileged user is created as a part of the installation process. This is one reason that Kali Linux is not recommended for use by Linux beginners who might be more apt to make destructive mistakes while running with root privileges.

Add a less privileged user account:
Adding a low privileged user account serves several purposes, such as restricting SSH access to just this account.  By restricting access to a standard user account, a layer is added which increases the necessity for privilege escalation if SSH is ever compromised.  This does not go against the Kali Linux Root User Policy as we are simply hardening our administrative access to the system.  This system will be going into an unknown and potentially hostile environment, therefore, taking simple measures to help prevent abuse is prudent.  Again, make sure the password you set for the new account is long and complex.
To create a new user account with just the default privileges, run the following command:
adduser <username>


Rekey SSH public and private keys:
Rekeying the SSH keys increases the chances of maintaining confidentiality regarding SSH sessions.  The public and private keys already configured within the Kali image are not private and should therefore be reset to ensure the SSH private key is truly private.  Not only should you reset your SSH keys, it’s a good opportunity to increase the bit strength as well.  Currently, 2048-bit keys are acceptable, however, increasing the bit strength to 4096 provides a much higher level of confidentiality.  You never know what kind of information you’re going to find, so reducing the likelihood of that information being exposed is recommended.  Take note that increasing the bit strength can also have negative performance impacts.  Many of the tasks required for penetration testing do not require an abundance of system resources, and those that do (password cracking, decryption…) can be offloaded to more powerful systems.  The following are the commands to use in order to rekey the existing SSH keys:

[Run as root]
rm -v /etc/ssh/ssh_host_*

The above rm command removes all files starting with ssh_host_ within the /etc/ssh/ directory.

Command explanation:
rm = the utility used in *nix systems to delete files
-v = verbose, outputs what is being removed.
/etc/ssh/ssh_host_* = identifies the files to be deleted.  The * wildcard at the end of the ssh_host_ string means that any file prefixed with ssh_host_ will be used as input for the associated command.

dpkg-reconfigure openssh-server


This command reconfigures the openssh server with the default configuration and re-generates the host keys removed earlier.

Command explanation:
dpkg-reconfigure = Reconfigures debian packages after they have already been installed.
openssh-server = The identified package to be reconfigured.

ssh-keygen -N "" -t rsa -b 4096 -f /etc/ssh/ssh_host_rsa_key

The command above generates new SSH keys per the following modifiers:
ssh-keygen = command used for generating and managing ssh keys.
-N "" = Sets a blank passphrase for the keys to be generated.
-t rsa = Set the key type to rsa.
-b 4096 = Set the bit strength to 4096.
-f /etc/ssh/ssh_host_rsa_key = Generates the SSH key files with the prefix ssh_host_rsa_key within the /etc/ssh/ directory.

[Run as the standard user account from the associated /home/user directory]
ssh-keygen -t rsa -b 4096

This command creates two files within the ~/.ssh/ directory with the names of id_rsa and id_rsa.pub (~ = /home/<username> directory).

Command explanation:
-t rsa = Sets the key type to rsa
-b 4096 = Sets the bit strength to 4096
You will be prompted for a passphrase to be associated with the keys, however, for convenience and simplicity no passphrase will be configured.  Simply hit enter to continue answering the questions regarding the key setup.

Add trusted public keys for clients
vi ~/.ssh/authorized_keys
The above command opens the vi editor (native to *nix) in order to create a file named authorized_keys within the ~/.ssh/ directory.  The contents of this file should be populated with that of the public SSH keys for trusted users/systems.  We will discuss this in greater detail later in the article.  The following command can be run on the trusted machines to output the specified keys:
cat ~/.ssh/id_rsa.pub

Command explanation:
cat = outputs the contents of a file to the screen.
~/.ssh/id_rsa.pub = identifies the file to output
Regarding this use case, the above command should be run from the appropriate user account and systems such as the setup computer and the destination SSH server to be used for the reverse SSH tunnel.
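
If password authentication is still enabled at this point (it is, until the hardening below), ssh-copy-id is a handy shortcut that appends your public key to the remote account's authorized_keys file and sets sensible permissions; the user and address match the example system used here:

ssh-copy-id -i ~/.ssh/id_rsa.pub lowprivuser@10.1.1.1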

chmod 600 ~/.ssh/authorized_keys

After applying this command, the authorized_keys file should be readable and writable only by its owner.
Command explanation:
chmod = The command used to modify file/directory permissions in *nix systems.
600 = Indicates the permissions to be applied:
6 = Read (4) + Write (2) for the owner of the file.
0 = No permissions for the associated group for the file.
0 = No permissions for others (not owner or group member).
~/.ssh/authorized_keys = the file for which the permissions will be applied.
service ssh restart

Command explanation:
service = Command used to manipulate a service or daemon.
ssh = The service to be manipulated. (On some distros it would be sshd.)
restart = The manipulative action to be taken against the previously identified service.

The above command can be run from within an SSH session and will not close or disconnect the existing session.

Reset the hostname of the Kali Linux System:
Renaming the system is not a true hardening technique.  However, it can help disguise the system, since a hostname of kali quite obviously outs the system as a penetration testing device.  In some instances, the kali hostname will be sufficient; this is just another measure to help stay quieter.  As the Backtrack/Kali mantra goes, “The quieter you become the more you are able to hear.”  The following are the commands needed to change the system hostname:
hostname <new_hostname>
Command explanation:
hostname = command used to show or set the system hostname
<new_hostname> = The new hostname the system will be set to.
vi /etc/hosts

Modify the line within the hosts file that specifies kali, replacing it with the hostname you have selected.

Test connectivity from a trusted machine using SSH PKI for authentication:
This step requires the appropriate id_rsa.pub string to be populated within the ~/.ssh/authorized_keys file for the standard user created in the steps above.  The following is an example of a connection to the Kali linux machine from an authorized user/system:
ssh lowprivuser@10.1.1.1
Command explanation:
ssh = Command used to invoke the SSH client.
lowprivuser@10.1.1.1 = The user account to be used at the destination IP address of the system to be connected to.  In this case, our new Kali Linux Raspberry Pi 3 machine.  Given the appropriate public keys have been added to the previously mentioned authorized_keys file, a new session to the Kali system will be established, dropping you in the lowprivuser’s home directory (/home/lowprivuser).
The last step of testing would be to change to the root user context by running the following command:
su -
Command explanation:
su = The command used to become another user within an already logged in session.
- = Indicates loading the target user’s environment, similar to what would be applied if logging in locally as said user.
Since there is no username specified, it uses the default user account of root.  The command above is the same as running su - root.  But for efficiency’s sake, we can drop off specifying root as it is unnecessary.

Now that the SSH public key authentication has been tested and verified successful, the SSH server configuration can be hardened a little more than it already is.  The reason you want to test your authentication thoroughly is that you could potentially lock yourself out of the system requiring the reversal of the following changes if problems are experienced.

Harden SSH server in /etc/ssh/sshd_config:
AllowUsers lowprivssh
PermitRootLogin no
ChallengeResponseAuthentication no
PasswordAuthentication no
# Message Authentication Code (Hash, only SHA2-512)
MACs hmac-sha2-512
# Ciphers (only secure AES-256)
Ciphers aes256-cbc,aes256-ctr
# Key Exchange algorithms (Elliptic Curve Diffie-Hellman)
# DH-SHA-256 included for compat with PuTTY-WinCrypt clients
KexAlgorithms ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256

Parameter explanations:
AllowUsers lowprivssh = Allows only the lowprivssh account to connect to the SSH server.   Users can be specified in a space separated syntax (lowprivssh user1 user2 …)

PermitRootLogin no = Do not allow the root user account to login via SSH.  The AllowUsers parameter should take care of this but disabling root from logging in directly deserves a mention.

ChallengeResponseAuthentication no = Does not allow challenge-response such as those used within the login.conf configuration (PAM, RADIUS, LDAP…)
PasswordAuthentication no = Does not allow the use of username and password authentication.
MACs hmac-sha2-512 = Uses only the SHA2 512-bit hashing algorithm for establishing SSH connections.  More information on the SHA2-512 hashing algorithm can be found here: https://en.wikipedia.org/wiki/SHA-2

Ciphers aes256-cbc,aes256-ctr = Only the AES-256 cipher block chaining and AES-256 counter ciphers are allowed for establishing SSH sessions.  The reasons these two cryptographic ciphers were chosen are out of the scope of this article.  More interesting reading regarding these ciphers can be found here: https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation

KexAlgorithms ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256 = The only key exchange algorithms to be used for establishing SSH sessions are Elliptic Curve Diffie-Hellman SHA2 on the nistp521 curve and the diffie-hellman-group-exchange-sha256 method.  For more information regarding these two algorithms see:
https://www.ietf.org/rfc/rfc5656.txt
http://tools.ietf.org/html/rfc4419

Once all of the above sshd_config parameters have been added/modified, a restart of the ssh service is necessary.
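
Before restarting, it is worth letting the SSH daemon validate the new configuration; a typo in sshd_config followed by a restart is one of the easier ways to lock yourself out.  Both options are standard OpenSSH:

sshd -t                        # validate sshd_config; silent when the file parses cleanly
sshd -T | grep -i allowusers   # dump the effective value of a given keyword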

service ssh restart
Suffice it to say that there are many more hardening measures you could take regarding SSH server configuration but the above should be good enough for this particular use case.

STEP 3 Configure Automated Phone Home:

Now that the basic framework is set up for this machine, we can start bringing the phone home capabilities into the picture.  Before we start discussing the configuration, let’s talk about how the system is going to know how to establish a connection back to home base.  Essentially, the only requirements for establishing a reverse SSH tunnel are:
1)    The FQDN or IP address of the home SSH server.
2)    Access to the relevant SSH server from the client site.
3)    The appropriate configuration for autossh.
For this demonstration, we are going to define the requirements as follows:
Home  SSH Server FQDN/IP = homessh.example.com/1.1.1.1

Assumptions:
1)    The home server infrastructure allows connections from clients from anywhere in the world.  Preferably, this server would sit in a very restrictive DMZ as it will be internet facing.  We will discuss ways to reduce the attack surface for this server but for now we will simply make the assumption that it is connectable from any relevant IP address.
2)    The home SSH server has been built and hardened using the same parameters mentioned regarding the Kali linux SSH setup.
3)    Assume this is a minimalist build providing only the services required for this use case.
4)    The Kali box is going to have the relevant access allowed from the customer site to the home server.
5)    A stable Internet connection exists between both the customer and home server.

Given the above definitions and assumptions, we can move onto setting up the reverse SSH tunnel(s).  There is a great write-up and how-to here https://raymii.org/s/tutorials/Autossh_persistent_tunnels.html.  Luckily, most of the items required have already been configured to this point.  The last step in this project is to set up the autossh package and make it resilient.  According to its man page, autossh is described as follows:
autossh is a program to start a copy of ssh and monitor it, restarting it as necessary should it die or stop passing traffic.

The original idea and the mechanism were from rstunnel (Reliable SSH Tunnel). With version 1.2 of autossh the method changed: autossh uses ssh to construct a loop of ssh forwardings (one from local to remote, one from remote to local), and then sends test data that it expects to get back. (The idea is thanks to Terrence Martin.) With version 1.3, a new method is added (thanks to Ron Yorston): a port may be specified for a remote echo service that will echo back the test data. This avoids the congestion and the aggravation of making sure all the port numbers on the remote machine do not collide. The loop-of-forwardings method remains available for situations where using an echo service may not be possible.

We are going to use autossh as the mechanism for our Kali Linux box to connect back to our home SSH server and attempt to maintain a persistent connection.  By default, the autossh package is not installed on the Raspberry Pi 2/3 Kali Linux image, so it must be installed.  The following is the command used to install the autossh package on Kali Linux:
[Run as root]
apt-get install autossh
Command explanation:
apt-get = The command invoking the standard package manager program for Debian based  linux operating systems (Debian, Kali, Ubuntu…) which facilitates the necessary functions such as installing, updating and removing packages.
install = The apt-get command modifier indicating installation of the subsequently specified packages.
autossh = The package we want to install.  More packages can be added using a space separated syntax (<package_1> <package_2>…<package_n> ).

Now that autossh is installed, the next logical step is to configure it.  This is actually quite simple since we know where our initial connection is destined (the home SSH server).  Keep in mind, the commands and scripts for the following are only examples.  You can make your commands and scripts as elaborate and resilient as you would like but for brevity’s sake, functionality will be the key for this script.  The following is a simple command for establishing a reverse ssh tunnel using autossh (not persistent):

autossh -nNT -o ServerAliveInterval=15 -R 6666:localhost:22 <user>@homessh.example.com &

Command explanation:
autossh = The primary package that will help maintain the reverse SSH tunnel for connectivity.
-nNT:
-n = Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background.
-N = Do not execute a remote command. This is useful for just forwarding ports (protocol version 2 only).
-T = Disables the default pseudo-tty allocation.  This is important to disable as we are scripting the SSH session and will not require the standard input output (stdin/stdout, e.g. keyboard, screen…).

-o ServerAliveInterval=15 = This parameter enables the option to send server keep alive messages every 15 seconds, in order to maintain the session.
-R = This is the option that actually facilitates the setting of up the reverse tunnel.
6666:localhost:22 = This string identifies TCP 6666 as the port the home SSH server will listen on; connections to that port are forwarded over the existing tunnel to localhost (127.0.0.1), that is, the Kali host itself, on TCP 22.
<user>@homessh.example.com = The standard form for connecting to any ssh server (<user>@<hostname/IP>); here it is the account and FQDN of the home SSH server.
& = Opens the SSH session in the background.  Note: If this modifier is not entered, the established session is contingent upon your existing session from which you are running the command.

If this command is successful, you can verify the appropriate connectivity is available by running the following:
[Run on Home SSH Server]
netstat -an | grep ':6666 \|1.1.1.1:22'
Command explanation:
netstat = command used to display the status of various network states.
-an = Options used to display all (-a) network listeners/connections in number (-n) format.
| = Passes the output (stdout) of a previous command to the input (stdin) of the next one, or to the shell. This is a method of chaining commands together.
grep = The defacto search tool in *nix for finding specified strings or regular expressions (regex).
':6666 ' = Searches the output for any lines that contain :6666.  If identified, it will display all lines in the output that contain the string :6666.
\| = Indicates a grep OR, meaning it will look for the previous string or the next string you pass to it.
'1.1.1.1:22' = The second parameter that grep will search for through the netstat output, as it is the string that follows the grep OR option.  This string filters for output that shows SSH sessions involving the home server’s IP address and SSH port.
The following is an example of the output that should display (Assume your local administrative client machine IP is 2.2.2.2):
tcp        0      0 127.0.0.1:6666              0.0.0.0:*                      LISTEN
tcp        0      0 1.1.1.1:22                        10.1.1.1:59751               ESTABLISHED
tcp        0      0 1.1.1.1:22                        2.2.2.2:41820               ESTABLISHED
tcp        0      0 ::1:6666                           :::*                               LISTEN

If you are able to verify that the appropriate sessions are established for using the reverse tunnel, use the following command to connect to the Kali linux machine via the reverse tunnel:
[Run from an SSH session on the home SSH server]
ssh -p 6666 lowprivssh@localhost
Command Explanation:
ssh = Command to invoke the ssh client on *nix systems.
-p 6666 = The TCP port (6666) to be used when connecting to the SSH server.
lowprivssh@localhost = Username at the destination server to be connected to.  Since the reverse tunnel is listening on the localhost (loopback IP) over TCP 6666, all connections to TCP 6666 will be forwarded to the Kali Linux machine.

Since we hardened the Kali SSH server, public key authentication is the only way to login.  If the appropriate public keys are not populated within the ~/.ssh/authorized_keys file of the user account, authentication will fail.  Once logged in, we can use the following command to login to the root environment:
su -

To help visualize what is happening here is a text diagram of what has happened to get this to work:
1)    Kali → Home SSH server:22 (Also creates reverse tunnel listener on localhost:6666)
2)    Tester_machine → Home SSH server:22
3)    Home SSH Server → Kali (over reverse tunnel 6666:localhost:22)

Given authentication success, session established with Kali SSH server.

Hopefully, the above explains the reverse tunnel in a nutshell.  Now that we have verified that the reverse tunnel is fully functional, implementing some persistence to this tunnel is necessary.  Before moving forward let’s kill the current sessions and autossh processes.
[Run as root on Kali machine]
pidof /usr/bin/ssh | xargs kill -9
Command explanation:
pidof = Native command for looking up process IDs within unix.
/usr/bin/ssh = The path of the command we are looking to stop/kill.  You could simply put autossh in lieu of the full path to ssh, however, it appears that when specifying only the parent process the child processes remain intact.  By killing all of the processes using /usr/bin/ssh, autossh is included, killing all necessary ssh processes.  If using this on an SSH server/client that needs to maintain other ancillary connections using port forwarding and the like, you may want to identify the relevant processes and kill them specifically.
| xargs kill -9 = This takes the arguments passed on by pidof and applies the kill -9, which will kill all of the relevant process IDs regardless of their state.
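
As a hedged alternative to the pidof/xargs pipeline, pkill can match on the full command line with -f; it is just as indiscriminate about which ssh processes it kills, so the same caveat applies:

pkill -9 -f autossh          # kill the autossh monitor itself
pkill -9 -f /usr/bin/ssh     # and the ssh processes it spawned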

Configure for persistence:
By no means will the following be the only way this can be done.  This is what worked at the time of this writing and should be taken with a grain of salt.  Up until now, pretty much all of the steps have utilized only command logic devoid of any custom scripting.  Unfortunately, the autossh package and existing SSH configurations did not behave with the anticipated resilience, even when scripting autossh to start at boot.  As luck would have it, whilst writing this post, a storm rolled through and caused the test Kali Linux box to restart and/or crash, as it was not connected to an uninterruptible power supply.  Even though reboot testing and the like had been performed successfully (probably not as thoroughly as it should have been), in this instance, the reverse tunnel had not re-established.  Further investigation revealed two major problems:
1)    The autossh process was attempting to start prior to the server obtaining a DHCP assigned IP address, essentially terminating and never attempting again.
2)    The home SSH server was not closing connections that were not gracefully shut down.  The sessions showed established for well over 30 minutes after the connection had been lost.
Luckily, the above conditions were overcome by implementing a little bit of help in logic and SSH server reconfiguration.  The following are the steps taken to maintain and or re-establish SSH tunnels to/from the home SSH server.

First we want to help ensure that autossh is turned up after a reboot and or power failure.  Two scripts were used to accomplish this task:
[On Kali Machine]
1)    /etc/rc.local
2)    Custom bash script placed in /etc/init.d/autossh.sh

The rc.local file is typically not used but is run once when the system is booted.  Since this script already runs at boot time, we don’t have to worry about applying the necessary configuration changes to get the custom script to run at boot.  Additional logic can be added as well if so desired.  The following is the output of the example /etc/rc.local script (the additions are the sleep and /etc/init.d/autossh.sh lines near the end):

vi /etc/rc.local

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

sleep 600
/etc/init.d/autossh.sh

exit 0

Notice the sleep command; it waits 10 minutes before running the autossh.sh script.  This gives the machine time to establish a network connection before autossh starts.  Without it, autossh's first connection attempt fails, the process terminates, and (given the options used in the scripted command) it never attempts a connection again.
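
A fixed 10-minute sleep is a blunt instrument.  If you prefer, a small polling loop in rc.local can launch autossh as soon as the box has a default route (a sketch, not the method used in this build; swap it in for the sleep 600 line and adjust the timeout as needed):

# Wait up to 10 minutes for a default route, checking every 10 seconds
for i in $(seq 1 60); do
    ip route | grep -q '^default' && break
    sleep 10
done
/etc/init.d/autossh.sh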

Now, to start our autossh process and make sure it remains running, a custom script was used.  There is much more that could be added, but this example should suffice for our purposes.  The following is the script used for starting and maintaining the autossh reverse tunnel to/from our home server:

vi /etc/init.d/autossh.sh
#!/bin/bash
# Define variables
svc='autossh'
DATETIME=$(date "+%B %d %H:%M:%S")

# Check whether the service named in $svc is running (output is suppressed; only the exit status matters).
pidof $svc > /dev/null
if [ $? != 0 ]; then
    # If the service is not running, log it and rebuild the reverse tunnel.
    echo "$DATETIME $HOSTNAME $svc is not running, attempting to connect home." >> /var/log/messages
    autossh -nNT -o ServerAliveInterval=15 -R 6666:localhost:22 [email protected] &
fi
# If the service is running, nothing is performed.  Adding an else statement above would allow for handling that condition if required.
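
One detail that is easy to miss: both rc.local and cron invoke this script by path, so it must be executable (and on systemd-based distributions like Kali, /etc/rc.local itself generally needs to be executable for the rc-local service to run it).  A quick sanity check:

chmod +x /etc/init.d/autossh.sh /etc/rc.local
ls -l /etc/init.d/autossh.sh /etc/rc.local    # verify the execute bits are set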

The last step in making sure this remains resilient is to schedule the /etc/init.d/autossh.sh script in cron.
[From the root account on the Kali machine]
crontab -e
Command explanation:
crontab = According to the crontab man page, it is the program used to install, deinstall or list the tables used to drive the cron(8) daemon in Vixie Cron.
-e = Opens the current user's crontab for editing.
You may be prompted to specify your editor of choice (nano or vim); select the text editor you are most comfortable with.  Once you have opened the crontab as the root user, add the following line to the end of the file to check on and maintain the reverse tunnel built by autossh:
* * * * * /etc/init.d/autossh.sh
Crontab line explanation:
* * * * * = The five fields represent (minute, hour, day of month, month, day of week); five asterisks schedule the subsequent script to run at the top of every minute, of every hour, of every day.
/etc/init.d/autossh.sh = The script to be scheduled.

You don’t have to run the script every minute; just know that the longer you wait between checks, the longer it will take to re-establish lost sessions.  More information regarding scheduling tasks in cron can be found here:
http://crontab.org
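
For example, a crontab entry along these lines checks every five minutes instead of every minute:

*/5 * * * * /etc/init.d/autossh.sh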

The following are some verification steps to be taken to make certain your build is resilient (connection wise):
1)    Monitor sessions using the aforementioned netstat command on the home SSH server to determine when the reverse tunnel re-establishes (an example command is sketched after this list).
a)    Run pidof /usr/bin/ssh | xargs kill -9 on the Kali box and verify if your tunnel is re-established.
b)    Reboot the Kali box and see if your tunnel is re-established.
c)    Disconnect the Kali box from the network for several minutes and verify if your tunnel is re-established.
d)    Pull the power from your Kali box, wait a few seconds, power up and verify if your tunnel is re-established.
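
In case the earlier netstat command isn't handy, something along these lines run on the home SSH server will show whether the reverse-tunnel session is established (a sketch; the port filter assumes the SSH daemon listens on 22):

netstat -antp | grep ':22'                        # established SSH sessions and the owning sshd PIDs
ss -tnp state established '( sport = :22 )'       # the equivalent check with the newer ss utility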

If all of the above tests are successful, your system exhibits resilience.  You now have a setup that can be put to practical use, such as an inexpensive and easily deployable penetration-testing platform (in our case, for play).

In the spirit of being thorough, the following are some interesting additions to this project:
1)    Provide a secure means for full disk encryption without requiring human authentication.
2)    If not using static IP addressing, add a dynamic DNS agent and FQDN to help identify the associated Kali machine.
3)    If no network connection is established, configure wifi to automatically connect to any open networks.
a)    Set a preference for Ethernet over Wifi.
b)    Write a program or bash script to attempt to crack nearby wifi networks, starting with the weakest available and working toward the strongest.  Once a network is cracked and its credentials are stored, connect to it; loop until successful, and upon disconnect, resume wifi cracking.
4)    Automate network discovery with NMAP.

The automation possibilities are seemingly endless.  Hopefully this article hasn’t muddied the waters too much, so to speak, and you had fun with it.

 

Helpful links:

http://www.amazon.com/s/ref=nb_sb_noss_2/177-2361236-4848657?url=search-alias%3Daps&field-keywords=raspberry+pi+3
https://www.offensive-security.com/kali-linux-arm-images/
http://docs.kali.org/downloading/kali-linux-live-usb-install
https://www.howtoforge.com/reverse-ssh-tunneling
http://www.cyberciti.biz/faq/howto-regenerate-openssh-host-keys/
https://gmorehou.wordpress.com/2013/10/26/change-ssh-host-keys-raspbian-cubian-premade-linux-virtual-machines/
https://raymii.org/s/tutorials/Autossh_persistent_tunnels.html
http://www.tecmint.com/command-line-tools-to-monitor-linux-performance/
http://www.hacking-tutorial.com/tips-and-trick/how-to-send-email-using-telnet-in-kali-linux/
https://help.dyn.com/ddclient/
http://souptonuts.sourceforge.net/postfix_tutorial.html
http://www.cyberciti.biz/tips/linux-unix-bsd-openssh-server-best-practices.html
http://www.brennan.id.au/12-Sendmail_Server.html
https://www.sdcard.org/developers/overview/speed_class/
http://docs.kali.org/kali-on-arm/install-kali-linux-arm-raspberry-pi
http://docs.kali.org/policy/kali-linux-root-user-policy
https://en.wikipedia.org/wiki/Chmod
https://www.freebsd.org/cgi/man.cgi?sshd_config%285%29
https://www.freebsd.org/cgi/man.cgi?query=login.conf&sektion=5&apropos=0&manpath=FreeBSD+10.3-RELEASE+and+Ports
https://en.wikipedia.org/wiki/SHA-2
https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
https://wiki.tools.ietf.org/id/draft-ietf-curdle-ssh-kex-sha2-02.html
https://www.ietf.org/rfc/rfc5656.txt
http://tools.ietf.org/html/rfc4419
http://askubuntu.com/questions/157779/how-to-determine-whether-a-process-is-running-or-not-and-make-use-it-to-make-a-c
https://www.godaddy.com/help/how-to-set-an-ssh-timeout-12300
http://linux.die.net/man/1/ssh
http://man.openbsd.org/ssh
http://tldp.org/LDP/abs/html/special-chars.html#PIPEREF
http://crontab.org/