One of the best ways to interact with a running container is via SSH. Everyone is familiar with SSH, so why not use it?
You may already have set up sshd in your containers, in which case you can skip this guide and simply use our generic noah guide. If not, or if you want to learn a few tricks (such as limiting the daemon to listening on the loopback interface only), carry on and follow the guide.
The objective of this step-by-step guide is to set up an SSHD daemon listening on port 22 on the loopback address. We do this by creating a new custom Docker image. The server will not listen to any client that is not local to the machine. Specifically, we want the noah agent to be able to forward SSH requests to a locally listening SSHD daemon.
In doing so, we will:

- pull the static SSHD daemon from the noah container image,
- generate a host key and an SSHD configuration in an intermediate build stage,
- assemble a final image that starts both the SSHD daemon and the noah agent,
- run the container and provide an SSH public key at run time.
Basic security considerations will be covered, but since each situation is unique, it is imperative that the overall security posture be properly reviewed. NearEDGE makes no claim that by following this step-by-step guide your deployment will be properly secured.
We are essentially running the SSHD daemon with as little dependency on the container's operating system as possible. We could run it using a distroless image; look at this discussion. We will, however, use Alpine Linux so that some terminal-level tools are available.
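As an illustration of the distroless alternative, here is a minimal, untested sketch; it assumes the static sshd binary truly has no runtime dependencies, and `gcr.io/distroless/static-debian12` is just one possible base image, not a recommendation from this guide:

```dockerfile
# Hypothetical distroless variant: no shell or terminal tools are present,
# so debugging inside the container becomes much harder.
FROM gcr.io/distroless/static-debian12
COPY --from=noah /opt/ne/sshd /opt/ne/sshd
```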
As indicated in this guide, a fully static OpenSSH SSHD daemon is available in the noah container image. If you want to use it, just pull it the same way you pulled the agent itself. Using your own SSH daemon (or the one provided by your base image) works perfectly well too!
To pull the daemon file from the noah image, use:

```dockerfile
COPY --from=<noah> /opt/ne/sshd </opt/sshd>
```

Replace <noah> with any stage name you use. See this page on the Docker website to learn more about multi-stage builds.
You may use any final destination and filename as you wish.
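For example, a minimal two-stage Dockerfile built around this COPY might look like the following sketch (the stage name and destination are placeholders, as above):

```dockerfile
# Stage providing the static binaries; use YOUR organization's repository.
FROM <image> as noah

# Final image: copy only the static sshd out of the noah stage.
FROM alpine:latest
COPY --from=noah /opt/ne/sshd /opt/sshd
```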
At a minimum, the sshd daemon requires a configuration file and host key(s). We create them in an intermediate build stage, using:
```dockerfile
#
# Generate an SSH host key in a dedicated build stage
#
FROM alpine:latest as alpine
# Host key
RUN apk add openssh-server && ssh-keygen -q -t rsa -N "" -C "" -f /etc/ssh/ssh_host_rsa_key
# SSHD configuration
RUN echo "#Port 22" >/sshd_config && \
    echo "#AddressFamily any" >>/sshd_config && \
    echo ListenAddress 127.0.0.1 >>/sshd_config && \
    echo HostKey /etc/ssh/ssh_host_rsa_key >>/sshd_config && \
    echo PermitRootLogin yes >>/sshd_config && \
    echo PasswordAuthentication no >>/sshd_config && \
    echo KbdInteractiveAuthentication no >>/sshd_config && \
    echo PrintMotd no >>/sshd_config && \
    echo AllowTcpForwarding no >>/sshd_config
```

You can also provide your runtime files via Docker volumes or mounts.
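As a sketch of that volume/mount approach, something like the following could work; the host paths are placeholders, and the container paths assume the layout used in the final Dockerfile below:

```sh
# Supply the SSHD configuration and host key at run time instead of
# baking them into the image; the files must be readable by sshd.
docker run --detach \
    --volume <hostdir>/sshd_config:/opt/ne/sshd_config:ro \
    --volume <hostdir>/ssh_host_rsa_key:/etc/ssh/ssh_host_rsa_key:ro \
    <imagename>
```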
In the SSHD configuration above, we only permit key-based authentication. There are, however, no public keys available yet; we will provide them at run time (see below).
We also did not create any new users. For a demo, the root user is good enough. In your real use case you may want to manage users, but that is outside the scope of this step-by-step guide.
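That said, if you prefer a dedicated account over root, a minimal sketch using Alpine's busybox adduser might look like this; the user name demo is purely hypothetical, and you would then also want to revisit PermitRootLogin in the configuration above:

```dockerfile
# Create an unprivileged user with a home directory for authorized_keys.
RUN adduser -D -s /bin/sh demo && \
    mkdir -p /home/demo/.ssh && \
    chown -R demo:demo /home/demo/.ssh
```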
Here is our final Dockerfile. We include both the noah agent and the sshd daemon in the custom image, but other scenarios are possible. In particular, the side-car model (Kubernetes) and the associated-container model (Docker) work very well and keep the noah agent in a separate container instance.
You can learn more about the noah agent and its inclusion in a custom image by following this guide.
```dockerfile
#
# Get the noah agent image
#
# Use the repository specific to YOUR organization.
FROM <image> as noah
#
# Generate an SSH host key in a dedicated build stage
#
FROM alpine:latest as alpine
# Host key
RUN apk add openssh-server && ssh-keygen -q -t rsa -N "" -C "" -f /etc/ssh/ssh_host_rsa_key
# SSHD configuration
RUN echo "#Port 22" >/sshd_config && \
    echo "#AddressFamily any" >>/sshd_config && \
    echo ListenAddress 127.0.0.1 >>/sshd_config && \
    echo HostKey /etc/ssh/ssh_host_rsa_key >>/sshd_config && \
    echo PermitRootLogin yes >>/sshd_config && \
    echo PasswordAuthentication no >>/sshd_config && \
    echo KbdInteractiveAuthentication no >>/sshd_config && \
    echo PrintMotd no >>/sshd_config && \
    echo AllowTcpForwarding no >>/sshd_config
RUN mkdir /noah
#
# Final image stage (our target image) - We use a separate
# stage to keep the image size small.
#
FROM alpine:latest
# Persistent location for noah
COPY --from=alpine /noah /noah
COPY --from=alpine /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_rsa_key
COPY --from=alpine /sshd_config /opt/ne/sshd_config
# Static binaries
COPY --from=noah /opt/ne/sshd /opt/ne/sshd
COPY --from=noah /opt/ne/chainstart /opt/ne/chainstart
COPY --from=noah /opt/ne/noah /opt/ne/noah
#
# Start sshd and noah - Using chainstart
#
ENTRYPOINT ["/opt/ne/chainstart", \
    "--", "/opt/ne/sshd", "-f", "/opt/ne/sshd_config", \
    "--", "/opt/ne/noah", "--dataloc", "/noah"]
```

The Dockerfile above only defines the noah agent and SSHD services; no other service is present. In a real-life scenario, you would also include your application. You may also run this image as a side-car or associated container. See this guide for a discussion on this topic.
To start using the SSHD daemon we just added, build the image and run a container.
The build is simple:
```sh
cd <locationOfYourDockerfile>
docker build --tag <imagename> .
```

Then, running the container can be done using something similar to:
```sh
docker run --detach --volume <volumename>:/noah --volume <somepath>:/root --name <imagename> <imagename>
```

<somepath> is a host directory that replaces the image's user (root) home directory. You simply need to drop an SSH public key at <somepath>/.ssh/authorized_keys
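For example, a sketch of preparing that key material on the host; the key type and file names are arbitrary choices, not requirements of noah:

```sh
# Generate a client key pair on your workstation.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/noah_demo
# Drop the public key where the container's root account will find it.
mkdir -p <somepath>/.ssh
cp ~/.ssh/noah_demo.pub <somepath>/.ssh/authorized_keys
# sshd is strict about permissions on these files.
chmod 700 <somepath>/.ssh
chmod 600 <somepath>/.ssh/authorized_keys
```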
At this stage, the noah agent should communicate with the Control center and should be visible in your dashboard. Before using an SSH client, you must complete the following:
The Access Gateway must also have port 22 included in its Forwarding port(s) list.
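Once that is done, connecting with a plain OpenSSH client might look like the following sketch; the gateway host and port are hypothetical placeholders that depend entirely on your Access Gateway setup, and the key file refers to the example pair generated above:

```sh
# Connect as root through the Access Gateway (host/port are placeholders).
ssh -i ~/.ssh/noah_demo -p <forwardedport> root@<gatewayhost>
```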
On an initial run, it may take a minute or two before the new instance appears in the dashboard. If it does not, check the following:
It usually takes a little longer for the new instance to appear in the Access Gateway than in the dashboard. Assuming that the Access Gateway otherwise functions properly, you should check the following items:
Presuming that the Access Gateway and its configuration function properly, the issue(s) should normally be related to the container configuration. Check the following:
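A few generic Docker commands can help with that last check; these are sketches assuming the image built above, with placeholder names:

```sh
# Look for sshd or noah start-up errors.
docker logs <imagename>
# Confirm sshd is listening, and only on the loopback address.
docker exec <imagename> netstat -tln
# Confirm the public key is visible where sshd expects it.
docker exec <imagename> ls -l /root/.ssh/authorized_keys
```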