Sunday, February 15, 2015

Backing up sqlite to another machine

Much has been written about backing up mysql/mariadb (mysqldump anyone?), but what about lowly sqlite? It might not have as many features and does not run as its own process, but sqlite is rather useful in its own right (embedded anyone?). If you have an Android phone, you are running sqlite. None of that means its data is not worth saving, so let's do some saving.

I will call the backup server backupbox and the machine running sqlite webserver. Note that a sqlite db usually does not run as a separate server like other databases do, which is why in this example we treat it as the backend to some website.

Requirements and Assumptions

  1. sqlite3. After all, it is a bit challenging backing up a database if you do not have the database.
  2. Path to the database file you want to back up. Remember, sqlite is primarily a file; if you do not know where the file is, backing it up might pose a few challenges. In this example, it is found at /var/local/www/db/stuff.db (we will do a quick sanity check of these first two requirements right after this list).
  3. Both backupbox (IP 192.168.42.90) and webserver are running Linux or OSX. If there is interest, we can talk later about what to do when one (or both) of them runs Windows.
  4. We will break the backup process into two steps: backing up the database (to a local drive) and then copying that backup to the backup server. The reason is that we can then schedule each step to run at a time convenient to it. Also, if one of the steps fails, it is much easier to deal with that step alone. I know monolithic do-everything-plus-clean-the-kitchen programs are the hip and fashionable solution nowadays, but this is my blog and I prefer simple and (hopefully) easy-to-maintain solutions whenever possible. Deal with it.
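Speaking of which, here is that quick sanity check of requirements 1 and 2, using the example path above: make sure sqlite3 is installed and can actually read the file we intend to back up.

sqlite3 --version
sqlite3 /var/local/www/db/stuff.db "PRAGMA integrity_check;"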

Procedure

  1. On the server to be backed up, webserver:
    1. Create backup user
      useradd -m backupsqlite -G www-data
      sudo -u backupsqlite mkdir -p /home/backupsqlite/.ssh
      touch /home/backupsqlite/.ssh/authorized_keys
      chown backupsqlite:backupsqlite /home/backupsqlite/.ssh/authorized_keys
      
    2. Create a script to dump the database into ~backupsqlite/sqlite-backup.bz2.
      cat > /usr/local/bin/backupsqlite << 'EOF'
      #!/bin/bash
      BACKUP_USER=backupsqlite
      DATABASE=/var/local/www/db/stuff.db
      
      sqlite3 ${DATABASE} ".dump" |
          bzip2 -c > /home/${BACKUP_USER}/sqlite-backup.bz2
      chmod 0600 /home/${BACKUP_USER}/sqlite-backup.bz2
      EOF
      chown backupsqlite: /usr/local/bin/backupsqlite
      chmod 0700 /usr/local/bin/backupsqlite
    3. Run the above script manually as user backupsqlite. Do not continue until this step is successful.
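      In other words, something like this (and then peek at the result to make sure the dump is sane):
      sudo -u backupsqlite /usr/local/bin/backupsqlite
      sudo -u backupsqlite bzcat /home/backupsqlite/sqlite-backup.bz2 | head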
    4. Now that you know your hard work paid off, how about running this script once a day? Maybe at 3:00am the natives will be quiet enough that you can safely run the backup script:
      cat > /etc/cron.d/backupsqlite << 'EOF'
      MAILTO=admin@example.com
      0 3 * * *  backupsqlite    /usr/local/bin/backupsqlite
      EOF
  2. On the backup server, backupbox:
    1. Create an ssh key pair to authenticate the connection. Note we are assuming we will be running this script from backupbox's root account; that is not required and probably not the smartest thing to do, but it will work fine for our little test. You could use Kerberos or LDAP or something else instead, but then you would need to adjust the steps accordingly.
      ssh-keygen -f /root/.ssh/sqlitebackup-id-rsa
      You will need to copy sqlitebackup-id-rsa.pub to webserver and place it in ~backupsqlite/.ssh/authorized_keys by any means you want. If you are a better typist than me, you could even enter it manually.
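      For instance, something like this would do (someadmin being a stand-in for whatever account you normally use to administer webserver):
      scp /root/.ssh/sqlitebackup-id-rsa.pub someadmin@webserver.example.com:/tmp/
      # Then, on webserver, as root:
      cat /tmp/sqlitebackup-id-rsa.pub >> ~backupsqlite/.ssh/authorized_keys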
    2. Test: can you retrieve the backup file?
      rsync -az -e "ssh -i /root/.ssh/sqlitebackup-id-rsa " \
      backupsqlite@webserver.example.com:sqlite-backup.bz2 .
      We can restrict this connection later. Right now let's just make sure this step works. If not, find out why before continuing.
    3. Now let's create the script that will get the database and put it, say, in /export/backup/databases
      cat > /usr/local/bin/backupsqlite << 'EOF'
      #!/bin/bash
      BACKUP_USER=backupsqlite
      BACKUP_PATH='/export/backup/databases'
      DATABASE_SERVER=webserver.example.com
      DATABASE=sqlite-backup.bz2
      KEY='/root/.ssh/sqlitebackup-id-rsa'
      
      cd $BACKUP_PATH
      rsync -az -e "ssh -i $KEY " $BACKUP_USER@$DATABASE_SERVER:$DATABASE .
      EOF
      chmod 0700 /usr/local/bin/backupsqlite

      Test it before continuing. Note there are other ways to do this step, like adding the above rsync statement to a larger backup script; I tried to hint at that, and at the fact that this could be the start of a function that loops over all the servers you need to grab backup files from. So the final implementation is up to you.

    4. If this backup is to be run independently, create a cron job to run it at a convenient time. How does 11:45 in the evening sound?

      cat > /etc/cron.d/backupsqlite << 'EOF'
      MAILTO=admin@example.com
      45 23 * * *  root    /usr/local/bin/backupsqlite
      EOF

      Otherwise, tell your main backup script about this new entry.

  3. And back to webserver:
    1. Now it is time to make access to the backup user more restricted. We will be rather lazy here: we will start with what we did in a previous article and make a few changes here and there.
      sudo -u backupsqlite mkdir -p /home/backupsqlite/cron
      cat > /home/backupsqlite/cron/validate-rsync << 'EOF'
      #!/bin/sh
      case "$SSH_ORIGINAL_COMMAND" in
      rsync\ --server\ --sender\ -vlogDtprze.iLsf\ --ignore-errors\ .\ sqlite-backup.bz2)
      $SSH_ORIGINAL_COMMAND
      ;;
      *)
      echo "Rejected"
      ;;
      esac
      EOF
      chmod +x /home/backupsqlite/cron/validate-rsync
      You probably noticed that I gave the scripts running on both machines the same name. Why not? If you do not like that, change it!
    2. Then we need to modify /home/backupsqlite/.ssh/authorized_keys to tell it to accept connections only from our backup server using our key pair, and then to allow only the validate-rsync script to be run. In other words, add something like
      from="192.168.42.90",command="/home/backupsqlite/cron/validate-rsync"
      to the beginning of the line in /home/backupsqlite/.ssh/authorized_keys containing the public key.
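Once backups are flowing, it is also worth checking every so often that you can actually restore from them. A minimal sketch of a restore test on backupbox, rebuilding the database from the dump into a scratch file:

bzcat /export/backup/databases/sqlite-backup.bz2 | sqlite3 /tmp/stuff-restored.db
sqlite3 /tmp/stuff-restored.db ".tables"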

References

The Sqlite dump command

Thursday, February 12, 2015

Talking to MySQL/MariaDB from powershell using ODBC

This was a bit frustrating to me. Let's say you have a MySQL or a MariaDB (whose names I will write in all lowercase from now on because I am typing challenged) database running somewhere in a Windows/Linux/OSX/Solaris/AIX box. And you need to reach it, say, to retrieve the value of some row in a table somewhere. That does not sound like much of a challenge: unleash Python or even Bash and you should be able to create a script to do the deed. But what if you had to do so from a Windows machine?
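For the record, on the Linux side the deed might look something like the following sketch, which uses the mysql command line client and the example database we will meet shortly (server address and credentials are, of course, made up):

mysql -h 10.0.0.113 -u admin -psupersecret -N -B \
      -e "SELECT happy_state FROM penguium.stats WHERE name = 'Angry Penguin of Doom';"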

If you have read any of my previous Windows-related posts, you know that I will probably try to find a way to do the deed in Powershell. And you will also know the reason I do that is so working on Windows feels more Linuxy to me. This article will not break with that precedent: there will definitely be some powershell involved. You have been warned!

The first thing we will need is a way to connect using ODBC, so we need to get an ODBC Connector. The one I used came directly from mysql, and it was called the Windows (x86, 64-bit), MSI Installer. After installing it, we do not configure it; instead, we will do that from the script itself. All we need is the driver name. Since I forgot the name of the .msi package and driver, I decided to ask Windows directly: I went to Control Panel->Administrative Tools->Data Sources (ODBC) and clicked on System DSN->Add. At the time this article was written, the driver name we want is "MySQL ODBC 5.3 ANSI Driver". Yes, you want to save the entire string verbatim. With that recorded, we close the little non-resizeable ODBC-related dialog boxes and move forward.

Now let's talk about what frustrated me: at first I thought I had to create a System DSN and then feed that to my script. Big mistake; I could not get that working no matter how hard I tried. Realizing I was going nowhere slowly, I instead decided to do the deed directly from Powershell. So, let's start by writing a powershell script that only tries to connect to the database penguium (the penguin continuum?) on IP 10.0.0.113 as user admin and then checks if the connection was successful. Here is what the database looks like:

mysql> select * from penguium.stats;
+-----+-----------------------+-------------+
| pid | name                  | happy_state |
+-----+-----------------------+-------------+
|   1 | Fat Penguin           | Relaxed     |
|   2 | Doctor Penguin, M.D.  | Pensive     |
|   3 | Angry Penguin of Doom | Angry       |
|   4 | Lazy Penguin          | Not Present |
+-----+-----------------------+-------------+
4 rows in set (1.07 sec)

mysql>
We will be asking for help from .NET, specifically from the framework data provider for ODBC, System.Data.ODBC. My script looks like this (real IPs and username/password changed to protect the innocent):

$SQLServer = "10.0.0.113"
$SQLDBName = "penguium"
$SQLUser = "admin"
$SQLPW = "supersecret"

# Connect to the database
$SqlConnection = New-Object System.Data.ODBC.ODBCConnection
$SqlConnection.connectionstring = `
   "DRIVER={MySQL ODBC 5.3 ANSI Driver};" +
   "Server = $SQLServer;" +
   "Database = $SQLDBName;" +
   "UID = $SQLUser;" +
   "PWD= $SQLPW;" +
   "Option = 3"
$SqlConnection.Open()

$SqlConnection.State

The first thing I want to draw your attention to is this line:

"DRIVER={MySQL ODBC 5.3 ANSI Driver};" +
If it reminds you of the driver name we made a point to write down, you are absolutely correct. We are telling Windows which ODBC driver we want to use. The next few variables -- server, database, UID, and PWD -- normally would be defined if we had created and configured a System DSN. Which we didn't, because, as I mentioned above, I tried to do so and it gleefully ignored me. Do note this is not the only way, or the safest, to authenticate, but it is the laziest. In fact, let me say this once again: my example here is great as an example only; it is insecure (no encryption, and the credentials are there for all to see). Go read the references, especially the Microsoft docs, for how to make this mess more secure.

The next step is to create/open the connection to the db server.

$SqlConnection.Open()
And then we ask, using the command
$SqlConnection.State
to report on whether the connection was indeed opened. A successful connection will send
Open
as the reply. Now that we know the connection works, let's find out how happy Angry Penguin of Doom is feeling this morning. If we were logged into the mysql server, we would do that by saying
mysql> select happy_state from penguium.stats where name = 'Angry Penguin of Doom';
+-------------+
| happy_state |
+-------------+
| Angry       |
+-------------+
1 row in set (0.03 sec)

mysql>

Reading from database

Since we have a beautiful script above, let's send that query by adding the following lines to our script
$SqlQuery = "select happy_state from penguium.stats where name = 'Angry Penguin of Doom'"
$Command = New-Object `
    System.Data.ODBC.ODBCCommand($SQLQuery, $SQLConnection)
$Reply = $Command.executescalar()
$Reply

If we run the script again, we would see something like

Open
Angry
where the first line is the output of $SqlConnection.State just like before and the second line (Angry) is the value of the happy_state column for the row associated with our penguin.

If we expected seeing more than one reply to our query, we could do something like this instead:

$Reply = $Command.executereader()
while($reply.Read()){ $reply.GetString(0) }

What we are doing in the while line is retrieving the first column ($reply.GetString(0)) of each row in the reply to our query. Since there is only one entry, that is all we get back. I'll leave getting all the columns, or just the next one, as an exercise.

Editing database

So far so good? Now, how about making poor Angry Penguin of Doom happier? Since this is not real life, nor a real penguin, we can bring a smile to its lips with the power of sql:

mysql> UPDATE penguium.stats SET happy_state='Happy' WHERE name='Angry Penguin of Doom';
Query OK, 1 row affected (2.67 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select * from penguium.stats;
+-----+-----------------------+-------------+
| pid | name                  | happy_state |
+-----+-----------------------+-------------+
|   1 | Fat Penguin           | Relaxed     |
|   2 | Doctor Penguin, M.D.  | Pensive     |
|   3 | Angry Penguin of Doom | Happy       |
|   4 | Lazy Penguin          | Not Present |
+-----+-----------------------+-------------+
4 rows in set (0.03 sec)

mysql>
So let's see if we can do that in powershell. I am going to cheat and copy how we made the query in the previous section, changing things as needed:
$SQLCommand = "UPDATE penguium.stats SET happy_state='Happy' WHERE name='Angry Penguin of Doom'"
$Command = New-Object `
    System.Data.ODBC.ODBCCommand($SQLCommand, $SQLConnection)
$Command.executescalar()

Putting It All Together or Something Like That

We have now seen how to read and write data to the database using ODBC and powershell. How about we prettify it a bit and convert it into a script that monitors the database and changes it as needed? Say, if the Angry Penguin of Doom is not happy, we will make it so.

$SQLServer = "10.0.0.113"
$SQLDBName = "penguium"
$SQLUser = "admin"
$SQLPW = "supersecret"
$ODBCDriver='{MySQL ODBC 5.3 ANSI Driver}'

$SqlQuery = "select happy_state from penguium.stats where name = 'Angry Penguin of Doom'"
$SQLCommand = "UPDATE penguium.stats SET happy_state='Happy' WHERE name='Angry Penguin of Doom'"

# Connect to the database
function DBConnect ($server, $driver, $dbname, $user, $password)
{
   $Connection = New-Object System.Data.ODBC.ODBCConnection
   $Connection.connectionstring = `
      "DRIVER= $driver;" +
      "Server = $server;" +
      "Database = $dbname;" +
      "UID = $user;" +
      "PWD= $password;" +
      "Option = 3"
   $Connection.Open()
   return $Connection
}

# Send a Command to the database and get the reply
function SendQuery ($Query, $Connection)
{
   $Command = New-Object `
      System.Data.ODBC.ODBCCommand($Query, $Connection)
   return $Command.executescalar()
}

# Query DB and update it as needed
$SQLConnection = DBConnect $SQLServer $ODBCDriver $SQLDBName $SQLUser $SQLPW

$answer = SendQuery $SqlQuery $SQLConnection

if ( $answer -ne "Happy" )
{
   # Let us know. Could email us instead
   "Crap: $answer"

   # Change it!
   SendQuery $SqlCommand $SQLConnection
}

Note that the criterion we used was to do something if the penguin is not happy; we could just as well have tested only whether he was angry (if ( $answer -eq "Angry" )). The above code could be improved by adding
try{} catch{} in appropriate locations and optionally contacting (email?) whoever needs to know when a change is required, but I am not in the mood to do that right now.

References


Wednesday, February 04, 2015

Creating a git server in docker... with NFS and custom port and ssh key pair

I usually like to start my posts by describing what we will try to accomplish, but I think I can't do any better than what the title states. So let's see if I can come up with a convincing excuse instead. Well, all I can come up with right now is that I think it is wasteful to create an entire VM to run a distributed version control system. At least one that does not have helpful paperclips with eyes and other features requiring you to download crap. And it is nice to know that if the docker host (or cloud) takes a dump, we can bring this service back rather quickly. For this article we will use git; some other time we can talk about svn.

The git server I will be using is gitolite because it is reasonably simple and quick to manage and get going. What I really like about it is that the accounts for the users using git are not accounts on the host itself, so they cannot log in to the machine hosting git. By default the git users log in using ssh key pairs, which is what we will set up below.

Since I am lazy, I will store the repositories in a NFS fileshare that is mounted into the container at runtime. We talked about how to do the mounting in a previous article.

Assumptions

  1. gitolite running off /home/git
  2. We will connect to the git server on port 2022. I chose that because I need port 22 to ssh into the docker host. Yes, they are on different IPs (in my case completely different VLANs), but I am weird like that.
  3. We will use ssh key pair authentication, and will use a different key than the default one. Note you could authenticate against LDAP, but that would go against what I wrote in the title of this article.
  4. gitolite being run as user git
  5. I am running this on CentOS 6.
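As an aside, preparing that fileshare on the NFS server (see step 2 of the install below) might look roughly like this; the export path and options are placeholders, so adjust them to your own NFS layout:

# On fileserver.example.com; uid 1201 is the git user inside the container
mkdir -p /git
chown 1201:1201 /git
echo '/git  gitserver.example.com(rw,sync)' >> /etc/exports
exportfs -ra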

Install

  1. I created a CNAME for the docker host, gitserver.example.com, so it looks pretty.
  2. In the NFS server, create a fileshare owned by the user git, which in this example has uid=1201.
  3. We will need to create a ssh key pair for the gitadmin. I created my pair by doing something like
    ssh-keygen -t rsa -C gitadmin -f ~/.ssh/gitadmin
    You will need to copy ~/.ssh/gitadmin.pub into the docker host by whatever means you desire.
  4. I created a directory in the docker host to put all the files (docker-entrypoint.sh and Dockerfile) related to this container. Here is the Dockerfile:
    ############################################################
    # Dockerfile to build a gitolite git container image
    # Based on CentOS
    ############################################################
    
    # Set the base image to CentOS
    FROM centos:centos6
    
    # File Author / Maintainer
    MAINTAINER Mauricio Tavares "raubvogel@gmail.com"
    
    ################## BEGIN INSTALLATION ######################
    # We need epel
    RUN rpm -Uvhi http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm && \
        sed -i -e 's/^enabled=1/enabled=0/' /etc/yum.repos.d/epel.repo
    
    # We need NFS, openssh, and git
    # And ssmtp (from Epel)
    RUN yum update -y && yum install -y \
            git \
            nfs-utils \
            openssh-server && \
        yum install -y ssmtp --enablerepo=epel
    
    # Configure NFS
    RUN sed -i -e '/^#Domain/a Domain = example.com' /etc/idmapd.conf
    
    ##################### INSTALLATION END #####################
    
    # Create git user
    RUN adduser -m -u 1201 git
    
    # Configure ssmtp
    
    # Configure sshd
    RUN sed -i -e 's/^#Port .*$/Port 2022/' \
               -e 's/^#PermitRootLogin .*$/PermitRootLogin no/' \
               /etc/ssh/sshd_config && \
        sed -i -e \
            's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' \
            /etc/pam.d/sshd && \
        ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa && \
        ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
    
    # And a mountpoint for repositories
    # Note: can't NFS mount from dockerfile, so will do it in an entrypoint script
    RUN su - git -c 'mkdir repositories'
    
    
    # And now the git server
    # Gitolite admin: gitadmin (it is based on the name of the pub key file)
    RUN su - git -c 'mkdir bin' && \
        su - git -c 'git clone git://github.com/sitaramc/gitolite' && \
        su - git -c 'mkdir -m 0700 .ssh' && \
        su - git -c 'echo "ssh-rsa AAAAB3NzaC1yc2EAASLDAQCOOKIEQDehf5hxGq9//34yrsL
    [...]
    7CfSpbiP gitadmin" > .ssh/gitadmin.pub'
    # The rest will be configured in the entrypoint script
    
    # Put the entrypoint script somewhere we can find
    COPY docker-entrypoint.sh /entrypoint.sh
    ENTRYPOINT ["/entrypoint.sh"]
    
    EXPOSE 2022
    # Start service
    CMD ["/usr/sbin/sshd", "-D"]
    

    Where

    1. You will need to put the public key gitadmin.pub between the double quotes in the line beginning with su - git -c 'echo "ssh-rsa.
    2. I am running a lot of things in the Dockerfile as user git.
    3. The NFS setup was mentioned before, so I will not bother with it right now.
    4. I forgot to add the setup for ssmtp. I will think about that sometime later.
  5. The docker-entrypoint.sh file looks vaguely like this:
    #!/bin/sh
    set -e
    
    # Mount git's repositories
    mount.nfs4 fileserver.example.com:/git /home/git/repositories
    
    su - git -c 'gitolite/install -to $HOME/bin'
    # setup gitolite with yourself as the administrator
    su - git -c 'gitolite setup -pk .ssh/gitadmin.pub'
    
    # And we are out of here
    exec "$@"
  6. So far so good? Ok, so let's build the image. I will call it git.
    docker build -t git .
    If the last build message looks like
    Successfully built 6fb1ac15b47a
    chances are the build was successful and you can go to the next step. Otherwise, figure out what went boink.
  7. Now start the service. Remember you need to run in privileged mode because of NFS. Since this is a test, I am calling the container test-git.
    docker run --privileged=true -d -p 2022:2022 --name test-git git
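    A quick sanity check (not part of the recipe proper) that the container is up and sshd answers where we expect:
    docker ps | grep test-git               # is the container running?
    docker logs test-git                    # did the entrypoint script complain?
    ssh -p 2022 git@gitserver.example.com   # sshd should at least answer on port 2022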

Setup and Testing

  1. Let's start testing by seeing what repositories we can see as an admin:
    $ /usr/bin/ssh -i /home/raub/.ssh/gitadmin -p 2022 git@gitserver.example.com info
    hello gitadmin, this is git@docker running gitolite3 v3.6.2-12-g1c61d57 on git 1.7.1
    
     R W    gitolite-admin
     R W    testing
    
    
    FYI, testing is a repo that everyone allowed to use the git server can play with. Think of it as, as its name implies, a test repo. Since that worked, we can proceed to the next step.
  2. Now we should edit our ~/.ssh/config file so we can reach the gitolite-admin repository without extra typing.
    cat >> ~/.ssh/config << EOF
    Host gitadmin
            Hostname        gitserver.example.com 
            User            git
            Port            2022
            identityfile    /home/raub/.ssh/gitadmin
            protocol        2
            compression     yes
    EOF
    Yes, you can ask about what to do if you have a script that needs to pull stuff out of a repo, and I will tell you to wait for the next installment. This article deals with getting it to work.
  3. Retrieving the admin repository is now much easier. So, instead of having to do something like
    git clone ssh://git@gitserver.example.com:[port]/gitolite-admin
    which would also require us to feed the key (or rename it as the default which IMHO is a bad idea), thanks to the previous step we can now lazily do
    git clone gitadmin:/gitolite-admin
  4. Repository config file is in gitolite-admin/conf/gitolite.conf
  5. Adding a new user (and perhaps a repository)
    1. Get the user's ssh public key, say raub.pub. How did the user create the ssh key pair? Don't know, don't care.
    2. Copy raub.pub to gitolite-admin/keydir. NOTE: the file must be named after the username the user will use to connect to the git server; it does not need to have anything to do with the user's normal/real username.
    3. Create a repository for the user. Let's give it a nice and snuggly name, like somethingawful
      cat >> conf/gitolite.conf << EOF
      
      repo somethingawful
          RW      = raub
      EOF
    4. Commit changes
      git add conf keydir
      git commit -m 'Added new user and config a repo'
      git push origin master
  6. Now let's pretend we are the user (i.e., this is what the user should do/see). Which repos can the user see? If this looks like a step we did before, it is; we are just using a new user.
    $ ssh -i /home/raub/.ssh/raub -p 2022 git@gitserver.example.com info
    hello raub, this is git@docker running gitolite3 v3.6.2-12-g1c61d57 on git 1.7.1
    
    
     R W    somethingawful
     R W    testing
    
    There is somethingawful!
  7. Let's grab somethingawful (assuming the user added a Host entry similar to the gitadmin one, here called gituser, to their own ~/.ssh/config pointing at their own key):
    $ git clone gituser:/somethingawful
    Cloning into 'somethingawful'...
    warning: You appear to have cloned an empty repository.
    Checking connectivity... done.
  8. Edit, commit, and off you go
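    Which, for completeness, might look something like this (file name and commit message made up, of course):
    cd somethingawful
    echo 'Hello from somethingawful' > README
    git add README
    git commit -m 'First commit'
    git push origin master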

I will put all those files in my github account later on, and shall sneakily update this post afterwards.

Sunday, February 01, 2015

Entrypoint scripts and data persistency: mounting a NFS fileshare inside a docker container

This is a quick article; you have been warned.

The traditional way to have persistent data in a docker container is to feed it a volume from the docker host. You begin by locating a directory, be it on the local docker host drive, network mounted to the docker host, or a volume in another docker container. Then you feed it to the container using the VOLUME statement in the Dockerfile or on the command line. We all know that. But what some (very few, we hope) of you might not have been aware of is that the volume is only mounted/attached to the container when you tell docker to run the container, which might make running/configuring something that requires files in those volumes a bit challenging.
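For reference, the command line flavor of that looks something like this (the image name and paths are just placeholders to illustrate the idea):

docker run -d -v /srv/www-data:/www/some-website --name web some-image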

At this point I would expect one of you to shout "Aha! But you can then use an entrypoint script to do all the monkeying that needs to happen after the volume is mounted and before whatever service this container provides starts!" And you would be quite right! Here's a quick example in case we lost somebody: if our Dockerfile ended up like this:

# Put the entrypoint script somewhere we can find
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

EXPOSE 22
# Start service
CMD ["/usr/sbin/sshd", "-D"]

we would have copied a file called docker-entrypoint.sh into the container as /entrypoint.sh, which is run just after the volume is mounted and before, in this example, the sshd service is started. Said entrypoint file could do, I don't know, something like this:

#!/bin/sh
set -e

# Do something in /home, which is the mounted volume
/do-something-at-home.sh

# And we  are done here
exec "$@"

The point is that do-something-at-home.sh is called only after the persistent volume is mounted into the container.

What if you want to mount a network fileshare straight into the container? Before I have time to answer that, someone from the audience will jump up and state "Why would you want to do that? You can mount the network fileshare on the docker host and then use VOLUME just like the Wicked Witch of the West intended!" What if I don't want to do that? What if I can't, for whatever reason, mount a network fileshare on the docker host?

The answer is that it is completely doable; you just have to ask the entrypoint script to do the mounting for you. To show what I mean I will use the example of mounting a NFSv4 fileshare that is supposed to be used by some website. So we modify our little entrypoint script a bit

#!/bin/sh
set -e

# /export/www was created in the Dockerfile
mount.nfs4 fileserver.example.com:/export/www /www/some-website 

# And we are done here
exec "$@"

Since we are using NFSv4, chances are you might need to add something like

RUN sed -i -e '/^#Domain/a Domain = example.com' /etc/idmapd.conf

to your Dockerfile, in addition to telling it to install the nfs client packages, but the general idea should apply to, say, SMB or ZFS or whatever other network filesystem you fancy: let your entrypoint script do the heavy lifting!
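If you want to test the mounting part by hand before wiring it into the entrypoint script, something along these lines should do (the image name is a placeholder; the export path is the one used above):

docker run --privileged=true --rm -it some-image /bin/sh
# then, inside the container:
mkdir -p /www/some-website
mount.nfs4 fileserver.example.com:/export/www /www/some-website
ls /www/some-website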