This was a bit frustrating to me. Let's say you have a MySQL or a MariaDB (whose names I will write in all lowercase from now on because I am typing challenged) database running somewhere in a Windows/Linux/OSX/Solaris/AIX box. And you need to reach it, say to retrieve the value of some row in a table somewhere. That does not sound like much of a challenge: unleash Python or even Bash and you should be able to create a script to do the deed. But what if you had to do so from a Windows machine?
If you have read any of my previous Windows-related posts, you would know that I will probably try to find a way to do the deed in PowerShell. And you will also know the reason I do that is so working on Windows feels more Linuxy to me. This article will not break that precedent: there will definitely be some PowerShell involved. You have been warned!
The first thing we will need is a way to connect using ODBC, so we need to get an ODBC connector. The one I used I got directly from mysql, and it was called the Windows (x86, 64-bit), MSI Installer. After installing it we did not configure it; instead, we will do that from the script itself. All we need is the driver name. Since I forgot the name of the .msi package and driver, I decided to ask directly: I went to Control Panel->Administrative Tools->Data Sources (ODBC), then clicked on System DSN->Add. At the time this article was written, the driver name we'll use is "MySQL ODBC 5.3 ANSI Driver", as shown in the picture to the left. Yes, you want to save the entire string verbatim. With that recorded, we close the little non-resizeable ODBC-related dialog boxes and move forward.
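As an aside, if you are on Windows 8/Server 2012 or newer, you can skip some of the dialog-box archaeology and ask PowerShell for the installed ODBC drivers instead; a small sketch, assuming the Get-OdbcDriver cmdlet that ships with those versions:
# List the installed ODBC drivers and keep only the MySQL ones
Get-OdbcDriver | Where-Object { $_.Name -like "*MySQL*" } | Select-Object Name, Platform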
Now let's talk about what frustrated me: at first I thought I had to create a System DSN and then feed that to my script. Big mistake; I could not get that working no matter how hard I tried. Realizing I was going nowhere slowly, I instead decided to do the deed directly from PowerShell. So, let's start by writing a PowerShell script that only tries to connect to the database penguium (the penguin continuum?) on IP 10.0.0.113 as user admin and then checks if the connection was successful. Here is what the database looks like:
mysql> select * from penguium.stats;
+-----+-----------------------+-------------+
| pid | name                  | happy_state |
+-----+-----------------------+-------------+
|   1 | Fat Penguin           | Relaxed     |
|   2 | Doctor Penguin, M.D.  | Pensive     |
|   3 | Angry Penguin of Doom | Angry       |
|   4 | Lazy Penguin          | Not Present |
+-----+-----------------------+-------------+
4 rows in set (1.07 sec)
mysql>
We will be asking for help from .NET, specifically from the framework data provider for ODBC, System.Data.ODBC. My script looks like this (real IPs and username/password changed to protect the innocent):
$SQLServer = "10.0.0.113"
$SQLDBName = "penguium"
$SQLUser = "admin"
$SQLPW = "supersecret"
# Connect to the database
$SqlConnection = New-Object System.Data.ODBC.ODBCConnection
$SqlConnection.connectionstring = `
    "DRIVER={MySQL ODBC 5.3 ANSI Driver};" +
    "Server = $SQLServer;" +
    "Database = $SQLDBName;" +
    "UID = $SQLUser;" +
    "PWD= $SQLPW;" +
    "Option = 3"
$SqlConnection.Open()
$SqlConnection.State
The first thing I want to draw your attention to is the line
"DRIVER={MySQL ODBC 5.3 ANSI Driver};" +
If it reminds you of the driver name we made a point of writing down, you are absolutely correct: we are telling Windows which ODBC driver we want to use. The next few variables -- server, database, UID, and PWD -- would normally be defined if we had created and configured a System DSN. Which we didn't, because, as I mentioned above, I tried to do so and it gleefully ignored me. Do note this is not the only way, or the safest, to authenticate, but it is the laziest. In fact, let me say this once again: my example here is great as an example only; it is insecure (no encryption, and the authentication is for all to see). Go read the references, especially the Microsoft docs, for how to make this mess more secure.
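One small step in that direction, as a sketch: prompt for the password at run time instead of leaving it in the script, and feed the result into the connection string we build above. It does not add encryption, it just keeps the password out of the file.
# Ask for the password instead of hardcoding it
$cred  = Get-Credential -UserName $SQLUser -Message "Database password"
$SQLPW = $cred.GetNetworkCredential().Password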
The next step is to create/open the connection to the db server.
$SqlConnection.Open()
And then we ask, using the command
$SqlConnection.State
to report on whether the connection was indeed opened. A successful connection will send
Open
as the reply. Now that we know the connection works, let's find out how happy Angry Penguin of Doom is feeling this morning. If we were logged into the mysql server, we would do that by saying
mysql> select happy_state from penguium.stats where name = 'Angry Penguin of Doom';
+-------------+
| happy_state |
+-------------+
| Angry       |
+-------------+
1 row in set (0.03 sec)
mysql>
Reading from the database
Since we have a beautiful script above, let's send that query by adding the following lines to our script
$SqlQuery = "select happy_state from penguium.stats where name = 'Angry Penguin of Doom'"
$Command = New-Object `
    System.Data.ODBC.ODBCCommand($SQLQuery, $SQLConnection)
$Reply = $Command.executescalar()
$Reply
If we run the script again, we would see something like
Open
Angry
where the first line is the output of $SqlConnection.State, just like before, and the second line (Angry) is the content of the happy_state column associated with our penguin.
If we expected to see more than one row in the reply to our query, we could do something like this instead:
$Reply = $Command.executereader()
while($reply.Read()){ $reply.GetString(0) }
What we are doing in the while line is retrieving the first column ($reply.GetString(0)) from each row of the reply to our query. Since there is only one matching entry, that is all we get back. I'll leave getting all the columns, or the next one, as an exercise.
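For instance, a minimal sketch (reusing the same $Command as above) that grabs every column of every returned row could look like this:
# Walk every row and every column the reader hands back
$reader = $Command.executereader()
while ($reader.Read()) {
    for ($i = 0; $i -lt $reader.FieldCount; $i++) {
        $reader.GetValue($i)
    }
}
$reader.Close()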
Editing the database
So far so good? Now, how about making poor Angry Penguin of Doom happier? Since this is not real life, nor a real penguin, we can bring a smile to its lips with the power of SQL:
mysql> UPDATE penguium.stats SET happy_state='Happy' WHERE name='Angry Penguin of Doom';
Query OK, 1 row affected (2.67 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> select * from penguium.stats;
+-----+-----------------------+-------------+
| pid | name                  | happy_state |
+-----+-----------------------+-------------+
|   1 | Fat Penguin           | Relaxed     |
|   2 | Doctor Penguin, M.D.  | Pensive     |
|   3 | Angry Penguin of Doom | Happy       |
|   4 | Lazy Penguin          | Not Present |
+-----+-----------------------+-------------+
4 rows in set (0.03 sec)
mysql>
So let's see if we can do that in PowerShell. I am going to cheat and copy how we made the query in the previous section, changing as needed:
$SQLCommand = "UPDATE penguium.stats SET happy_state='Happy' WHERE name='Angry Penguin of Doom'"
$Command = New-Object `
    System.Data.ODBC.ODBCCommand($SQLCommand, $SQLConnection)
$Command.executescalar()
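As an aside, executescalar() happily runs the UPDATE but has nothing useful to return for it; executenonquery() is arguably the better fit since it reports how many rows were touched. A quick sketch:
# ExecuteNonQuery returns the number of rows the UPDATE changed
$rowsChanged = $Command.executenonquery()
"Rows changed: $rowsChanged"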
Putting It All Together or Something Like That
We have now found out how to read and write data to the database using ODBC and PowerShell; the article could end right here. But how about prettifying it a bit and converting it into a script that monitors the database and changes it as needed? Say, if the Angry Penguin of Doom is not happy, we will make it so.
$SQLServer = "10.0.0.113"
$SQLDBName = "penguium"
$SQLUser = "admin"
$SQLPW = "supersecret"
$ODBCDriver='{MySQL ODBC 5.3 ANSI Driver}'
$SqlQuery = "select happy_state from penguium.stats where name = 'Angry Penguin of Doom'"
$SQLCommand = "UPDATE penguium.stats SET happy_state='Happy' WHERE name='Angry Penguin of Doom'"
# Connect to the database
function DBConnect ($server, $driver, $dbname, $user, $password)
{
    $Connection = New-Object System.Data.ODBC.ODBCConnection
    $Connection.connectionstring = `
        "DRIVER= $driver;" +
        "Server = $server;" +
        "Database = $dbname;" +
        "UID = $user;" +
        "PWD= $password;" +
        "Option = 3"
    $Connection.Open()
    return $Connection
}
# Send a Command to the database and get the reply
function SendQuery ($Query, $Connection)
{
    $Command = New-Object `
        System.Data.ODBC.ODBCCommand($Query, $Connection)
    return $Command.executescalar()
}
# Query DB and update it as needed
$SQLConnection = DBConnect $SQLServer $ODBCDriver $SQLDBName $SQLUser $SQLPW
$answer = SendQuery $SqlQuery $SQLConnection
if ( $answer -ne "Happy" )
{
    # Let us know. Could email us instead
    "Crap: $answer"
    # Change it!
    SendQuery $SqlCommand $SQLConnection
}
Note that the criterion we used was "if the penguin is not happy, do something"; we could just as well have tested only whether he was angry (if ( $answer -eq "Angry" )). The above code could be improved by adding
try{} catch{} in appropriate locations and optionally contacting (email?) whoever needs to know when a change is required, but I am not in the mood to do that right now.
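For what it is worth, a minimal sketch of that try{} catch{}, wrapping just the connection step (same variables and functions as above), could look like:
try {
    $SQLConnection = DBConnect $SQLServer $ODBCDriver $SQLDBName $SQLUser $SQLPW
}
catch {
    # Complain and bail out; emailing whoever needs to know is left as an exercise
    "Could not connect to $SQLServer : $_"
    exit 1
}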
Notes
- Just corrected a typo. Shame on me!
I usually like to start my posts by describing what we will try to accomplish, but I think I can't do any better than what the title states. So, let's see if I can come up with a convincing excuse. Well, all I can come up with right now is that I think it is wasteful to create an entire VM to run a distributed version control system. At least one that does not have helpful paperclips with eyes and other features requiring you to download crap. And it is nice to know that if the docker host (or cloud) takes a dump, we can bring this service back rather quickly. For this article we will use git; some other time we can talk about svn.
The git server I will be using is gitolite because it is reasonably simple and quick to manage and get going. What I really like about it is that the accounts for the users using git are not accounts on the host itself, so they cannot log in to the machine hosting git. By default the git users log in over ssh using key pairs, all mapped to a single local account (git in our case).
Since I am lazy, I will store the repositories in a NFS fileshare that is mounted into the container at runtime. We talked about how to do the mounting in a previous article.
Assumptions
- gitolite running off /home/git
- We will connect to the git server on port 2022. I chose that because I need port 22 to ssh into the docker host. Yes, they are on different IPs (in my case completely different VLANs), but I am weird like that.
- We will use ssh key pair authentication. And will use a different key than the default one. Note you can authenticate against LDAP, but that would go against what I wrote in the name of this article.
- gitolite being run as user git
- I am running this on CentOS 6.
Install
- I created a CNAME for the docker host, gitserver.example.com, so it looks pretty.
- On the NFS server, create a fileshare owned by the user git, which in this example has uid=1201.
- We will need to create an ssh key pair for the gitadmin. I created my pair by doing something like
ssh-keygen -t rsa -C gitadmin -f ~/.ssh/gitadmin
You will need to copy ~/.ssh/gitadmin.pub into the docker host by whatever means you desire.
- I created a directory in the docker host to put all the files (docker-entrypoint.sh and Dockerfile) related to this container. Here is the Dockerfile:
############################################################
# Dockerfile to build a gitolite git container image
# Based on CentOS
############################################################
# Set the base image to CentOS
FROM centos:centos6
# File Author / Maintainer
MAINTAINER Mauricio Tavares "raubvogel@gmail.com"
################## BEGIN INSTALLATION ######################
# We need epel
RUN rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm && \
    sed -i -e 's/^enabled=1/enabled=0/' /etc/yum.repos.d/epel.repo
# We need NFS, openssh, and git
# And ssmtp (from Epel)
RUN yum update -y && yum install -y \
        git \
        nfs-utils \
        openssh-server && \
    yum install -y ssmtp --enablerepo=epel
# Configure NFS
RUN sed -i -e '/^#Domain/a Domain = example.com' /etc/idmapd.conf
##################### INSTALLATION END #####################
# Create git user
RUN adduser -m -u 1201 git
# Configure ssmtp
# Configure sshd
RUN sed -i -e 's/^#Port .*$/Port 2022/' \
        -e 's/^#PermitRootLogin .*$/PermitRootLogin no/' \
        /etc/ssh/sshd_config && \
    sed -i -e \
        's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' \
        /etc/pam.d/sshd && \
    ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa && \
    ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
# And a mountpoint for repositories
# Note: can't NFS mount from dockerfile, so will do it in an entrypoint script
RUN su - git -c 'mkdir repositories'
# And now the git server
# Gitolite admin: gitadmin (it is based on the name of the pub key file)
RUN su - git -c 'mkdir bin' && \
su - git -c 'git clone git://github.com/sitaramc/gitolite' && \
su - git -c 'mkdir -m 0700 .ssh' && \
su - git -c 'echo "ssh-rsa AAAAB3NzaC1yc2EAASLDAQCOOKIEQDehf5hxGq9//34yrsL
[...]
7CfSpbiP gitadmin" > .ssh/gitadmin.pub'
# The rest will be configured in the entrypoint script
# Put the entrypoint script somewhere we can find
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 2022
# Start service
CMD ["/usr/sbin/sshd", "-D"]
Where
- You will need to put the public key gitadmin.pub between the double quotes in the line beginning with su - git -c 'echo "ssh-rsa.
- I am running a lot of things in the Dockerfile as user git.
- The NFS setup was mentioned before, so I will not bother with it right now.
- I forgot to add the setup for ssmtp. I will think about that sometime later.
- The docker-entrypoint.sh file looks vaguely like this:
#!/bin/sh
set -e
# Mount git's repositories
mount.nfs4 fileserver.example.com:/git /home/git/repositories
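# Install gitolite into git's ~/bin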
su - git -c 'gitolite/install -to $HOME/bin'
# setup gitolite with yourself as the administrator
su - git -c 'gitolite setup -pk .ssh/gitadmin.pub'
# And we are out of here
exec "$@"
- So far so good? Ok, so let's build the image. I will call it git.
docker build -t git .
If the last build message looks like
Successfully built 6fb1ac15b47a
chances are the build was successful and you can go to the next step. Otherwise, figure out what went boink.
- Now start the service. Remember you need to run in privileged mode because of NFS. Since this is a test, I am calling the container test-git.
docker run --privileged=true -d -p 2022:2022 --name test-git git
Setup and Testing
- Let's start testing by seeing what repositories we can see as an admin:
$ /usr/bin/ssh -i /home/raub/.ssh/gitadmin git@gitserver.example.com info
hello gitadmin, this is git@docker running gitolite3 v3.6.2-12-g1c61d57 on git 1.7.1
R W gitolite-admin
R W testing
FYI, testing is a repo everyone allowed to use the git server can play with. Think of it as, as its name implies, a test repo. Since that worked, we can proceed to the next step.
- Now we should edit our .ssh/config file to access the gitadmin repository.
cat >> ~/.ssh/config << EOF
Host gitadmin
    Hostname gitserver.example.com
    User git
    Port 2022
    identityfile /home/raub/.ssh/gitadmin
    protocol 2
    compression yes
EOF
Yes, you can ask about what to do if you have a script that needs to pull stuff out of a repo, and I will tell you to wait for the next installment. This article deals with getting it to work.
- Retrieving the admin repository is now much easier. So, instead of having to do something like
git clone ssh://git@gitserver.example.com:[port]/gitolite-admin
which would also require us to feed it the key (or rename it as the default, which IMHO is a bad idea), thanks to the previous step we can now lazily do
git clone gitadmin:/gitolite-admin
- Repository config file is in gitolite-admin/conf/gitolite.conf
- Adding a new user (and perhaps a repository)
- Get the user's ssh public key, say raub.pub. How did the user create the ssh key pair? Don't know, don't care.
- Copy raub.pub to gitolite-admin/keydir. NOTE: the file must be named after the username the user will use to connect to the git server; it does not need to have anything to do with the user's normal/real username.
- Create a repository for the user. Let's give it a nice and snuggly name, like somethingawful
cat >> conf/gitolite.conf << EOF
repo somethingawful
RW = raub
EOF
- Commit changes
git add conf keydir
git commit -m 'Added new user and config a repo'
git push origin master
- Now, let's pretend we are the user (i.e., what the user should do/see). Which repos can you, the user, see? If this looks like a step we did before, it is; we are just using a new user.
$ ssh -i /home/raub/.ssh/raub git@gitserver.example.com info
hello raub, this is git@docker running gitolite3 v3.6.2-12-g1c61d57 on git 1.7.1
R W somethingawful
R W testing
There is somethingawful!
- Let's grab somethingawful
$ git clone gituser:/somethingawful
Cloning into 'somethingawful'...
warning: You appear to have cloned an empty repository.
Checking connectivity... done.
- Edit, commit, and off you go
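For instance, a minimal sketch of that last step (the README file is just a made-up example):
cd somethingawful
# Put something, anything, in the fresh repo and push it out
echo "Nothing to see here yet" > README.md
git add README.md
git commit -m 'First commit'
git push origin master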
I will put all those guys in my github account later on. And shall sneakily update this post afterwards.
This is a quick article; you have been warned.
The traditional way to have persistent data in a docker container is to feed it with a volume from the docker host. You begin by locating a directory, be it on the local docker host drive, network mounted to the docker host, or a volume in another docker container. Then you feed it to the container using the VOLUME statement defined in the Dockerfile or on the command line. We all know that. But what some (very few, we hope) of you might not have been aware of is that the volume is only mounted/attached to the container when you tell docker to run the container, which might make running/configuring something that requires files in those volumes a bit challenging.
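Just so we are all on the same page, feeding a host directory to a container at run time looks something like this (paths and names made up for illustration):
# Mount the host directory /srv/data into the container as /data
docker run -d -v /srv/data:/data --name some-container some-image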
At this point I would expect one of you to shout "Aha! But you can then use an entrypoint script to do all that monkeying that needs to happen after volume is mounted and before whatever service this container provides starts!" And you would be quite right! Here's a quick example in case we lost somebody: if our Dockerfile ended up like this:
# Put the entrypoint script somewhere we can find
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 22
# Start service
CMD ["/usr/sbin/sshd", "-D"]
we would have copied a file called docker-entrypoint.sh into the root path of the container as /entrypoint.sh, which is run just after the volume is mounted and before the, in this example, sshd service is started. Said entrypoint file could do, I don't know:
#!/bin/sh
set -e
# Do something in /home, which is the mounted volume
/do-something-at-home.sh
# And we are done here
exec "$@"
The point is that do-something-at-home.sh is called only after the persistent volume is mounted into the container.
What if you want to mount a network fileshare straight into the container? Before I have time to answer that, someone from the audience will jump up and state, "Why would you want to do that? You can mount the network fileshare on the docker host and then use VOLUME just like the Wicked Witch of the West intended!" What if I don't want to do that? What if, for whatever reason, I can't mount a network fileshare on the docker host?
The answer is that it is completely doable; you just have to ask the entrypoint script to do the mounting for you. To show what I mean, I will use the example of mounting an NFSv4 fileshare that is supposed to be used by some website. So we modify our little entrypoint script a bit:
#!/bin/sh
set -e
# /export/www was created in the Dockerfile
mount.nfs4 fileserver.example.com:/export/www /www/some-website
# And we are done here
exec "$@"
Since we are using NFSv4, chances are you might need to add something like
RUN sed -i -e '/^#Domain/a Domain = example.com' /etc/idmapd.conf
to your Dockerfile, in addition to telling it to install the NFS client packages, but the general idea should apply to, say, SMB or ZFS or whatever other network filesystem you fancy: let your entrypoint script do the heavy lifting!
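Putting those two pieces together, the relevant Dockerfile lines could look roughly like this (CentOS flavored, to match the earlier examples; the domain is of course an assumption):
# Install the NFS client bits and set the NFSv4 idmap domain
RUN yum install -y nfs-utils && \
    sed -i -e '/^#Domain/a Domain = example.com' /etc/idmapd.conf
And remember that, as with the gitolite container, you will probably need to run the container with --privileged=true (or grant it the right capabilities) for mount.nfs4 to work.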