Wednesday, October 25, 2017

Can't find the Windows drive I want to mount a fileshare in

If you have read this blog before, you know I am not a Microsoft Windows fanboy. For those who have not read it before, let's talk about one of its annoying features: the use of drive letters to make fileshares available to the user. I understand the idea of using drive letters... in the 80s, when personal computers had at best two floppy drives or, if you were really rich, a floppy and a hard drive. During those bad-hairstyle times you only needed some way to differentiate two devices, so why not call them A and B, or 0 and 1? But we now live in a time when a lot of people on our planet do not even know what a floppy drive is (hint: it is not a sex toy) and might connect a phone, a portable storage device of some kind, and who knows what will come next; 26 "drives" might run out quickly. Also, why must we mount a new device/fileshare at the top level (a "drive") instead of in a directory inside another fileshare, like you do in UNIX (including Linux and OSX)?

The dirty little secret is that you can, but most users who grew up using Windows are so used to drive letters they cannot conceive of the option. And there are Windows programs out there which can only handle drive letters. Point is, Microsoft is not to blame. In fact, they have been trying to convince people to use Microsoft (duh!) UNC paths, which might not follow the path convention used by every other operating system out there, but are a huge improvement over the drive-letter thingie. So, credit where credit is due.

But this article is not about path conventions and the religion around them. All we want to do is mount a fileshare inside another in Windows. Humble goal, yes? Well, as the poem says, "The best laid schemes of mice and men / Often go awry."

I can has mah driv?

There are instructions out there for mounting a fileshare without using drive letters in Windows; the one I picked uses the Computer Management GUI thingie, which might be important since some coworkers are deathly afraid of the command line:

Why is it not showing the D: drive? After all, diskpart can see it:

PS C:\Users\raub.adm> diskpart

Microsoft DiskPart version 6.3.9600

Copyright (C) 1999-2013 Microsoft Corporation.
On computer: Srv12R2

DISKPART> list volume

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     F   New Volume   NTFS   Simple        99 GB  Healthy
  Volume 1     D   Data         NTFS   Simple      2959 GB  Healthy
  Volume 2     C   Srv12R2      NTFS   Simple        59 GB  Healthy    Boot
  Volume 3         System Rese  NTFS   Simple       350 MB  Healthy    System
  Volume 4     E   Utility      NTFS   Simple        61 GB  Healthy
  Volume 5     Z                       DVD-ROM         0 B  No Media
  Volume 6         Volume       NTFS   Partition   4095 GB  Healthy

DISKPART> 

And I can mount it from diskpart into some folder in the D: drive without a problem.

DISKPART> select volume 6

Volume 6 is the selected volume.

DISKPART> assign mount=d:\tmp

DiskPart successfully assigned the drive letter or mount point.

DISKPART>
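If you expect to repeat this, diskpart can also read its commands from a script file and run non-interactively via diskpart /s. A sketch (the script name mount-data.txt is mine, not anything standard):

```
rem mount-data.txt -- attach volume 6 under d:\tmp
select volume 6
assign mount=d:\tmp
```

Then, from an elevated prompt, run diskpart /s mount-data.txt.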

I wants 2 c mah driv!

Similar to an issue we talked about in an earlier article, we have a bad case of access control being too controlling. Specifically, the mount stuff is defined in Drive_Letter:\System Volume Information\spp for each drive, and those who need to see it are not on the list. Let me show you what I mean. Here is the E: drive, inside which we know we can mount a fileshare using the GUI:

PS C:\Users\raub.adm> cacls 'e:\System Volume Information\spp'
e:\System Volume Information\SPP AD\SOD_Domain Admins:(OI)(CI)(ID)F
                                 BUILTIN\Administrators:(OI)(CI)(ID)F
                                 NT AUTHORITY\SYSTEM:(OI)(CI)(ID)F

PS C:\Users\raub.adm>

Now here is the problematic D: drive:

PS C:\Users\raub.adm> cacls 'd:\System Volume Information\spp'
d:\System Volume Information\SPP AD\NETWORK_Domain Admins:(OI)(CI)(ID)F

PS C:\Users\raub.adm>

As you can see, the local admin entities, Administrators and the SYSTEM account, can't access D:. So let's correct that:

PS C:\Users\raub.adm> cacls 'd:\System Volume Information\spp' /e /g system:f
processed dir: d:\System Volume Information\SPP
PS C:\Users\raub.adm> cacls 'd:\System Volume Information\spp' /e /g administrators:f
processed dir: d:\System Volume Information\SPP
PS C:\Users\raub.adm> cacls 'd:\System Volume Information\spp'
d:\System Volume Information\SPP NT AUTHORITY\SYSTEM:(OI)(CI)F
                                 BUILTIN\Administrators:(OI)(CI)F
                                 AD\NETWORK_Domain Admins:(OI)(CI)(ID)F

PS C:\Users\raub.adm>


Much better! Moral of the story: if you or your computer cannot access a drive in some shape or form, do check the permissions.

Tuesday, September 26, 2017

Forcing a fuse (sshfs) network fileshare to unmount in OSX

As some of you already know, I have an old MacBook Air which I use as my main machine (as in the computer I sit in front of, not the computer I store data on; laptops can be stolen, you know) until I find a new Linux laptop replacement. For this reason I need it to play nice with other machines, and that sometimes requires mounting a fileshare. If the other host is in the same VLAN, that is rather easy because there are ways to mount a Windows (SMB/CIFS) and even a Linux/UNIX (NFS) fileshare without breaking a sweat. But what if the machine is remote? If we can ssh into it, why not use sshfs?

As we are aware (since we read the link; there are a few more sshfs examples here), sshfs requires FUSE. Since I am using OSX, which at present does not ship with it, I need to install it. If you are curious, the one I use is FUSE for macOS.

Mounting: business as usual

Let's say we are on the machine boris as user pickles, trying to mount my home directory off desktop. We create the mountpoint and mount (let's use /tmp/D or ~/D so it looks more like what we would do in Linux):

boris:Documents pickles$ mkdir /tmp/D; sshfs raub@desktop.in.example.com:. /tmp/D
boris:Documents pickles$ df -h
Filesystem                     Size   Used  Avail Capacity  iused    ifree %iused  Mounted on
/dev/disk1                    112Gi   79Gi   33Gi    71% 20783599  8546848   71%   /
devfs                         364Ki  364Ki    0Bi   100%     1259        0  100%   /dev
map -hosts                      0Bi    0Bi    0Bi   100%        0        0  100%   /net
map auto_home                   0Bi    0Bi    0Bi   100%        0        0  100%   /home
raub@desktop.in.example.com:.  492Gi  389Gi  102Gi    80%   408428 32359572    1%   /private/tmp/D
boris:Documents pickles$

So far so good. To unmount it we can use diskutil, as in (Mac)

boris:Documents pickles$ diskutil umount /tmp/D
Unmount successful for /tmp/D
boris:Documents pickles$

or (Linux)

fusermount -u /tmp/D

Or go old school (both):

sudo umount /tmp/D

Since boris is a laptop, sometimes just letting it go to sleep will unmount the share. Then all we have to do is mount it again.

Mounting again: not so fast

Thing is, sometimes it does not work.

boris:Documents pickles$ mkdir /tmp/D; sshfs raub@desktop.in.example.com:. /tmp/D
mkdir: /tmp/D: File exists
fuse: bad mount point `/tmp/D': Input/output error
boris:Documents pickles$ 

Ok, maybe it did not automagically unmount while the laptop was asleep. So, let's tell it to do so:

boris:Documents pickles$ diskutil umount /tmp/D
Unmount failed for /tmp/D
boris:Documents pickles$ 

Just before you ask, sudo umount /tmp/D did not work either. What if the old sshfs processes did not close cleanly and as a result are still lingering? To answer that we must enlist some help from one of grep's cousins, pgrep:

boris:Documents pickles$ pgrep -lf sshfs
384 sshfs raub@desktop.in.example.com:. /tmp/D
1776 sshfs raub@desktop.in.example.com:. /tmp/D
7356 sshfs user@other.in.example.com:. /tmp/D
boris:Documents pickles$

Just as we guessed, there is not one but quite a few unhappy sshfs instances. Let's see if we can kill them:

boris:Documents pickles$ kill 384 1776 7356
boris:Documents pickles$ pgrep -lf sshfs
384 sshfs raub@desktop.in.example.com:. /tmp/D
1776 sshfs raub@desktop.in.example.com:. /tmp/D
boris:Documents pickles$ kill 384
boris:Documents pickles$ pgrep -lf sshfs
384 sshfs raub@desktop.in.example.com:. /tmp/D
1776 sshfs raub@desktop.in.example.com:. /tmp/D
boris:Documents pickles$ kill 1776
boris:Documents pickles$ pgrep -lf sshfs
384 sshfs raub@desktop.in.example.com:. /tmp/D
1776 sshfs raub@desktop.in.example.com:. /tmp/D
boris:Documents pickles$

Hmmm, this is going nowhere slowly. Let's crank it up a notch and force-kill the mount:

boris:Documents pickles$ kill -9 1776
boris:Documents pickles$ pgrep -lf sshfs
384 sshfs raub@desktop.in.example.com:. /tmp/D
boris:Documents pickles$ kill -9 384
boris:Documents pickles$ pgrep -lf sshfs
boris:Documents pickles$

Sounds like we got them all. Now, let's try and mount once more:

boris:Documents pickles$ mkdir /tmp/D; sshfs raub@desktop.in.example.com:. /tmp/D
mkdir: /tmp/D: File exists
raub@desktop.in.example.com's password:
boris:Documents pickles$

I think we have a winner!
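The whole recovery dance can be condensed into a few lines of shell. This is a sketch, not a polished tool: the mountpoint /tmp/D is an assumption, and on Linux you would use fusermount -u where I try diskutil:

```shell
#!/bin/sh
# Force-unmount a stale sshfs mount. The mountpoint is an assumption.
mnt=/tmp/D

# Ask nicely first; errors are ignored because the mount may be half-dead.
diskutil umount "$mnt" 2>/dev/null || umount "$mnt" 2>/dev/null || true

# Any sshfs processes still clinging to this mountpoint?
pids=$(pgrep -f "sshfs .*$mnt" || true)
if [ -n "$pids" ]; then
    kill $pids                      # polite SIGTERM first
    sleep 2
    pids=$(pgrep -f "sshfs .*$mnt" || true)
    if [ -n "$pids" ]; then
        kill -9 $pids               # they had their chance
    fi
fi
```

After that, the usual mkdir/sshfs pair should work again.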

Tuesday, August 15, 2017

Connecting to multiple VPNs using one single Cisco AnyConnect

Like many here, I remote into networks to work. I access organization X's network using Cisco's AnyConnect VPN client because that is what they use. When I first got involved, they told me to log in to a given URL on their webserver and get the client for my machine (a MacBook Air if you are curious; I do need to get a new Linux laptop, but the Mac has been working great so far). If my machine were a company-owned laptop, they would probably have pushed the package using SCCM/Chocolatey (Windows) or Casper (now called Jamf)/Munki (Mac). Or ansible, but that is another bag of cats. In any case, the point is I got their package, which was configured to work on their VPN. And it works: double-click on the silly link, connect, enter my authentication info, and off I go.

Now I also need to access organization B's machines. And they also chose to use AnyConnect. And, just like X, they told me to install their package. Thing is, if I do that it will wipe the X configuration, which would get annoying very quickly. I did try to see if there was a way to add another profile from the client's menu, but no luck. Maybe each company disabled the option so you can only use it to access their network; I do not know. Now, what I could do, since this is a Mac, is rename Company X's VPN folder to, say, Cisco.Old (the default folder name is Cisco) and then install Company B's VPN package.

This way, if I need to go to X, I would open Cisco.Old and run that VPN client. If I then wanted to go to B, I would quit the client, go to Cisco, and run that one. I do not know about you, but that looks a bit cumbersome to me. And if my laptop were running Windows, I do not think it would let me install two instances of the client that easily. There has to be a better way.

Probulating

First of all, let's assume there is a configuration file somewhere for the AnyConnect VPN client. Since I am using OSX, chances are it has some plist-sounding name. And I found something called com.cisco.Cisco-AnyConnect-Secure-Mobility-Client.plist in my preferences folder, /Users/raub/Library/Preferences, but it does not look particularly legible from the command line (yes, I know there is probably an app to do that but I like to do things from the command line):

bplist00Ñ^A^B]UILogLocation¥^C^D^E^F^G_^PA/Users/raub/.cisco/vpn/log/UIHistory_2017.08.28.23.35.34.010.txt_^PA/Users/dalek/.cisco/vpn/log/UIHistory_2017.08.28.23.53.04.734.txt_^PA/Users/raub/.cisco/vpn/log/UIHistory_2017.08.29.00.10.34.504.txt_^PA/Users/raub/.cisco/vpn/log/UIHistory_2017.08.29.00.28.05.785.txt_^PA/Users/raub/.cisco/vpn/log/UIHistory_2017.10.04.04.44.45.284.txt^@^H^@^K^@^Y^@^_^@c^@§^@ë^A/^@^@^@^@^@^@^B^A^@^@^@^@^@^@^@^H^@^@^@^@^@^@^@^@^@^@^@^@^@^@^As

So we make a copy of it and then run

plutil -convert xml1 com.cisco.Cisco-AnyConnect-Secure-Mobility-Client.plist
to convert it to something more legible, and then look inside it:

boris:~ raub$ cat com.cisco.Cisco-AnyConnect-Secure-Mobility-Client.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>UILogLocation</key>
        <array>
                <string>/Users/raub/.cisco/vpn/log/UIHistory_2016.12.12.13.41.31.883.txt</string>
                <string>/Users/raub/.cisco/vpn/log/UIHistory_2016.12.12.13.58.40.264.txt</string>
                <string>/Users/raub/.cisco/vpn/log/UIHistory_2016.12.12.14.15.56.295.txt</string>
                <string>/Users/raub/.cisco/vpn/log/UIHistory_2017.02.14.06.03.40.692.txt</string>
                <string>/Users/raub/.cisco/vpn/log/UIHistory_2017.07.24.21.43.25.742.txt</string>
        </array>
</dict>
</plist>
boris:~ raub$

Hmmm, that does not look like what I want. Maybe the AnyConnect client has a global configuration file somewhere. And it does: it is called glvpn-anyconnect-profile.xml and lives in /opt/cisco/anyconnect/profile/:

boris:~ raub$ ls /opt/cisco/anyconnect/profile/
AnyConnectProfile.xsd  glvpn-anyconnect-profile.xml
boris:~ raub$

If we look into it, this XML file starts, as expected, with some system-wide config settings:

cat /opt/cisco/anyconnect/profile/glvpn-anyconnect-profile.xml
<?xml version="1.0" encoding="UTF-8"?>
<AnyConnectProfile xmlns="http://schemas.xmlsoap.org/encoding/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.xmlsoap.org/encoding/ AnyConnectProfile.xsd">
        <ClientInitialization>
                <UseStartBeforeLogon UserControllable="true">false</UseStartBeforeLogon>
                <AutomaticCertSelection UserControllable="true">true</AutomaticCertSelection>
                <ShowPreConnectMessage>false</ShowPreConnectMessage>
                <CertificateStore>All</CertificateStore>
                <CertificateStoreOverride>false</CertificateStoreOverride>
                <ProxySettings>Native</ProxySettings>
                <AllowLocalProxyConnections>true</AllowLocalProxyConnections>
                <AuthenticationTimeout>60</AuthenticationTimeout>
                <AutoConnectOnStart UserControllable="true">false</AutoConnectOnStart>
                <MinimizeOnConnect UserControllable="true">true</MinimizeOnConnect>
                <LocalLanAccess UserControllable="true">true</LocalLanAccess>
                <ClearSmartcardPin UserControllable="true">true</ClearSmartcardPin>
                <IPProtocolSupport>IPv4,IPv6</IPProtocolSupport>
                <AutoReconnect UserControllable="true">true

But then we get to the part we have been anxiously waiting for: how to access Company X's VPN:

<ServerList>
                <HostEntry>
                        <HostName>Company X VPN</HostName>
                        <HostAddress>vpn.companyx.com</HostAddress>
                </HostEntry>
        </ServerList>
</AnyConnectProfile>

It does not look very complicated: we could probably just add a new HostEntry for Company B, as in

<ServerList>
                <HostEntry>
                        <HostName>Company X VPN</HostName>
                        <HostAddress>vpn.companyx.com</HostAddress>
                </HostEntry>
                <HostEntry>
                        <HostName>Company B VPN</HostName>
                        <HostAddress>vpn.b-company.com</HostAddress>
                </HostEntry>
        </ServerList>
</AnyConnectProfile>

and be done. And that will work. But I think we can do one better: can we avoid cluttering the main profile file? Long story short: yes. Just put something like this

cat > B-profile.xml << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<AnyConnectProfile xmlns="http://schemas.xmlsoap.org/encoding/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.xmlsoap.org/encoding/ AnyConnectProfile.xsd">
    <!--
        This section contains the list of hosts the user will be able to
        select from.
      -->
    <ServerList>
        <!--
            This is the data needed to attempt a connection to a specific
            host.
          -->
        <HostEntry>
            <!--
                Can be an alias used to refer to the host or an  FQDN or
                IP address.  If an FQDN or IP address is used, a
                HostAddress is not required.
              -->
            <HostName>Company B VPN</HostName>
            <HostAddress>vpn.b-company.com</HostAddress>
        </HostEntry>
    </ServerList>
</AnyConnectProfile>
EOF

in /opt/cisco/anyconnect/profile/:

boris:~ raub$ ls /opt/cisco/anyconnect/profile/
AnyConnectProfile.xsd  glvpn-anyconnect-profile.xml
B-profile.xml
boris:~ raub$

Now, when we run the client, we can select either company's VPN.

What about Windows?

I've never tried but there is a file called (starting at your homedir) .\AppData\Local\Cisco\Cisco AnyConnect Secure Mobility Client\preferences.xml which would be my starting point. The global profile folder is c:\ProgramData\Cisco\Cisco AnyConnect Secure Mobility Client\Profile.

Final thoughts

I do not like that I have to configure the different profiles at the global level; I might share this laptop with other people and would like to keep my profiles separate from theirs. But at least now I can use multiple profiles to access different networks. Looking at the Windows configuration file, I wonder if I can do the same in the Mac too. That will be the subject for another article.

Monday, July 31, 2017

Downloading a single file from github (using ansible perhaps)

I was going to set up Ansible to talk to one of my Windows servers. According to the Ansible page on Windows, the easiest way to install the Windows side is to download the powershell script ConfigureRemotingForAnsible.ps1 and blindly run it on the Windows server. It supposedly does all the magic to install and set things up. Should I run it? I don't know, but let's try to get it first. We can inspect its code once we have the file.

NOTE: Setting up Ansible on Windows is not the topic of this post. All I want to show is a way to download a single file from a github repo.

Since I do not know how to do it, let's do some searching. And then we try each method out and see what's what.

Attempt I

There is a thread in stackoverflow called How to pull a single file from a server repository in Git? which suggested using the git clone command, as in

git clone https://github.com/igniterealtime/Openfire.git \
Openfire/src/java/org/apache/mina/management/MINAStatCollector.java

Let's try it out:

raub@desktop:/tmp/rmoo$ git clone https://github.com/igniterealtime/Openfire.git \
Openfire/src/java/org/apache/mina/management/MINAStatCollector.java
Cloning into 'Openfire/src/java/org/apache/mina/management/MINAStatCollector.java'...
remote: Counting objects: 107450, done.
remote: Compressing objects: 100% (53/53), done.
Receiving objects:  14% (15868/107450), 61.11 MiB | 209.00 KiB/s
[...]
remote: Total 107450 (delta 32), reused 31 (delta 16), pack-reused 107380
Receiving objects: 100% (107450/107450), 802.60 MiB | 8.23 MiB/s, done.
Resolving deltas: 100% (63893/63893), done.
Checking connectivity... done.
raub@desktop:/tmp/rmoo$ ls Openfire/src/java/org/apache/mina/management/MINAStatCollector.java
build/          i18n/     webadmin/     LICENSE.txt  README.md
dbutil/         src/      webadmintld/  Makefile
documentation/  starter/  xmppserver/   pom.xml
raub@desktop:/tmp/rmoo$

Er, does that look like it grabbed the right thing? For some reason I thought the .java file was, well, a file and not a bunch of files and directories. At least I could swear I have written .java files in vi, so they were text files. Maybe I am wrong, so let's see if we can get the file I really want:

raub@desktop:/tmp/rmoo$ git clone https://github.com/ansible/ansible.git ansible
/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1
Cloning into 'ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps
1'...
remote: Counting objects: 236787, done.
remote: Compressing objects: 100% (66/66), done.
remote: Total 236787 (delta 33), reused 25 (delta 6), pack-reused 236712
Receiving objects: 100% (236787/236787), 73.53 MiB | 8.23 MiB/s, done.
Resolving deltas: 100% (152234/152234), done.
Checking connectivity... done.
raub@desktop:/tmp/rmoo$ ls
ansible/
raub@desktop:/tmp/rmoo$ ls ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1/
ansible-core-sitemap.xml  .gitattributes            RELEASES.txt
bin/                      .github/                  requirements.txt
CHANGELOG.md              .gitignore                ROADMAP.rst
CODING_GUIDELINES.md      .gitmodules               setup.py
contrib/                  hacking/                  shippable.yml
CONTRIBUTING.md           lib/                      test/
COPYING                   .mailmap                  ticket_stubs/
.coveragerc               Makefile                  tox.ini
docs/                     MANIFEST.in               VERSION
docsite_requirements.txt  MODULE_GUIDELINES.md      .yamllint
examples/                 packaging/
.git/                     README.md
raub@desktop:/tmp/rmoo$ ls ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1/
bin/        test/                     docsite_requirements.txt  ROADMAP.rst
contrib/    ticket_stubs/             Makefile                  setup.py
docs/       ansible-core-sitemap.xml  MANIFEST.in               shippable.yml
examples/   CHANGELOG.md              MODULE_GUIDELINES.md      tox.ini
hacking/    CODING_GUIDELINES.md      README.md                 VERSION
lib/        CONTRIBUTING.md           RELEASES.txt
packaging/  COPYING                   requirements.txt
raub@desktop:/tmp/rmoo$

I do not know about you, but that does not look like what I really wanted: a single file. On second thought, that sure looks like the root of the ansible git repo; the files and directories certainly look familiar.

I guess it is time to try something else.

Attempt II

Let's try something else: in StackOverflow there is a thread called Retrieve a single file from a repository, which suggests

git clone --no-checkout --depth 1 git@github.com:foo/bar.git && cd bar && git show HEAD:path/to/file.txt

For this attempt, we will try to get the file ConfigureRemotingForAnsible.ps1 I want:

git clone --no-checkout --depth 1 https://github.com/ansible/ansible.git  && cd ansible && \
git show HEAD:examples/scripts/ConfigureRemotingForAnsible.ps1

Thing is, that will just spit the file to the screen, literally:

raub@desktop:/tmp/rmoo$ git clone --no-checkout --depth 1 https://github.com/ansible/ansible.git  && \
cd ansible && git show HEAD:examples/scripts/ConfigureRemotingForAnsible.ps1
Cloning into 'ansible'...
remote: Counting objects: 5873, done.
remote: Compressing objects: 100% (4282/4282), done.
remote: Total 5873 (delta 962), reused 3652 (delta 660), pack-reused 0
Receiving objects: 100% (5873/5873), 7.13 MiB | 8.09 MiB/s, done.
Resolving deltas: 100% (962/962), done.
Checking connectivity... done.
#Requires -Version 3.0

# Configure a Windows host for remote management with Ansible
# -----------------------------------------------------------
#
# This script checks the current WinRM (PS Remoting) configuration and makes
# the necessary changes to allow Ansible to connect, authenticate and
# execute PowerShell commands.
#
[...]

We could improve that by saving the output into a file. But how? The quickest solution I can think of is to redirect it to a file:

file="ConfigureRemotingForAnsible.ps1" ; git clone --no-checkout --depth 1 https://github.com/ansible/ansible.git  && cd ansible && $(git show HEAD:examples/scripts/$file > $file)

will put $file inside the ansible dir:

raub@desktop:/tmp/rmoo/ansible$ ls
ConfigureRemotingForAnsible.ps1
raub@desktop:/tmp/rmoo/ansible$

Of course we can do better, like placing it in the original pwd and deleting the (now temporary) ansible dir. Something like

file="ConfigureRemotingForAnsible.ps1" ; git clone --no-checkout --depth 1 https://github.com/ansible/ansible.git  && cd ansible && $(git show HEAD:examples/scripts/$file > ../$file) && cd .. && rm -rf ansible

should do just fine. But you do not have to believe me; here it is in action:

raub@desktop:/tmp/rmoo$ file="ConfigureRemotingForAnsible.ps1" ; git clone --no-checkout --depth 1 https://github.com/ansible/ansible.git  && cd ansible && $(git show HEAD:examples/scripts/$file > ../$file) && cd .. && rm -rf ansible
Cloning into 'ansible'...
remote: Counting objects: 5873, done.
remote: Compressing objects: 100% (4282/4282), done.
remote: Total 5873 (delta 962), reused 3652 (delta 660), pack-reused 0
Receiving objects: 100% (5873/5873), 7.13 MiB | 758.00 KiB/s, done.
Resolving deltas: 100% (962/962), done.
Checking connectivity... done.
raub@desktop:/tmp/rmoo$ ls
ConfigureRemotingForAnsible.ps1
raub@desktop:/tmp/rmoo$

Just to show this is not an accident, let's validate it by using it to grab the java source file we tried to get earlier:

raub@desktop:/tmp/rmoo$ file="MINAStatCollector.java" ; git clone --no-checkout --depth 1 https://github.com/igniterealtime/Openfire.git  && cd Openfire && $(git show HEAD:src/java/org/apache/mina/management/$file > $file)
Cloning into 'Openfire'...
remote: Counting objects: 5291, done.
remote: Compressing objects: 100% (4433/4433), done.
remote: Total 5291 (delta 807), reused 3394 (delta 479), pack-reused 0
Receiving objects: 100% (5291/5291), 92.80 MiB | 8.24 MiB/s, done.
Resolving deltas: 100% (807/807), done.
Checking connectivity... done.
raub@desktop:/tmp/rmoo/Openfire$ head MINAStatCollector.java
package org.apache.mina.management;

import static org.jivesoftware.openfire.spi.ConnectionManagerImpl.EXECUTOR_FILTER_NAME;

import org.apache.mina.core.service.IoService;
import org.apache.mina.core.service.IoServiceListener;
import org.apache.mina.core.session.IdleStatus;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.executor.ExecutorFilter;
import org.apache.mina.filter.executor.OrderedThreadPoolExecutor;
raub@desktop:/tmp/rmoo/Openfire$

I think we can improve it by making it more generic, which might be the subject of another post. Or something; I have been dragging on finishing this article for a few weeks now, so I just want it done.

Now what about something completely Ansible-ly?

Er, this article is getting way longer than I originally planned. I will put the ansible side in another one. How about that for raising your expectations and then expertly crushing them?

Friday, June 30, 2017

Tivoli (TSM) Backup can't backup a drive in Windows

Over here we use IBM's TSM backup system. I do not want to go over its features and setup, but the bottom line is that I get an email listing the backup status of each machine (known as a node in TSM lingo) I am backing up. And one day one of those nodes barked:

backup7x SERVER02.EXAMPLE           Failed***    12     2017-06-22 00:00:00 2017-06-22 
00:01:06 2017-06-22 00:01:07

If you are curious about the 12, here is what it means right out of that very same email (I copied that session including the wasteful blank lines):

Result:

0 - Success.

1 - See explanation for 'Missed'.

4 - The operation completed successfully, but some files were not
processed.

8 - The operation completed with at least one warning message.

12 - The operation completed with at least one error message
(except for error messages for skipped files).

That does not help me much. You see, I like to have access to logs and not sad face cryptic messages. So I went to C:\Program Files\Tivoli\TSM\baclient to look into dsmsched.log for any funny business. And funny business I found:

06/22/2017 00:01:09 --- SCHEDULEREC OBJECT BEGIN D-0000AM 06/22/2017 00:00:00
06/22/2017 00:01:10 Incremental backup of volume '\\server02\d$'
06/22/2017 00:01:11 ANS1228E Sending of object '\\server02\d$' failed.
06/22/2017 00:01:11 ANS1751E Error processing '\\server02\d$': The file system can not 
be accessed.
06/22/2017 00:01:11 --- SCHEDULEREC STATUS BEGIN
06/22/2017 00:01:11 --- SCHEDULEREC OBJECT END D-0000AM 06/22/2017 00:00:00
06/22/2017 00:01:11 ANS1512E Scheduled event 'D-0000AM' failed.  Return code = 12.
06/22/2017 00:01:11 Sending results for scheduled event 'D-0000AM'.
06/22/2017 00:01:11 Results sent to server for scheduled event 'D-0000AM'.

Ok, what's so special about the D drive? I looked at the config file, C:\Program Files\Tivoli\TSM\baclient\dsm.opt, and it seems to be right. If you do not believe me (I wouldn't, and I have to live with me), here are its first few lines:

NODENAME SERVER02.EXAMPLE
TCPSERVERADDRESS backup7x.example.com

DOMAIN "\\server2\d$"
MANAGEDSERVICES WEBCLIENT SCHEDULE
webports 1501 1581

txnbytelimit 25600
schedmode prompted
schedlogretent 30,d
errorlogretent 30,d
passwordaccess generate
quiet
tapeprompt no


EXCLUDE.BACKUP "*:\Thumbs.db"
EXCLUDE.BACKUP "*:\desktop.ini"
EXCLUDE.BACKUP "*:\*.tmp"
EXCLUDE.BACKUP "*:\...\Scans\mpcache-*"
EXCLUDE.BACKUP "*:\microsoft uam volume\...\*"
EXCLUDE.BACKUP "*:\microsoft uam volume\...\*.*"
EXCLUDE.BACKUP "*:\...\EA DATA. SF"
EXCLUDE.BACKUP "*:\IBMBIO.COM"
EXCLUDE.BACKUP "*:\IBMDOS.COM"
EXCLUDE.BACKUP "*:\IO.SYS"
[...]

As you can see, I am telling it to back up only the D drive. Maybe we should take a look at this drive and see who can access it:

C:\Users\raub> icacls d:\
d:\ AD\EXAMPLE_Domain Admins:(OI)(CI)(F)
    AD\EXAMPLE_Users:(RX)

Successfully processed 1 files; Failed processing 0 files
C:\Users\raub>
Where:
  • OI: Object inherit
  • CI: Container inherit
  • F: Full access
  • RX: Read and execute

We can also do that through powershell:

PS C:\Users\raub> get-acl d:\ | fl


Path   : Microsoft.PowerShell.Core\FileSystem::D:\
Owner  : BUILTIN\Administrators
Group  : AD\Domain Users
Access : AD\EXAMPLE_Domain Admins Allow  FullControl
         AD\EXAMPLE_Users Allow  ReadAndExecute, Synchronize
Audit  :
Sddl   : O:BAG:DUD:PAI(A;OICI;0x1200a9;;;SY)(A;OICI;FA;;;S-1-5-21-344340502-4252695000-2390403120-1439459)(A;;0x1200a9;
         ;;S-1-5-21-344340502-4252695000-2390403120-1439468)(A;OICI;FA;;;S-1-5-21-344340502-4252695000-2390403120-14759
         66)



PS C:\Users\raub>

which, as you can see, is a more verbose way of saying the same thing. But what is missing here? You see, by default Windows services run as the system user (its full name is NT AUTHORITY\SYSTEM). So let's add it. Does it need to write to the drive as far as TSM is concerned? We are backing up here. Maybe if we need to restore we might need to write, but we will cross that bridge when we get to it (hopefully never).

You can add that user and set up the permissions (I went with read-execute, but I wonder if read-only would suffice; let me know if you find the answer) using either Windows Explorer, icacls, or Set-Acl. Pick one; what really matters is that at the end of the day you have something like this:

C:\Users\raub> icacls d:\
d:\ AD\EXAMPLE_Domain Admins:(OI)(CI)(F)
    AD\EXAMPLE_Users:(RX)
    NT AUTHORITY\SYSTEM:(OI)(CI)(RX)

Successfully processed 1 files; Failed processing 0 files
C:\Users\raub>
or in powershell,

PS C:\Users\raub> get-acl d:\ | fl


Path   : Microsoft.PowerShell.Core\FileSystem::D:\
Owner  : BUILTIN\Administrators
Group  : AD\Domain Users
Access : NT AUTHORITY\SYSTEM Allow  ReadAndExecute, Synchronize
         AD\EXAMPLE_Domain Admins Allow  FullControl
         AD\EXAMPLE_Users Allow  ReadAndExecute, Synchronize
Audit  :
Sddl   : O:BAG:DUD:PAI(A;OICI;0x1200a9;;;SY)(A;OICI;FA;;;S-1-5-21-344340502-4252695000-2390403120-1439459)(A;;0x1200a9;
         ;;S-1-5-21-344340502-4252695000-2390403120-1439468)(A;OICI;FA;;;S-1-5-21-344340502-4252695000-2390403120-14759
         66)



PS C:\Users\raub>
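For completeness, here is roughly what the icacls and Set-Acl routes would look like; this is a sketch from memory, so do try it somewhere safe before pointing it at a production drive:

```
icacls d:\ /grant "NT AUTHORITY\SYSTEM":(OI)(CI)RX
```

or, in powershell,

```
$sys  = "NT AUTHORITY\SYSTEM"
$acl  = Get-Acl d:\
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule($sys, "ReadAndExecute", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl d:\ $acl
```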

And now I get an email saying all is well:

backup7x SERVER02.EXAMPLE           Completed    0      2017-06-24 00:00:00 2017-06-24 
00:00:54 2017-06-24 01:07:57

Some of you noticed this status email is from two days later. The reason is that on the 23rd the backup was catching up, and that took quite a while.

Saturday, April 29, 2017

Network packet capturing in Windows without extra programs


One of the things that separates Linux from Windows shows up when you want to take a look at what is happening on the network, i.e. listen to the wire. In Linux you can run tcpdump, wireshark (GUI) or tshark (console), or even whip up a script in Python or bash. So it can be done with something that comes with the OS by default (most distros ship with Python and bash, and many also include tcpdump) or that can be easily added (wireshark).

Then we have Windows... which had better be involved, since the title of this article hints it is. Common sense and standard practice dictate that if you want to do packet capture in that OS, you should buy or download a program/app such as (surprise!) wireshark or something created specifically for Windows. Which is fine... unless you are running on a server. Ask yourself: why should we install and run wireshark on a web server? And probably leave it there in case we might need it again, so someone can have it ready to go after breaking into the system (this is related to my pet peeve about developing on, or at least leaving development software on, production servers in general and web servers specifically). Or worse: search the web and download a suspicious packet capture app because it had "EZ" in its name and a cute turtle as its logo? That smells like a security risk, besides adding weight to your server; ideally you should only have the packages and programs you need.

That looks like a bit of a drag. It would be really nice if we could do packet capturing in Windows without needing to install yet another program. Perhaps even using what is built into the host.

One can dream...

Starting the capture

So, let's see how to do it then. The command we want is netsh trace, which needs to be run with elevated privileges because it is accessing the network interface. Here is how we would capture everything and save it to the file pickles.etl:

netsh trace start persistent=yes capture=yes tracefile=pickles.etl

There are a few useful options we might want to know:

  • maxsize : max size of log file before it gets overwritten
    • maxsize=250 MB is the default
    • maxsize = 0 unlimited
    NOTE: if you use this option you also need to add the option filemode or it will not work.
    filemode={circular|append|single}
    Ex:
    netsh trace start persistent=yes capture=yes tracefile=stuff maxsize=0  filemode=append
  • persistent : Keep on logging after a reboot
    persistent = no (default)

Stopping capturing

netsh trace stop
Correlating traces ... done
Merging traces ... done
Generating data collection ...

Don't look at me like that; you guessed what the stop command was while I was typing this. Anyway, we end up with two files:

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        4/24/2017   3:59 PM         650033 pickles.cab
-a----        4/24/2017   3:58 PM         524288 pickles.etl

But for now we only care about the .etl one.

Converting file to something less proprietary

Note: I deleted my original capture after I posted this and forgot to take screen captures, so I had to add the images later.

Unfortunately, this time we will need to download something called Microsoft Message Analyzer. In my defense, it is (currently) a free Microsoft product. You will need to install it as admin; otherwise it will not allow everyone on the machine to run it, as the error message states:

Thing is, I would rather have only me able to run it, but I am not given that option, as implied in the above image. But I digress.

So do install it and then run it. It will take some time to load everything up and be ready for business.

The way Message Analyzer shows packets is different from Wireshark's, which does not mean it is bad. But all I want is to convert the capture, so we open the file pickles.etl.

As I said, it does look different from wireshark. But I know wireshark better, so let's do some exporting: hit File->Save As and you will then be able to export it:

Save it as a .pcap file and then wireshark will be happy. Yes, I know we ended up installing an extra program after all, but that can be done on our desktop, not on the machine where we did the packet acquisition. Would you agree we have a working solution that met our requirements?

Tuesday, March 07, 2017

Setting up zabbix using official instructions and repo: Step 1 we ain't there yet

So I am installing Zabbix. Why? Well, you probably know; if not, we can talk about it in a different article. Yes, I am testing my ansible playbook in a docker container, but right now that too is not important. The How I Did It will be in a different article. This article is the Everything That Went Wrong and How I Got Around It one. Think of it as insight into how I deal with being clueless; laughing at my expense is acceptable and maybe even recommended.

I want to install the latest version of Zabbix on a CentOS 7 host, so I will be using the official zabbix 3.2 install docs, which were the most current when I wrote this article. For now I will be lazy and use the mysql version since it is faster to set up; we can revisit that later.

Dependencies

  1. Need the repo. Per the official Zabbix instructions, I am using the official Zabbix repo, which as of the time of this writing can be obtained by

    rpm -ivh http://repo.zabbix.com/zabbix/3.2/rhel/7/x86_64/zabbix-release-3.2-1.el7.noarch.rpm

    I did write a script to get the latest rpm, but it is not important right now. Now, if you are curious, here is the repo config file:

    [root@zabbix ~]# cat /etc/yum.repos.d/zabbix.repo 
    [zabbix]
    name=Zabbix Official Repository - $basearch
    baseurl=http://repo.zabbix.com/zabbix/3.2/rhel/7/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
    
    [zabbix-non-supported]
    name=Zabbix Official Repository non-supported - $basearch 
    baseurl=http://repo.zabbix.com/non-supported/rhel/7/$basearch/
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX
    gpgcheck=1
    [root@zabbix ~]# 

    Before you ask, I am posting it here on purpose, for a reason which will become clearer later.

  2. The database server. Thanks to IRC user leManu for enlightening me: we are not supposed to install the mysql (or whatever db) server on the machine that will run the zabbix server. With that said, the line

    mysql> grant all privileges on zabbix.* to zabbix@localhost identified by '';

    in the official docs has a very localhost feel to it.

    We had better build the db server first, then go there and create the zabbix user, tying it to the IP of the zabbix server. I used mariadb, grabbed the required info -- FQDN, port, zabbix password -- and came back to the zabbix server.
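    For the record, on the mariadb host that remote-friendly version might look something like this (a sketch; 192.0.2.10 is a placeholder for your zabbix server's IP, and 'changeme' for a real password):

```sql
CREATE DATABASE zabbix CHARACTER SET utf8 COLLATE utf8_bin;
GRANT ALL PRIVILEGES ON zabbix.* TO zabbix@'192.0.2.10' IDENTIFIED BY 'changeme';
FLUSH PRIVILEGES;
```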

  3. Packages. We have the repo set up, and the database server info on standby. We might as well start installing zabbix itself, right?

    [root@zabbix ~]# yum install zabbix-server-mysql zabbix-web-mysql 
    [...]
    --> Finished Dependency Resolution
    Error: Package: zabbix-server-mysql-3.2.4-2.el7.x86_64 (zabbix)
               Requires: fping
    Error: Package: zabbix-server-mysql-3.2.4-2.el7.x86_64 (zabbix)
               Requires: libiksemel.so.3()(64bit)
     You could try using --skip-broken to work around the problem
     You could try running: rpm -Va --nofiles --nodigest
    [root@zabbix ~]# 

    Bummer. Why didn't it grab them from the normal centos repo? I guess it does not have them and we would need to fetch them from another repo. But, before we add one, do you remember the file /etc/yum.repos.d/zabbix.repo, whose contents we pasted earlier? It has a zabbix-non-supported entry; how about taking a quick look there?

    [root@zabbix ~]# yum whatprovides */fping --enablerepo=zabbix-non-supported
    Loaded plugins: fastestmirror, ovl
    Loading mirror speeds from cached hostfile
     * base: mirrors.gigenet.com
     * extras: mirror.keystealth.org
     * updates: mirror.umd.edu
    fping-3.10-1.el7.x86_64 : Scriptable, parallelized ping-like utility
    Repo        : zabbix-non-supported
    Matched from:
    Filename    : /usr/sbin/fping
    
    
    
    fping-3.10-1.el7.x86_64 : Scriptable, parallelized ping-like utility
    Repo        : @zabbix-non-supported
    Matched from:
    Filename    : /usr/sbin/fping
    
    
    
    [root@zabbix ~]# 

    Short version: grab the two packages we need from it already:

    yum install fping iksemel --enablerepo=zabbix-non-supported
  4. Missing setup file (thanks Yum!). Per the docs, we are now supposed to grab a file called /usr/share/doc/zabbix-server-mysql-3.2.4/create.sql.gz and use it to initially populate the zabbix database. Thing is, I can't find that (/usr/share/doc/zabbix-server-mysql-3.2.4) directory, much less the file:

    [root@zabbix ~]# ls  /usr/share/doc/
    coreutils-8.22    pam-1.1.8             python-pycurl-7.19.0
    gnupg2-2.0.22     pygpgme-0.3           unixODBC-2.3.1
    krb5-libs-1.13.2  python-kitchen-1.1.1  zabbix-release-3.2
    [root@zabbix ~]# 

    Maybe /usr/share/doc/zabbix-release-3.2/ is the directory and the docs were off? I will have to expertly crush your hopes:

    [root@zabbix ~]# ls  /usr/share/doc/zabbix-release-3.2/
    GPL
    [root@zabbix ~]# ls -l  /usr/share/doc/zabbix-release-3.2/
    total 20
    -rw-r--r-- 1 root root 18385 Feb 15  2016 GPL
    [root@zabbix ~]# head -10  /usr/share/doc/zabbix-release-3.2/GPL 
    *****************************************************************************
    The following copyright applies to the Red Hat Linux compilation and any 
    portions of Red Hat Linux it does not conflict with. Whenever this
    policy does conflict with the copyright of any individual portion of Red Hat 
    Linux, it does not apply.
    
    *****************************************************************************
    
                        GNU GENERAL PUBLIC LICENSE
                           Version 2, June 1991
    [root@zabbix ~]# 

    Maybe it is somewhere else? Nope.

    [root@zabbix ~]# find / -name create.sql.gz -print
    [root@zabbix ~]# 

    So, where's it? Hey, don't look at me like that. I too have no idea. Let's grab the rpm and then take a look at it:

    root@zabbix:/tmp$ rpm -qlp zabbix-server-mysql-3.2.4-2.el7.x86_64.rpm | grep create.sql.gz
    warning: zabbix-server-mysql-3.2.4-2.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 79ea5ed4: NOKEY
    /usr/share/doc/zabbix-server-mysql-3.2.4/create.sql.gz
    root@zabbix:/tmp$ 

    That is the version we installed, right?

    [root@zabbix ~]# rpm -q zabbix-server-mysql
    zabbix-server-mysql-3.2.4-2.el7.x86_64
    [root@zabbix ~]#

    Looks like it. And yum's log, the /var/log/yum.log file, thinks so too:

    Mar 07 14:30:20 Installed: zabbix-web-mysql-3.2.4-2.el7.noarch
    Mar 07 14:30:21 Installed: zabbix-web-3.2.4-2.el7.noarch
    Mar 07 14:30:22 Installed: zabbix-server-mysql-3.2.4-2.el7.x86_64

    This really does not make sense. Let me look again at the contents of the installed package, not at the rpm:

    [root@zabbix ~]# rpm -qlv zabbix-server-mysql
    -rw-r--r--    1 root    root                      132 Mar  2 14:55 /etc/logrotate.d/zabbix-server
    -rw-r-----    1 root    zabbix                  14876 Mar  2 14:55 /etc/zabbix/zabbix_server.conf
    -rw-r--r--    1 root    root                      415 Mar  2 14:29 /usr/lib/systemd/system/zabbix-server.service
    -rw-r--r--    1 root    root                       35 Mar  2 14:29 /usr/lib/tmpfiles.d/zabbix-server.conf
    drwxr-xr-x    2 root    root                        0 Mar  2 14:55 /usr/lib/zabbix/alertscripts
    drwxr-xr-x    2 root    root                        0 Mar  2 14:55 /usr/lib/zabbix/externalscripts
    -rwxr-xr-x    1 root    root                  2220064 Mar  2 14:55 /usr/sbin/zabbix_server_mysql
    drwxr-xr-x    2 root    root                        0 Mar  2 14:55 /usr/share/doc/zabbix-server-mysql-3.2.4
    -rw-r--r--    1 root    root                       98 Feb 27 09:22 /usr/share/doc/zabbix-server-mysql-3.2.4/AUTHORS
    -rw-r--r--    1 root    root                    17990 Feb 27 09:23 /usr/share/doc/zabbix-server-mysql-3.2.4/COPYING
    -rw-r--r--    1 root    root                   742520 Feb 27 09:22 /usr/share/doc/zabbix-server-mysql-3.2.4/ChangeLog
    -rw-r--r--    1 root    root                       52 Feb 27 09:24 /usr/share/doc/zabbix-server-mysql-3.2.4/NEWS
    -rw-r--r--    1 root    root                      188 Feb 27 09:22 /usr/share/doc/zabbix-server-mysql-3.2.4/README
    -rw-r--r--    1 root    root                  1161488 Mar  2 14:49 /usr/share/doc/zabbix-server-mysql-3.2.4/create.sql.gz
    -rw-r--r--    1 root    root                      881 Mar  2 14:55 /usr/share/man/man8/zabbix_server.8.gz
    drwxr-xr-x    2 zabbix  zabbix                      0 Mar  2 14:55 /var/log/zabbix
    drwxr-xr-x    2 zabbix  zabbix                      0 Mar  2 14:55 /var/run/zabbix
    [root@zabbix ~]# ls /usr/share/doc/zabbix-server-mysql-3.2.4/create.sql.gz
    ls: cannot access /usr/share/doc/zabbix-server-mysql-3.2.4/create.sql.gz: No such file or directory
    [root@zabbix ~]# 

    It turns out (kudos to irc user TrevorH for pointing that out) that yum is configured not to install docs

    [root@zabbix ~]# grep -ir tsflags /etc/yum.*
    /etc/yum.conf:tsflags=nodocs
    [root@zabbix ~]# 

    Let's comment it out then and try again

    [root@zabbix ~]# sed -i -e 's/^tsflags=nodocs/#tsflags=nodocs/' /etc/yum.conf
    [root@zabbix ~]# yum reinstall zabbix-server-mysql zabbix-web-mysql --enablerepo=zabbix
    Loaded plugins: fastestmirror, ovl
    Loading mirror speeds from cached hostfile
     * base: dist1.800hosting.com
     * extras: mirror.eboundhost.com
     * updates: mirror.es.its.nyu.edu
    Resolving Dependencies
    --> Running transaction check
    ---> Package zabbix-server-mysql.x86_64 0:3.2.4-2.el7 will be reinstalled
    ---> Package zabbix-web-mysql.noarch 0:3.2.4-2.el7 will be reinstalled
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================
     Package                   Arch         Version              Repository    Size
    ================================================================================
    Reinstalling:
     zabbix-server-mysql       x86_64       3.2.4-2.el7          zabbix       1.8 M
     zabbix-web-mysql          noarch       3.2.4-2.el7          zabbix       5.1 k
    
    Transaction Summary
    ================================================================================
    Reinstall  2 Packages
    
    Total download size: 1.8 M
    Installed size: 4.0 M
    Is this ok [y/d/N]: y
    Downloading packages:
    (1/2): zabbix-web-mysql-3.2.4-2.el7.noarch.rpm             | 5.1 kB   00:00     
    (2/2): zabbix-server-mysql-3.2.4-2.el7.x86_64.rpm          | 1.8 MB   00:01     
    --------------------------------------------------------------------------------
    Total                                              1.7 MB/s | 1.8 MB  00:01     
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : zabbix-web-mysql-3.2.4-2.el7.noarch                          1/2 
      Installing : zabbix-server-mysql-3.2.4-2.el7.x86_64                       2/2 
      Verifying  : zabbix-server-mysql-3.2.4-2.el7.x86_64                       1/2 
      Verifying  : zabbix-web-mysql-3.2.4-2.el7.noarch                          2/2 
    
    Installed:
      zabbix-server-mysql.x86_64 0:3.2.4-2.el7                                      
      zabbix-web-mysql.noarch 0:3.2.4-2.el7                                         
    
    Complete!
    [root@zabbix ~]# ls /usr/share/doc/
    coreutils-8.22    pygpgme-0.3           zabbix-release-3.2
    gnupg2-2.0.22     python-kitchen-1.1.1  zabbix-server-mysql-3.2.4
    krb5-libs-1.13.2  python-pycurl-7.19.0
    pam-1.1.8         unixODBC-2.3.1
    [root@zabbix ~]# ls /usr/share/doc/zabbix-server-mysql-3.2.4/
    AUTHORS  ChangeLog  COPYING  create.sql.gz  NEWS  README
    [root@zabbix ~]# 

    Success at last!
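    Closing the loop on step 4: once the file exists, feeding it to the remote database goes something like this (a sketch; dbhost.example.com is a placeholder for your mariadb server, and you will be prompted for the zabbix user's password):

```
zcat /usr/share/doc/zabbix-server-mysql-3.2.4/create.sql.gz | mysql -h dbhost.example.com -u zabbix -p zabbix
```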

I think that is enough for one article. If you expect this to have any closure or redeeming message, I have news for you, sunshine. Just hope that the next zabbix article will talk about actually getting it installed, configured, and running. But I make no guarantees.

Sunday, March 05, 2017

Checking if you are running redhat or centos or ubuntu or neither

So I wanted to make a script that would behave differently depending on whether we are running RedHat, CentOS, or Ubuntu. The findings here can probably be applied to other distros, but we need to start somewhere.

  1. Using lsb_release. I have been told before that the proper way to detect the OS/distro version is to use lsb_release. So, something like

    distro=$(lsb_release -i | awk '{ print $3}' | tr 'A-Z' 'a-z')

    should do the trick. Of course, that implies it is installed, which might not be the case depending on how barebones your install is (less is more in my book). So, for our next trick, let's assume we do not have it installed.

  2. Without lsb_release. It might come as a shock to some, but it is possible to find a Linux install without it... and also without a word processor, games, and even web browsers. Like on servers. How would we find out which distro we have?

    1. RedHat and derivatives have the /etc/redhat-release file. It is easy to tell whether it is redhat or centos because it is written in the file itself.

      distro=$([ -f /etc/redhat-release ] && echo rhel )
      distro=$(grep -qi "red hat" /etc/redhat-release && echo rhel || echo centos )

      But Ubuntu does not have that file. Back to the drawing board.

    2. uname -v works on ubuntu

      raub@desktop:/tmp$ uname -v
      #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017
      raub@desktop:/tmp$

      But not on centos or redhat

      [raub@vmguest ~]$ uname -v
      #1 SMP Tue Aug 23 19:58:13 UTC 2016
      [raub@vmguest ~]$

      Come on! We can do better than that!

    3. /etc/issue seems to have the most potential

      [raub@vmguest ~]$ cat /etc/issue
      CentOS release 6.8 (Final)
      Kernel \r on an \m
      
      [raub@vmguest ~]$

      and on ubuntu

      raub@desktop:/tmp$ cat /etc/issue
      Ubuntu 16.04.1 LTS \n \l
      
      raub@desktop:/tmp$

      I think we got our winner
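Putting the pieces above together, a detector could look roughly like this (a sketch only; get_distro and issue_distro are names I just made up, and the /etc/issue parsing only grabs the first word):

```shell
#!/bin/sh
# issue_distro FILE: print the lowercased first word of an /etc/issue-style
# file, e.g. "CentOS release 6.8 (Final)" -> centos, "Ubuntu 16.04.1" -> ubuntu
issue_distro() {
    awk '{ print tolower($1); exit }' "$1"
}

# get_distro: try lsb_release first, then /etc/redhat-release, then /etc/issue
get_distro() {
    if command -v lsb_release >/dev/null 2>&1; then
        lsb_release -i | awk '{ print $3 }' | tr 'A-Z' 'a-z'
    elif [ -f /etc/redhat-release ]; then
        # redhat-release exists on both RHEL and CentOS; its text says which
        grep -qi "red hat" /etc/redhat-release && echo rhel || echo centos
    elif [ -f /etc/issue ]; then
        issue_distro /etc/issue
    else
        echo unknown
    fi
}
```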

Monday, February 27, 2017

Create output filename based on input filename using powershell

Here's a situation that happened to me many times in Linux: let's say we create a script which expects the user to enter the input and output filenames. Now, what if the user forgets to enter the output filename? Should we bark or come up with a filename based on the input one? That of course depends on the situation, but when I did decide to create the output filename, I would tack today's date onto the input filename so they would differ.

But that was Linux, and bash and python, and this is Windows with PowerShell. And, yes, we could keep on writing in bash using cygwin, but that would be cheating. Given the constraint of only running what comes with Windows 7 and above (I am dating myself), let's see what we can do:
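For reference, the bash version I alluded to looked more or less like this (a from-memory sketch; lazy_output_filename is just a name I picked, and it assumes the input filename has an extension):

```shell
#!/bin/bash
# lazy_output_filename FILE: build an output name by tacking today's date
# (YYYYMMDD) between the base name and the extension, in the same directory.
# Unlike the PowerShell version below, this does not require the file to exist.
lazy_output_filename() {
    local ifile=$1
    local dir base name ext
    dir=$(dirname "$ifile")
    base=$(basename "$ifile")
    name="${base%.*}"    # strip the (last) extension
    ext="${base##*.}"    # keep the extension; assumes the file has one
    echo "${dir}/${name}_$(date +%Y%m%d).${ext}"
}
```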

  1. Date. The date formats I like are (using Linux here, so focus on the output not the command)

    raub@desktop:~$ date +%F
    2017-01-30
    raub@desktop:~$ 
    and
    raub@desktop:~$ date +%Y%m%d
    20170130
    raub@desktop:~$ 

    Both write the year using 4 digits, followed by 2 digits for the month and two for the day. I know some people will cry and moan and demand the traditional US format, month/day/year, but the format I like makes sorting in a directory much easier, since what changes fastest goes at the end. But we are talking about PowerShell, not bash or Bourne shell. True, but now we know what we want to accomplish.

    To make it easier, I will pick one of the two formats -- YYYYMMDD -- and run with it; you can later modify the code to use the other one as an exercise. The equivalent in PowerShell is:

    PS C:\Users\raub> get-date -format "yyyyMMdd"
    20170130
    PS C:\Users\raub> 

    Looks just like what we did above in Linux.

  2. Create filename if not given. We are reading the input filename into the script in some way or fashion. How we do it depends on the script and whether we should be passing options with or without flags to identify them. For now, we are going to be lazy and do the simplest thing possible: using param() at the beginning of the script.

    param($inputFile, $outputFile)

    If we have just one argument, it shall be the inputfile. If two, the second one is the outputfile. What if no arguments are passed? We can just send an error message and get out.

    function Usage
    {
       Write-Host "Usage:", $MyInvocation.MyCommand.Name, "inputfile [outputfile]"
       exit
    }
    
    if (!$inputfile)
    {
       Usage
    }

    The $MyInvocation.MyCommand.Name is a lazy way for the script to get its own name.

  3. Do something if there is no $outputFile. This is a variation of the same test we did to see if we had an $inputFile:

    function LazyOutputFilename($ifile)
    {
       $ofile = (Get-Item $ifile ).DirectoryName + '\' +  `
                (Get-Item $ifile ).BaseName + `
                '_' + (get-date -format "yyyyMMdd") + `
                (Get-Item $ifile ).Extension
       return $ofile
    }
    
    function GetOutputFilename($ifile, $ofile)
    {
       # $ofile cannot be $ifile
       # Create a $ofile if one was not given
       if (( [string]::IsNullOrEmpty($ofile) ) -or ( $ofile -eq $ifile ))
       {  
          $ofile = LazyOutputFilename $ifile
       }
    
       return $ofile
    }
    
    $outputFile = GetOutputFilename $inputFile $outputFile
    • In LazyOutputFilename() we are creating the output filename. We are putting it in the same directory as the input filename and then adding the formatted date right before the file extension.

    • The ( [string]::IsNullOrEmpty($ofile) ) checks whether the output file, called $ofile inside this function, is empty. The reason we also want to make sure the output file is not the input file is that we might be reading the input file a chunk at a time (line by line, if text) so the script can handle large files without using up all the memory. If we are reading it line by line and writing right back to it, bad things might happen.

    • And, yes, we are overwriting the output filename if it gets changed in GetOutputFilename().

  4. Put everything together.

    param($inputFile, $outputFile)
    
    function Usage
    {
       Write-Host "Usage:", $MyInvocation.MyCommand.Name, "inputfile [outputfile]"
       exit
    }
    
    <#
     Create output filename based on the full path of the input filename +
     today's date appended somewhere
     #>
    function LazyOutputFilename($ifile)
    {
       $ofile = (Get-Item $ifile ).DirectoryName + '\' +  `
                (Get-Item $ifile ).BaseName + `
                '_' + (get-date -format "yyyyMMdd") + `
                (Get-Item $ifile ).Extension
       return $ofile
    }
    
    function GetOutputFilename($ifile, $ofile)
    {
       # $ofile cannot be $ifile
       # Create a $ofile if one was not given
       if (( [string]::IsNullOrEmpty($ofile) ) -or ( $ofile -eq $ifile ))
       {  
          $ofile = LazyOutputFilename $ifile
       }
    
       return $ofile
    }
    
    if (!$inputfile)
    {
       Usage
    }
    
    $outputFile = GetOutputFilename $inputFile $outputFile

Monday, January 30, 2017

Using tcpdump to see vlan traffic in xenserver

Short version: it is a bit convoluted, bordering on the Rube Goldberg domain.

If you are in a hurry, you can now move onto something more interesting. If you instead want to hear me ranting and typing annoying commands, read on.

If we are talking about building servers to virtualize hosts, we will end up talking about multiple networks organized in vlans which then need to be fed to this VM server. Of course, running an 802.1q trunk sometimes does not work perfectly, so we need to be prepared to look behind the curtain. If the VM server runs Linux, say KVM or Xen, we can unleash tcpdump just like we did when diagnosing trunking issues with a router. What about the XenServer you mentioned in the title of this article? I thought it was Linux based. Good question. Very good question. I started this assuming it would be just Linux business as usual. You know what they say about assuming.

What I found out is that you cannot just use eth0.12 to look at vlan 12 like you would in KVM. In fact, xenserver does not use /proc/net/vlan. It just ain't there:

[root@thexen ~]# ls /proc/net/
anycast6      ip6_flowlabel        netfilter            route         tcp
arp           ip_conntrack         netlink              rpc           tcp6
dev           ip_conntrack_expect  netstat              rt6_stats     udp
dev_mcast     ip_mr_cache          nf_conntrack         rt_acct       udp6
dev_snmp6     ip_mr_vif            nf_conntrack_expect  rt_cache      udplite
fib_trie      ip_tables_matches    packet               snmp          udplite6
fib_triestat  ip_tables_names      protocols            snmp6         unix
icmp          ip_tables_targets    psched               sockstat
if_inet6      ipv6_route           ptype                sockstat6
igmp          mcfilter             raw                  softnet_stat
igmp6         mcfilter6            raw6                 stat
[root@thexen ~]#

You see, there is no eth0.12 defined here; by default xenserver will try to configure all available network cards (NICs, for those craving acronyms) in a managed (by xenserver) mode. Once they are added, it creates bridges, called xapiN, and then associates them with each network. And how do we find out which of those bridges is being used by our vlan? Er, it requires a few steps using the xe commands (xe something-or-another) which I have not found out how to automate yet.

  1. We begin by finding out which vlans are defined in this server. And that can be done using xe pif-list:
    [root@thexen ~]# xe pif-list
    uuid ( RO)                  : 540f3b24-0606-6380-c10c-c2f8c2f4c2ce
                    device ( RO): eth1
        currently-attached ( RO): true
                      VLAN ( RO): 2
              network-uuid ( RO): a874cb50-1c87-0bde-390d-66d0a4e1576c
    
    
    uuid ( RO)                  : 8684e63f-3d1c-241b-8e75-3b2e37f8c859
                    device ( RO): eth0
        currently-attached ( RO): true
                      VLAN ( RO): -1
              network-uuid ( RO): ed2325c5-1f3b-7f25-6104-61902a13d3ac
    
    
    uuid ( RO)                  : 82b9dee1-52db-a6ae-cb42-11ae7f6d3d25
                    device ( RO): eth1
        currently-attached ( RO): true
                      VLAN ( RO): -1
              network-uuid ( RO): 993c9237-5961-9808-36cd-729827e005d8
    
    uuid ( RO)                  : 592339bf-cc03-4048-9075-946f5bcc47fb
                    device ( RO): eth1
        currently-attached ( RO): true
                      VLAN ( RO): 12
              network-uuid ( RO): 0d28f847-3da6-11f3-3600-8a033435168c
    
    
    uuid ( RO)                  : 3d60399c-bb8d-5e5a-e01b-8986b8808f12
                    device ( RO): eth0
        currently-attached ( RO): true
                      VLAN ( RO): 3
              network-uuid ( RO): c8726e09-a0a5-b026-013e-2c5edd5062b3
    
    
    uuid ( RO)                  : 19f1fe37-16d1-6fcd-4bbd-4e566abc74c4
                    device ( RO): eth0
        currently-attached ( RO): true
                      VLAN ( RO): 8
              network-uuid ( RO): 2f94b1c8-be16-14d5-a149-90ae35528c22
    
    
    uuid ( RO)                  : 6df7b741-9cef-d34f-e487-fa2abe422068
                    device ( RO): eth1
        currently-attached ( RO): true
                      VLAN ( RO): 100
              network-uuid ( RO): 9ec62435-ec2a-2bfc-9f29-2ea5c9756971
    
    
    [root@thexen ~]#

    What can we gather from this output:

    • This machine has 2 physical interfaces, eth0 and eth1, and each of them has a few vlans going through them. So, there are two 802.1q trunks. Deal with it.
    • Each entry has two uuids: the interface uuid (the one after "uuid ( RO)") and a network-uuid.
    • If we only wanted to see the VLAN number, both the uuids, and the physical NIC/device each virtual interface is using, we could have instead said
      xe pif-list params=device,VLAN,network-uuid,uuid

      But if we wanted to know everything about each virtual interface,

      xe pif-list params=all
    • To get more info on a given interface (or bridge) you need the uuid associated with uuid ( RO). So if you wanted to know everything about VLAN 100, you could say
      xe pif-list uuid=6df7b741-9cef-d34f-e487-fa2abe422068 params=all
    • The ones with VLAN ( RO): -1 are the untagged networks; we have one per interface even if we do not have it defined.
OK, smart guy, how do we go from this to that crazy xapiN interface? Oh, you mean what xenserver calls a bridge? We shall use the xe network-list command. If you run it, it will tell you which xapiN is associated with which vlan. It will also show which bridge is being used for the console of this xenserver, which usually is an untagged vlan. There are ways to make that a tagged vlan, but that will be for another episode. What is important is that the uuid being shown is the network-uuid we got using xe pif-list. And we can feed it to the network-list command if we just care about, say, VLAN 12:

[root@thexen ~]# xe network-list uuid=0d28f847-3da6-11f3-3600-8a033435168c params=bridge,name-label
name-label ( RW)    : vlan 12
        bridge ( RO): xapi4


[root@thexen ~]#

We finally found out that vlan 12 is attached to xapi4. Time for some tcpdumping:

[root@thexen ~]# tcpdump -i xapi4 -e
tcpdump: WARNING: xapi4: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on xapi4, link-type EN10MB (Ethernet), capture size 65535 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
[root@thexen ~]#

Crickets

What is going on here? I let it run for 10s; that should have been enough to fill the screen. Let's try again, letting it run for longer (4 minutes?) while we do, say, a tracepath to the gateway.

[root@thexen ~]# tcpdump -i xapi4 -e -n
tcpdump: WARNING: xapi4: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on xapi4, link-type EN10MB (Ethernet), capture size 65535 bytes
10:46:10.696176 c0:ff:ee:70:63:a4 > Broadcast, ethertype ARP (0x0806), length 42
: Request who-has 192.168.12.241 tell 192.168.12.242, length 28
10:46:11.698310 c0:ff:ee:70:63:a4 > Broadcast, ethertype ARP (0x0806), length 42
: Request who-has 192.168.12.241 tell 192.168.12.242, length 28
10:46:12.700355 c0:ff:ee:70:63:a4 > Broadcast, ethertype ARP (0x0806), length 42
: Request who-has 192.168.12.241 tell 192.168.12.242, length 28
10:46:13.705644 c0:ff:ee:70:63:a4 > Broadcast, ethertype ARP (0x0806), length 42
: Request who-has 192.168.12.241 tell 192.168.12.242, length 28
10:46:14.706364 c0:ff:ee:70:63:a4 > Broadcast, ethertype ARP (0x0806), length 42
: Request who-has 192.168.12.241 tell 192.168.12.242, length 28
10:46:15.708375 c0:ff:ee:70:63:a4 > Broadcast, ethertype ARP (0x0806), length 42
: Request who-has 192.168.12.241 tell 192.168.12.242, length 28
10:46:18.710913 c0:ff:ee:70:63:a4 > Broadcast, ethertype ARP (0x0806), length 42
: Request who-has 192.168.12.241 tell 192.168.12.242, length 28
[...]
: Request who-has 192.168.12.241 tell 192.168.12.242, length 28
10:46:30.724414 c0:ff:ee:70:63:a4 > Broadcast, ethertype ARP (0x0806), length 42
: Request who-has 192.168.12.241 tell 192.168.12.242, length 28

^C
15 packets captured
15 packets received by filter
0 packets dropped by kernel
[root@thexen ~]#

Fifteen packets in four minutes? I could have done that by hand! What is going on here?

But it really does not work as well as it should

Do I sound bitter? I am just being factual. I can't use tcpdump when this bridge thing is only giving me like 4 packets a minute. Even if there were no talking between servers, the ARP requests alone should have been more frequent. So, we need to rethink this.

We did the proper thing so far. Now it's time to cheat.

We know the network associated with vlan 12 is 192.168.12.0/24, so why not tell tcpdump to look at eth1 for anything that matches that?

tcpdump -l -i eth1 -e -n | grep '192\.168\.12\.'

I will not bother to show the output of that, but it will look much more like what we would have expected (the -l makes tcpdump line-buffer its output, which you want when piping it into grep, and the escaped dots keep grep from matching things like 192.168.120). Of course, it will only get traffic on that wire matching that pattern, so if you have a host trying to reach a DHCP server on that network you will not detect it. Nor would it find IPv6 traffic (you would need to feed it the proper pattern). But it is better than using xapi4.

Wednesday, January 11, 2017

Creating extended ASCII file in bash and maybe powershell

Does anyone remember extended ASCII (as opposed to UTF-8)? If you have never heard of it, we are not talking about a proper character set that supports Russian or Japanese. All we are dealing with here is iso-8859-1, whose table can be found here.

I have a document in that format which I need to convert to something else; if this reminds you of the 8bitmime issues we talked about before, well, let's just say we could have used this to create the test file. With that said, the current situation is that I wrote a script to manipulate it which does not preserve that format; we can talk about that in a future post.

Bottom line is I need to create a small test file that I can later throw, together with the script output, at hexdump.

My test file will have only 3 lines:

Olivenöl
Bayerstraße 22
München

Nothing fancy; just enough to use one extended ASCII character per line. Now let's try to create the little file. Just to be different, instead of starting on Linux we will do most of the attempts in OSX. Once we have a working system, we can see if it also works on Linux.

Attempt #1

How about if we do the lazy thing and just cut-n-paste the 3 lines above into a text file we opened using vim, notepad++, or some pico clone? Done. Now let's see what it looks like:

bash-3.2$ cat /tmp/chartest 
Olivenöl
Bayerstraße 22
München
bash-3.2$ 

That looks very promising. In fact, this might end up being a very short article. Before I publish it, should we see what hexdump thinks of it?

bash-3.2$ hexdump -Cv /tmp/chartest 
00000000  4f 6c 69 76 65 6e c3 b6  6c 0d 0a 42 61 79 65 72  |Oliven..l..Bayer|
00000010  73 74 72 61 c3 9f 65 20  32 32 0d 0a 4d c3 bc 6e  |stra..e 22..M..n|
00000020  63 68 65 6e 0d 0a                                 |chen..|
00000026
bash-3.2$ 

Correct me if I am wrong, but it seems each extended ASCII character is taking two bytes to be represented instead of just one. For instance, ö is being represented by two bytes, 0xC3 0xB6. That sounds more like UTF-8/Unicode/whatever (if you want to know what to look for, the Latin-1 characters we care about all start with a 0xC3 byte in UTF-8), not extended ASCII. Also, it is using carriage return (CR, 0x0D in hexadecimal) and line feed (LF, 0x0A) characters to separate the lines. That is very Windowsy, not OSX/Linux style, where lines are separated by the line feed (0x0A) character alone.
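By the way, if the goal were only to convert an already-pasted UTF-8 file rather than recreate it, iconv (which ships with both OSX and Linux) can transcode between the two encodings. A minimal sketch, with made-up file names just for the example:

```shell
# Write the UTF-8 version of the first line (0xC3 0xB6 is ö in UTF-8),
# then transcode it to iso-8859-1 and inspect the result.
printf 'Oliven\303\266l\n' > /tmp/chartest-utf8
iconv -f UTF-8 -t ISO-8859-1 /tmp/chartest-utf8 > /tmp/chartest-latin1
hexdump -Cv /tmp/chartest-latin1
# ö is now the single byte f6:
# 00000000  4f 6c 69 76 65 6e f6 6c  0a                       |Oliven.l.|
```
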

Attempt #2

What if we paste the lines onto the terminal and use echo to write that to the test file? Well, let's make a single-line test file and see what happens:

bash-3.2$ echo "Olivenöl" > /tmp/chartest 
bash-3.2$ hexdump -Cv /tmp/chartest 
00000000  4f 6c 69 76 65 6e c3 b6  6c 0a                    |Oliven..l.|
0000000a
bash-3.2$ 

Still using two bytes to represent ö; at least it did not add a CR. Now this is getting annoying; is there anything else we can try?

Attempt #3

It turns out there is, and we do not need any extra stuff. You see, echo has this -e option that allows you to pass a character by its hexadecimal code. From the extended ASCII table we know that ö = 0xF6 and ß = 0xDF (just to pick two examples). We also know that CRLF = 0x0D 0x0A. I know I whined about that before, but the reason is I want to decide when to use those characters, as opposed to having some program or script make the choice for me.

Let's try again, this time passing the extended ASCII characters explicitly:

bash-3.2$ echo -e "Oliven\xf6l\x0d\x0aBayerstra\xdfe 22\x0d\x0aM\xfcnchen\x0d" > /tmp/chartest
bash-3.2$ 
bash-3.2$ cat /tmp/chartest 
Oliven�l
Bayerstra�e 22
M�nchen
bash-3.2$ hexdump -Cv /tmp/chartest 
00000000  4f 6c 69 76 65 6e f6 6c  0d 0a 42 61 79 65 72 73  |Oliven.l..Bayers|
00000010  74 72 61 df 65 20 32 32  0d 0a 4d fc 6e 63 68 65  |tra.e 22..M.nche|
00000020  6e 0d 0a                                          |n..|
00000023
bash-3.2$ 

That's more like it: only one byte is used to represent each character in the file. Isn't it interesting that when we cat the file, the terminal (which expects UTF-8) replaces the extended characters with �? But if hexdump says they are there, that is good enough for me.
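One caveat worth hedging on: echo -e is a bashism, and on some systems /bin/sh's echo will happily print the -e literally. printf, which is specified by POSIX, understands octal escapes everywhere, so the same file can be built portably (octal \366 is our hex 0xF6, and so on, straight from the same table):

```shell
# Same bytes as the echo -e version, but using POSIX printf with octal
# escapes: \366 = ö (0xF6), \337 = ß (0xDF), \374 = ü (0xFC),
# \015\012 = CRLF (0x0D 0x0A)
printf 'Oliven\366l\015\012Bayerstra\337e 22\015\012M\374nchen\015\012' > /tmp/chartest
hexdump -Cv /tmp/chartest
```

hexdump should report the exact same 0x23 bytes as before (the echo version's trailing newline is matched here by the final \015\012).
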

What about powershell?

Even though the name of this blog implies Unix, we use enough powershell that we might as well see if we can do the same there. But we have to accept we start with a bit of a handicap: powershell really, really wants to write UTF-8 or Unicode instead of extended ascii/iso-8859-1. Let me show you what I mean by trying to create a small file with just one single word in it, Olivenöl. As we saw before, ö = 0xF6 = 246. And that should still be true in powershell; let's find out:

PS > 'Oliven' + [char]246 + 'l'
Olivenöl
PS >

Looks like we are getting somewhere, right? For our next trick, we will save that to a file (| out-file .\chartest.txt is equivalent to doing > .\chartest.txt):

PS > 'Oliven' + [char]246 + 'l' | out-file .\chartest.txt
PS > cat .\chartest.txt
Olivenöl
PS >

Hey chief! It seems to be working fine? Why are you making such a huge drama about this? That is a very good question. I will let dear old hexdump do the talking:

$ hexdump -Cv chartest.txt
00000000  ff fe 4f 00 6c 00 69 00  76 00 65 00 6e 00 f6 00  |..O.l.i.v.e.n...|
00000010  6c 00 0d 00 0a 00                                 |l.....|
00000016

$

Each character is now represented by 2 bytes, and the file starts with ff fe, a byte-order mark. Smells like Unicode (UTF-16, to be precise), right? OK, smart guy, just force it to save as ASCII then. Will do:

PS > 'Oliven' + [char]246 + 'l' | out-file -encoding ASCII .\chartest.txt
PS > cat .\chartest.txt
Oliven?l
PS >

And hexdump:

$ hexdump -Cv chartest.txt
00000000  4f 6c 69 76 65 6e 3f 6c  0d 0a                    |Oliven?l..|
0000000a

$

It converted the characters into ?. Helpful, isn't it? The Microsoft Scripting Guys forum pretty much tells you to save the file as Unicode or UTF-8 and then convert it somehow. Far be it from me to disagree with them, at least in this article, since it makes for a great cliffhanger. In a future article we will talk about how to get extended ASCII properly in powershell just like we did in bash. It will be a bit longer but doable.