Sunday, December 13, 2020

Yet another post on the demise of CentOS and the ascension of CentOS Stream

A friend of mine has been telling me that 2020 is the year you expect one catastrophe each month: we had the Australian wildfires, the pandemic (which is still going on), murder hornets, and so on. And now it is December!

Excluding those who might have been living inside a cave, or otherwise distracted by real life events, it is fair to assume you know that CentOS as we know it (i.e. as it has existed from before its 2014 acquisition by Red Hat until now) will cease to exist in 2021; it will be replaced with CentOS Stream. I will not comment on the reason, or on whether that was to be expected after the IBM acquisition of Red Hat; there are already plenty of people doing that, the comment section of that centos.org blog entry included. Instead, let's talk about the few things I know. From what I have gathered, its relationship with RHEL has changed a bit: before, changes were made to RHEL and then applied to CentOS; with CentOS Stream the flow is reversed, and Stream sits just upstream of RHEL, receiving changes before they land in a RHEL release.

What to do?

I think that depends on how you feel about this and how much you have tied into CentOS.

  1. Stick with CentOS Stream. This may or may not work; it probably would be wise to wait until, say, the middle of 2021 to see how it plays out.
  2. Upgrade to a RHEL subscription. In their defense, my experience with their paid technical support was very good. I do not know if my experience was unique.
  3. If you have a single private computer running CentOS, you can get a developer license for RHEL.
  4. Switch to another RHEL rebuild. There are a few options here, which in a certain way behave like the CentOS of old.
  5. Switch to another distro. If you do not want any further business with a Red Hat-derived distro, there are options for server duties (Arch Linux: please step away from the line, even though you have excellent docs):
    Linux
    • Debian, a very stable operating system. Its project leader, Jonathan Carter, wrote a blog post about the CentOS demise.
    • Ubuntu, which seems to be the platform of choice for many new developments. Just check which platform code for GPUs and FPGAs is usually written for first.
    • openSUSE. It may not be as popular as Debian and the Red Hat-based distros, but it is a good contender.
    NOT Linux
    • FreeBSD. A very stable UNIX operating system. Its package collection is smaller than Linux's, but this is a proper server operating system. If you like ZFS, you may want to investigate it, or at least FreeNAS (now called TrueNAS).
    • OpenBSD. From the same people who brought us OpenSSH and LibreSSL. When was the last time you heard of people breaking into an OpenBSD box?

And that is all for now. If you expected a nice closing argument, there is none. This is just a change. Think of CentOS as a piece of cheese: it moved; now you have to choose whether you are going to follow it or look for another cheese.

Thursday, July 30, 2020

Variable expansion and searching for packages that contain a file using yum/dnf

I know some articles in this blog are rather clever, but this one is here to remind me (learn from my mistakes!) that understanding how a command thinks is important. I was having some issues with cryptsetup and was told (by Matthew Heon: let me make sure to recognize him for throwing a searchlight at my problem. Thanks!) that the file /usr/share/cracklib/pw_dict.pwd.gz was missing. Fine, this is a CentOS 8 docker container. I can use yum (until they remove it completely) or its replacement, dnf, to look for it. I will be using yum in this discussion, knowing that the two are interchangeable within the limits of this article.

If the file is in the directory /usr/share/cracklib, chances are it belongs to the cracklib package, so let's begin by seeing what we have matching that:

[root@moe /]# yum search cracklib
Failed to set locale, defaulting to C.UTF-8
========================== Name Exactly Matched: cracklib ==========================
cracklib.x86_64 : A password-checking library
cracklib.i686 : A password-checking library
========================= Name & Summary Matched: cracklib =========================
cracklib-dicts.x86_64 : The standard CrackLib dictionaries
[root@moe /]#

Oh, there is more than one match, so we need to be more specific. That is a great job for the whatprovides option; it allows us to find all the packages that contain a given file.

[root@moe /]# yum whatprovides pw_dict.pwd.gz
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 0:00:27 ago on Tue Jul 28 20:32:27 2020.
Error: No Matches found
[root@moe /]#
I had learned that sometimes looking for a package by providing just the filename of a file that belongs to it does not work well, but making the argument look like a path does. And this path can begin with a * so it matches any path on the system. So, let's try that and hope for the best:
[root@moe /]# yum whatprovides */pw_dict.pwd.gz
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 17:43:12 ago on Tue Jul 28 20:32:27 2020.
Error: No Matches found
[root@moe /]#
What is going on? Well, let's go out on a limb: of the three cracklib-related packages, cracklib-dicts seems to be the one with the most potential, because the file we want is a dictionary. Let's install it and then see what lurks in /usr/share/cracklib/:
[root@moe /]# yum install cracklib-dicts
[...]
[root@moe /]# ls /usr/share/cracklib/
cracklib-small.hwm  cracklib-small.pwi  pw_dict.hwm  pw_dict.pwi
cracklib-small.pwd  cracklib.magic      pw_dict.pwd
[root@moe /]#

A candle lights up over my head, indicating I was enlightened: the file is called pw_dict.pwd, not pw_dict.pwd.gz! I was searching for a gzipped name while the package ships the file uncompressed. Well, duh!

With that in mind, we should see if we could have saved some aggravation. We expanded the search path by entering */pw_dict.pwd.gz before; would a wildcard also work in the filename itself? Let's find out:

[root@moe /]# yum whatprovides */pw_dict.pwd
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 17:43:45 ago on Tue Jul 28 20:32:27 2020.
cracklib-dicts-2.9.6-15.el8.x86_64 : The standard CrackLib dictionaries
Repo        : @System
Matched from:
Filename    : /usr/share/cracklib/pw_dict.pwd

cracklib-dicts-2.9.6-15.el8.x86_64 : The standard CrackLib dictionaries
Repo        : BaseOS
Matched from:
Filename    : /usr/share/cracklib/pw_dict.pwd

[root@moe /]# yum whatprovides pw_dict.*
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 17:46:32 ago on Tue Jul 28 20:32:27 2020.
Error: No Matches found
[root@moe /]#
[root@moe /]# yum whatprovides */pw_dict.*
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 17:44:42 ago on Tue Jul 28 20:32:27 2020.
cracklib-dicts-2.9.6-15.el8.x86_64 : The standard CrackLib dictionaries
Repo        : @System
Matched from:
Filename    : /usr/share/cracklib/pw_dict.hwm
Filename    : /usr/share/cracklib/pw_dict.pwd
Filename    : /usr/share/cracklib/pw_dict.pwi

cracklib-dicts-2.9.6-15.el8.x86_64 : The standard CrackLib dictionaries
Repo        : BaseOS
Matched from:
Filename    : /usr/share/cracklib/pw_dict.hwm
Filename    : /usr/share/cracklib/pw_dict.pwd
Filename    : /usr/share/cracklib/pw_dict.pwi

[root@moe /]# yum whatprovides */*_dict.pwd
Failed to set locale, defaulting to C.UTF-8
cracklib-dicts-2.9.6-15.el8.x86_64 : The standard CrackLib dictionaries
Repo        : BaseOS
Matched from:
Filename    : /usr/lib64/cracklib_dict.pwd
Filename    : /usr/share/cracklib/pw_dict.pwd

[root@moe /]#

Interesting that we really do not need to tack a * onto the end of the search pattern. So, what we learned from this article is that if searching for the package a given file belongs to does not work, we can broaden the search by prefixing the pattern with */ and replacing part of the filename with a *. And we do not even need that trailing * when the part of the filename we keep is its ending.
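
Condensing that into a general recipe (libfoo.so here is just a made-up filename for illustration; quoting the glob keeps the shell from expanding it against local files before yum ever sees it):

yum whatprovides '*/libfoo.so*'
# dnf exposes the same search under the name "provides"
dnf provides '*/libfoo.so*'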

Learning something useful in this blog: who would've thought?

Tuesday, January 14, 2020

Updating only the latest/default Linux kernel boot arguments in CentOS/RedHat/Fedora

I think from the title you know what I have in mind. My direct application is adding intel_iommu=on to the kernel in the KVM server I built to replace my VMware ESXi one:

[root@vmhost2 ~]# virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : WARN (IOMMU 
appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
[root@vmhost2 ~]#
The official docs would state that the right way to do it is to use grub2-mkconfig (might require running grub2-install first):

echo 'GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"' >> /etc/default/grub
grub2-mkconfig -o "$(readlink -f /etc/grub2.cfg)"

And then reboot. The problem with that is that it applies intel_iommu=on to every single kernel listed in the grub menu. What if I just want to do that to one of the kernels (in my case the latest)? This way, if something goes boink, I can boot to the grub menu, select the previous kernel, and continue booting.

Well, in previous CentOS versions like 6 and 7, I would

  1. Open the grub.cfg
  2. Find the latest kernel menu entry (the top one)
  3. Find the line that tells which kernel to load for that version
  4. Append the option I wanted to add, say the intel_iommu=on from above:
    linux16 /boot/vmlinuz-3.10.0-957.12.2.el7.x86_64
    root=UUID=1a4cb560-eade-47cd-b1a5-57f8e0f53b8f ro console=tty0 crashkernel=auto
    console=ttyS0,115200 intel_iommu=on
  5. Reboot.
But I can no longer do that, since I cannot find the lines identifying the path to each kernel: CentOS 8 moved to BootLoaderSpec-style boot entries, so grub.cfg no longer lists the kernels directly.
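
For the curious, on a CentOS 8 install the per-kernel boot entries now live in their own files under /boot/loader/entries/ (the exact filenames depend on the machine ID and the kernels installed):

ls /boot/loader/entries/
# one .conf file per menu entry; each has a "linux" line with the kernel path
# and an "options" line holding (or referencing) that entry's boot arguments
cat /boot/loader/entries/*.conf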

Enter the Grubby

Yes, this is a Today I Learned (TIL) event, and as you might have guessed, we are talking about grubby (if you click on the link you will go to its official Red Hat GitHub repo), a command line tool to edit the boot config. I usually try to avoid commands that are just wrappers hiding what is really going on, but this one does seem to be useful (given that I am no longer able to simply edit the config file) and does not seem to require a lot of extra packages. It also came with both the Fedora 31 and CentOS 8 installs I have done. However, when I looked at which distros have prebuilt packages, none of the Debian-derived ones (Ubuntu, Mint, etc.) were listed. It seems Ubuntu prefers update-grub, which is a wrapper around grub-mkconfig, meaning it updates all the listed kernels.

Let's see what we can break:

  1. The man page says that to append arguments to a given kernel, we should run
    grubby --update-kernel=the_kernel --args="kernel_args"
    where the_kernel is the path to the kernel we want to edit. So, where are the kernels hiding, and how do we find out which one is the latest? For the first question, the kernel files are the ones starting with vmlinuz in the /boot directory:
    [root@vmhost2 ~]# ls /boot/
    config-4.18.0-80.11.2.el8_0.x86_64
    config-4.18.0-80.el8.x86_64
    efi
    grub2
    initramfs-0-rescue-133a53b45d2b47168497d47a34dd932f.img
    initramfs-4.18.0-80.11.2.el8_0.x86_64.img
    initramfs-4.18.0-80.11.2.el8_0.x86_64kdump.img
    initramfs-4.18.0-80.el8.x86_64.img
    initramfs-4.18.0-80.el8.x86_64kdump.img
    loader
    lost+found
    System.map-4.18.0-80.11.2.el8_0.x86_64
    System.map-4.18.0-80.el8.x86_64
    vmlinuz-0-rescue-133a53b45d2b47168497d47a34dd932f
    vmlinuz-4.18.0-80.11.2.el8_0.x86_64
    vmlinuz-4.18.0-80.el8.x86_64
    [root@vmhost2 ~]#
  2. Get the path of the latest kernel. From the previous step we know where they are, but now we need a way to identify the latest one. We could write a script... or find in the man page that grubby has an option, --default-kernel,
    [root@vmhost2 ~]# grubby --default-kernel
    /boot/vmlinuz-4.18.0-80.11.2.el8_0.x86_64
    [root@vmhost2 ~]#
    but is this default kernel the same as the latest one (currently in use)? Let's ask the host which kernel is currently running (I did boot vmhost2 with the latest installed kernel):
    [root@vmhost2 ~]# uname -a
    Linux vmhost2 4.18.0-80.11.2.el8_0.x86_64 #1 SMP Tue Sep 24 11:32:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
    [root@vmhost2 ~]#
    Looks to be the case.
  3. Apply the above to update the kernel. Now that we know how to identify the latest kernel, we can run
    grubby --update-kernel=$(grubby --default-kernel) --args="kernel_args"
    to add new arguments to the latest kernel. kernel_args is the space-separated list of arguments we want to feed to it, in the same format you would use if editing the kernel command line manually. For my vmhost2 KVM server, that would be intel_iommu=on. So,
    grubby --update-kernel=$(grubby --default-kernel) --args="intel_iommu=on"
    followed by a reboot should be just fine.
  4. We can do it better. If we are only updating the latest/default kernel, passing DEFAULT as the_kernel does exactly what we want but with less effort on our part:
    grubby --update-kernel DEFAULT --args="intel_iommu=on"

After we reboot, we can log back in and see if intel_iommu=on has been added:

[root@vmhost2 ~]# grubby --info DEFAULT
index=0
kernel="/boot/vmlinuz-4.18.0-80.11.2.el8_0.x86_64"
args="ro crashkernel=auto rd.lvm.lv=vmhost/root rd.lvm.lv=vmhost/usr rhgb quiet $tuned_params "intel_iommu=on""
root="/dev/mapper/vmhost-root"
initrd="/boot/initramfs-4.18.0-80.11.2.el8_0.x86_64.img $tuned_initrd"
title="CentOS Linux (4.18.0-80.11.2.el8_0.x86_64) 8 (Core)"
id="133a53b45d2b47168497d47a34dd932f-4.18.0-80.11.2.el8_0.x86_64"
[root@vmhost2 ~]#

Fine, but did it change only the latest kernel? Let's find out by picking another kernel and asking what's up (I have only two kernels listed, so we pick the other one):

[root@vmhost2 ~]# grubby --info vmlinuz-4.18.0-80.el8.x86_64
index=1
kernel="/boot/vmlinuz-4.18.0-80.el8.x86_64"
args="ro crashkernel=auto rd.lvm.lv=vmhost/root rd.lvm.lv=vmhost/usr rhgb quiet $tuned_params"
root="/dev/mapper/vmhost-root"
initrd="/boot/initramfs-4.18.0-80.el8.x86_64.img $tuned_initrd"
title="CentOS Linux (4.18.0-80.el8.x86_64) 8 (Core)"
id="133a53b45d2b47168497d47a34dd932f-4.18.0-80.el8.x86_64"
[root@vmhost2 ~]#

And now kvm is happy since IOMMU is enabled:

[root@vmhost2 ~]# virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : PASS
[root@vmhost2 ~]#
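
If the new option ever causes trouble and I need to back it out, grubby can also remove arguments from a boot entry; a minimal sketch, using the same DEFAULT shortcut (not something I needed to do here):

# strip intel_iommu=on back out of the default entry, then reboot
grubby --update-kernel DEFAULT --remove-args="intel_iommu=on"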

Final thoughts

It seems they (don't you always wonder who "they" are?) are deprecating/phasing out grubby. And it is not available in Debian/Ubuntu/derivatives. So next time I play with kernel boot options, which will be soon, I will see about using a more generic solution.