Notes about pens

And now for something completely different, let's talk about pens. I'm a big fan of stationery; however, I strongly refrain from buying lots of fancy pens and writing pads because they usually remain unused and take up lots of space.

I wanted to use this post to document some of the pens that work well for me, so that whenever I need to buy replacements I don't spend lots of time figuring out which one was good for what -- or whether I should buy a specific model at all.

Being left-handed means I don't use fountain pens at all - I haven't found a comfortable way of writing with them, plus there's the additional mess they make because the ink is never dry enough by the time my hand moves over it to continue writing.

Anyway, my list of currently useful pens:

  • Uni-Ball Vision RT Black UBN-178. Writes well on my Moleskine; the ink doesn't bleed and dries fast enough that I can write without delays. With a 0.8 tip, the stroke creates a thicker line.

  • Uni-Ball UB-150. With a thinner stroke (0.5), it usually writes well on any paper, including Moleskine.

  • Pilot Hi-Tecpoint V5 Grip. Again, a 0.5 ball creates a thin stroke. The ink in this case usually fades a bit after drying, producing a lighter blue - which is worth taking into account if you really want your text to stand out.

  • Pilot G2 07. Now a gel pen with a thicker stroke. I really like this pen; however, it is probably the messiest one in terms of wet ink, so I should really stop using it.

  • Pilot Hi-Tecpoint V5. An oldie, but goodie. I haven't used one of these in a while but they provided sharp lines with no traces of running ink.

I'll keep updating this page if I find new pens that I like -- I'm well stocked for the time being though!

Upgrading Satellite 6.11 from RHEL7 to RHEL8

Satellite 6.11 is the only version of Satellite capable of running on both RHEL 7 and RHEL 8. Satellite 6.11 was published a few months ago, but I didn't immediately upgrade to RHEL 8; here's a quick recap of what is needed to upgrade a Satellite system in place to the next major version of RHEL.

Preparations

You can review the official upgrade documentation in the Upgrading Satellite or Capsule to Red Hat Enterprise Linux 8 In-Place Using Leapp chapter of the documentation.

Prior to performing this upgrade, you should be on the latest Satellite 6.11 version, including the RHEL OS packages. You can upgrade to the latest version with a regular foreman-maintain upgrade run -y --target-version=6.11.z.
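
For example (the rpm query is just a quick way to confirm the currently installed release):

# rpm -q satellite
# foreman-maintain upgrade run -y --target-version=6.11.z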

In addition to that, you need to enable the extras repository and install the Leapp packages:

# subscription-manager repos --enable rhel-7-server-extras-rpms
# satellite-maintain packages install leapp leapp-repository -y
Running install packages in unlocked session                                          
================================================================================
Confirm installer run is allowed:                                                                                                                                            

WARNING: This script runs satellite-installer after the yum execution          
to ensure the Satellite is in a consistent state.                                                                                                                            
As a result some of your services may be restarted.                                                                                                                          

Do you want to proceed?, [y(yes), q(quit)] y                                                                                                                                 
                                                                      [OK]                                                                                                  
--------------------------------------------------------------------------------                                                                                             
Unlock packages:                                                      [OK]                                                                                                   
--------------------------------------------------------------------------------                                                                                             
Install packages: Loaded plugins: product-id, search-disabled-repos, subscription-manager          
[...]
================================================================================
Install  2 Packages (+16 Dependent packages)

Total download size: 3.8 M
Installed size: 14 M
Is this ok [y/d/N] y

[...]
--------------------------------------------------------------------------------
Check status of version locking of packages: 
  Automatic locking of package versions is enabled in installer.
  Packages are locked.                                                [OK]
--------------------------------------------------------------------------------
# 

You will also need to apply the following workaround for a known caveat in the upgrade process, documented in https://access.redhat.com/solutions/6966647:

# subscription-manager repo-override --repo=satellite-6.11-for-rhel-8-x86_64-rpms --add=module_hotfixes:1 
Repository 'satellite-6.11-for-rhel-8-x86_64-rpms' does not currently exist, but the override has been added.

Running leapp preupgrade

Leapp can now be run to assess the system and prepare the upgrade process:

# time leapp preupgrade
==> Processing phase `configuration_phase`
====> * ipu_workflow_config
        IPU workflow config actor
==> Processing phase `FactsCollection`
====> * tcp_wrappers_config_read
        Parse tcp_wrappers configuration files /etc/hosts.{allow,deny}.
====> * grubdevname
        Get name of block device where GRUB is located
====> * scanmemory
        Scan Memory of the machine.
====> * scan_subscription_manager_info
        Scans the current system for subscription manager information
====> * scan_files_for_target_userspace
        Scan the source system and identify files that will be copied into the target userspace when it is created.
====> * sssd_facts
        Check SSSD configuration for changes in RHEL8 and report them in model.
====> * network_manager_read_config
        Provides data about NetworkManager configuration.
====> * scan_kernel_cmdline
        No documentation has been provided for the scan_kernel_cmdline actor.
====> * storage_scanner
        Provides data about storage settings.
====> * load_device_driver_deprecation_data
        Loads deprecation data for drivers and devices (PCI & CPU)
====> * register_yum_adjustment
        Registers a workaround which will adjust the yum directories during the upgrade.
====> * udevadm_info
        Produces data exported by the "udevadm info" command.
====> * scan_sap_hana
        Gathers information related to SAP HANA instances on the system.
====> * pci_devices_scanner
        Provides data about existing PCI Devices.
====> * authselect_scanner
        Detect what authselect configuration should be suggested to administrator.
====> * persistentnetnames
        Get network interface information for physical ethernet interfaces of the original system.
====> * common_leapp_dracut_modules
        Influences the generation of the initram disk
====> * persistentnetnamesdisable
        Disable systemd-udevd persistent network naming on machine with single eth0 NIC
====> * system_facts
        Provides data about many facts from system.
====> * read_openssh_config
        Collect information about the OpenSSH configuration.
====> * repository_mapping
        Produces message containing repository mapping based on provided file.
====> * xfs_info_scanner
        This actor scans all mounted mountpoints for XFS information
====> * sctp_read_status
        Determines whether or not the SCTP kernel module might be wanted.
====> * source_boot_loader_scanner
        Scans the boot loader configuration on the source system.
====> * scan_custom_repofile
        Scan the custom /etc/leapp/files/leapp_upgrade_repositories.repo repo file.
====> * biosdevname
        Enable biosdevname on the target RHEL system if all interfaces on the source RHEL
====> * rpm_scanner
        Provides data about installed RPM Packages.
Loaded plugins: foreman-protector, product-id, subscription-manager

WARNING: Excluding 13038 packages due to foreman-protector. 
Use foreman-maintain packages install/update <package> 
to safely install packages without restrictions.
Use foreman-maintain upgrade run for full upgrade.

====> * transaction_workarounds
        Provides additional RPM transaction tasks based on bundled RPM packages.
====> * scan_pkg_manager
        Provides data about package manager (yum/dnf)
====> * check_kde_apps
        Actor checks which KDE apps are installed.
====> * root_scanner
        Scan the system root directory and produce a message containing
====> * firewalld_facts_actor
        Provide data about firewalld
====> * scanclienablerepo
        Produce CustomTargetRepository based on the LEAPP_ENABLE_REPOS in config.
====> * pam_modules_scanner
        Scan the pam directory for services and modules used in them
====> * selinuxcontentscanner
        Scan the system for any SELinux customizations
====> * scandasd
        In case of s390x architecture, check whether DASD is used.
====> * scancpu
        Scan CPUs of the machine.
====> * removed_pam_modules_scanner
        Scan PAM configuration for modules that are not available in RHEL-8.
====> * satellite_upgrade_facts
        Report which Satellite packages require updates and how to handle PostgreSQL data
====> * get_enabled_modules
        Provides data about which module streams are enabled on the source system.
====> * repositories_blacklist
        Exclude target repositories provided by Red Hat without support.
====> * detect_kernel_drivers
        Matches all currently loaded kernel drivers against known deprecated and removed drivers.
====> * get_installed_desktops
        Actor checks if kde or gnome desktop environments
====> * checkrhui
        Check if system is using RHUI infrastructure (on public cloud) and send messages to
====> * red_hat_signed_rpm_scanner
        Provide data about installed RPM Packages signed by Red Hat.
====> * quagga_daemons
        Active quagga daemons check.
====> * ipa_scanner
        Scan system for ipa-client and ipa-server status
====> * rpm_transaction_config_tasks_collector
        Provides additional RPM transaction tasks from /etc/leapp/transaction.
====> * used_repository_scanner
        Scan used enabled repositories
====> * cups_scanner
        Gather facts about CUPS features which needs to be migrated
====> * spamassassin_config_read
        Reads spamc configuration (/etc/mail/spamassassin/spamc.conf), the
====> * pes_events_scanner
        Provides data about package events from Package Evolution Service.
====> * vsftpd_config_read
        Reads vsftpd configuration files (/etc/vsftpd/*.conf) and extracts necessary information.
====> * multipath_conf_read
        Read multipath configuration files and extract the necessary informaton
====> * setuptargetrepos
        Produces list of repositories that should be available to be used by Upgrade process.
==> Processing phase `Checks`
====> * check_luks_and_inhibit
        Check if any encrypted partitions is in use. If yes, inhibit the upgrade process.
====> * check_memcached
        Check for incompatible changes in memcached configuration.
====> * check_os_release
        Check if the current RHEL minor version is supported. If not, inhibit the upgrade process.
====> * authselect_check
        Confirm suggested authselect call from AuthselectScanner.
====> * checkacpid
        Check if acpid is installed. If yes, write information about non-compatible changes.
====> * tcp_wrappers_check
        Check the list of packages previously compiled with TCP wrappers support
====> * postgresql_check
        Actor checking for presence of PostgreSQL installation.
====> * check_root_symlinks
        Check if the symlinks /bin and /lib are relative, not absolute.
====> * check_kde_gnome
        Checks whether KDE is installed
====> * check_non_mount_boot_s390
        Inhibits on s390 when /boot is NOT on a separate partition.
====> * check_btrfs
        Check if Btrfs filesystem is in use. If yes, inhibit the upgrade process.
====> * check_se_linux
        Check SELinux status and produce decision messages for further action.
====> * check_rhsmsku
        Ensure the system is subscribed to the subscription manager
====> * check_sendmail
        Check if sendmail is installed, check whether configuration update is needed, inhibit upgrade if TCP wrappers
====> * open_ssh_deprecated_directives_check
        Check for any deprecated directives in the OpenSSH configuration.
====> * check_ipa_server
        Check for ipa-server and inhibit upgrade
====> * check_skipped_repositories
        Produces a report if any repositories enabled on the system are going to be skipped.
====> * check_ntp
        Check if ntp and/or ntpdate configuration needs to be migrated.
====> * check_chrony
        Check for incompatible changes in chrony configuration.
====> * check_firewalld
        Check for certain firewalld configuration that may prevent an upgrade.
====> * check_docker
        Checks if Docker is installed and warns about its deprecation in RHEL8.
====> * open_ssh_algorithms
        OpenSSH configuration does not contain any unsupported cryptographic algorithms.
====> * checkdosfstools
        Check if dosfstools is installed. If yes, write information about non-compatible changes.
====> * check_brltty
        Check if brltty is installed, check whether configuration update is needed.
====> * cups_check
        Reports changes in configuration between CUPS 1.6.3 and 2.2.6
====> * checktargetrepos
        Check whether target yum repositories are specified.
====> * check_sap_hana
        If SAP HANA has been detected, several checks are performed to ensure a successful upgrade.
====> * check_removed_envvars
        Check for usage of removed environment variables and inhibit the upgrade
====> * zipl_check_boot_entries
        Inhibits the upgrade if a problematic Zipl configuration is detected on the system.
====> * checkhybridimage
        Check if the system is using Azure hybrid image.
====> * quagga_report
        Checking for babeld on RHEL-7.
====> * unsupported_upgrade_check
        Checks enviroment variables and produces a warning report if the upgrade is unsupported.
====> * checkfstabxfsoptions
        Check the FSTAB file for the deprecated / removed XFS mount options.
====> * check_boot_avail_space
        Check if at least 100Mib of available space on /boot. If not, inhibit the upgrade process.
====> * python_inform_user
        This actor informs the user of differences in Python version and support in RHEL 8.
====> * check_system_arch
        Check if system is running at a supported architecture. If no, inhibit the upgrade process.
====> * check_etc_releasever
        Check releasever info and provide a guidance based on the facts
====> * removed_pam_modules
        Check for modules that are not available in RHEL 8 anymore
====> * check_cifs
        Check if CIFS filesystem is in use. If yes, inhibit the upgrade process.
====> * open_ssh_protocol
        Protocol configuration option was removed.
====> * check_nfs
        Check if NFS filesystem is in use. If yes, inhibit the upgrade process.
====> * check_postfix
        Check if postfix is installed, check whether configuration update is needed.
====> * multipath_conf_check
        Checks whether the multipath configuration can be updated to RHEL-8 and
====> * check_fips
        Inhibit upgrade if FIPS is detected as enabled.
====> * powertop
        Check if PowerTOP is installed. If yes, write information about non-compatible changes.
====> * check_installed_debug_kernels
        Inhibit IPU (in-place upgrade) when multiple debug kernels are installed.
====> * sctp_checks
        Parses collected SCTP information and take necessary actions.
====> * check_wireshark
        Report a couple of changes in tshark usage
====> * sssd_check
        Check SSSD configuration for changes in RHEL8 and report them.
====> * checkgrep
        Check if Grep is installed. If yes, write information about non-compatible changes.
====> * efi_check_boot
        Adjust EFI boot entry for first reboot
====> * check_bind
        Actor parsing BIND configuration and checking for known issues in it.
====> * vsftpd_config_check
        Checks whether the vsftpd configuration is supported in RHEL-8. Namely checks that
====> * checkmemory
        The actor check the size of RAM against RHEL8 minimal hardware requirements
====> * check_installed_devel_kernels
        Inhibit IPU (in-place upgrade) when multiple devel kernels are installed.
====> * check_detected_devices_and_drivers
        Checks whether or not detected devices and drivers are usable on the target system.
====> * red_hat_signed_rpm_check
        Check if there are packages not signed by Red Hat in use. If yes, warn user about it.
====> * check_ha_cluster
        Check if HA Cluster is in use. If yes, inhibit the upgrade process.
====> * spamassassin_config_check
        Reports changes in spamassassin between RHEL-7 and RHEL-8
====> * multiple_package_versions
        Check for problematic 32bit packages installed together with 64bit ones.
====> * satellite_upgrade_check
        Check state of Satellite system before upgrade
====> * check_rpm_transaction_events
        Filter RPM transaction events based on installed RPM packages
====> * removed_pam_modules_check
        Check if it is all right to disable PAM modules that are not in RHEL-8.
====> * detect_grub_config_error
        Check grub configuration for syntax error in GRUB_CMDLINE_LINUX value.
====> * open_ssh_use_privilege_separation
        UsePrivilegeSeparation configuration option was removed.
====> * checkirssi
        Check if irssi is installed. If yes, write information about non-compatible changes.
====> * openssh_permit_root_login
        OpenSSH no longer allows root logins with password.
====> * yum_config_scanner
        Scans the configuration of the YUM package manager.
====> * check_installed_kernels
        Inhibit IPU (in-place upgrade) when installed kernels conflict with a safe upgrade.
====> * check_grub_core
        Check whether we are on legacy (BIOS) system and instruct Leapp to upgrade GRUB core
====> * check_yum_plugins_enabled
        Checks that the required yum plugins are enabled.
====> * check_skip_phase
        Skip all the subsequent phases until the report phase.
==> Processing phase `Reports`
====> * verify_check_results
        Check all dialogs and notify that user needs to make some choices.
====> * verify_check_results
        Check all generated results messages and notify user about them.

============================================================
                     UPGRADE INHIBITED                      
============================================================

Upgrade has been inhibited due to the following problems:
    1. Inhibitor: Use of NFS detected. Upgrade can't proceed
    2. Inhibitor: Leapp detected loaded kernel drivers which have been removed in RHEL 8. Upgrade cannot proceed.
    3. Inhibitor: Newest installed kernel not in use
    4. Inhibitor: Missing required answers in the answer file
Consult the pre-upgrade report for details and possible remediation.

============================================================
                     UPGRADE INHIBITED                      
============================================================


Debug output written to /var/log/leapp/leapp-preupgrade.log

============================================================
                           REPORT                           
============================================================

A report has been generated at /var/log/leapp/leapp-report.json
A report has been generated at /var/log/leapp/leapp-report.txt

============================================================
                       END OF REPORT                        
============================================================

Answerfile has been generated at /var/log/leapp/answerfile

real    3m26.738s
user    2m48.344s
sys 0m11.675s

The report can be reviewed at:

📋 : /var/log/leapp/leapp-report.txt

Answering update questions and amending configurations

Leapp will probably point out a number of blocker issues (inhibitors) that prevent RHEL from being directly upgraded to the next version. The most typical ones are:

  • Deprecated drivers (eg: floppy)
  • Multiple NICs using the legacy kernel naming scheme (eg: eth0 and eth1).
  • Not running the latest installed kernel
  • NFS mountpoints
  • Changes in configuration.

Deprecated drivers can be removed online with a simple modprobe -r command, eg:

# modprobe -r floppy
# modprobe -r pata_acpi
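
For the NFS inhibitor, the simplest approach is to unmount the shares and comment them out of /etc/fstab for the duration of the upgrade. A rough sketch, with an illustrative mount point (review the edited fstab before continuing):

# umount /mnt/backups
# sed -i.bak '/\snfs/ s/^/#/' /etc/fstab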

If you need to rename your NICs prior to the upgrade, review your Satellite configuration to ensure no service depends on those NIC names. This can be checked by looking at the current configuration:

# satellite-installer --scenario satellite -h | grep eth

This command will show any installer parameter that references an 'eth' interface name.
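
If anything does reference an eth name, you can point it at the new predictable name before upgrading. A hedged example (the parameter and interface names are illustrative; check satellite-installer --full-help for the ones relevant to your setup):

# satellite-installer --scenario satellite \
    --foreman-proxy-dhcp-interface=ens192 \
    --foreman-proxy-dns-interface=ens192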

Finally, you'll need to answer any pending questions in /var/log/leapp/answerfile. They can be answered by editing the file, or programmatically with:

# leapp answer --section remove_pam_pkcs11_module_check.confirm=True
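
Once the inhibitors have been addressed, it is worth re-running the assessment and confirming that the report no longer lists any of them:

# leapp preupgrade
# grep -i inhibitor /var/log/leapp/leapp-report.txt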

Performing the upgrade

You will need about:

  • 15-30 min to download the new RHEL8 RPMs.
  • 15-30 min to reboot into the upgrade environment and perform the RHEL upgrade (done automatically by leapp).
  • 15-30 min post-reboot, once in RHEL8, for the leapp-upgrade process to run satellite-installer once again.

Launching the upgrade

Once the prerequisites have been sorted out, you can launch the actual upgrade with:

# time leapp upgrade --reboot 
==> Processing phase `configuration_phase`
====> * ipu_workflow_config
        IPU workflow config actor
==> Processing phase `FactsCollection`
====> * source_boot_loader_scanner
        Scans the boot loader configuration on the source system.
[...]
====> * target_userspace_creator
        Initializes a directory to be populated as a minimal environment to run binaries from the target system.
Red Hat Enterprise Linux 8 for x86_64 - AppStre  33 MB/s |  47 MB     00:01    
Red Hat Enterprise Linux 8 for x86_64 - BaseOS   34 MB/s |  53 MB     00:01    
[...]
 rpm-plugin-systemd-inhibit    x86_64    4.14.3-24.el8_6            rhel-8-for-x86_64-baseos-rpms         79 k
 kpartx                        x86_64    0.8.4-22.el8_6.2           rhel-8-for-x86_64-baseos-rpms        115 k

Transaction Summary
================================================================================
Install  199 Packages

Total download size: 111 M
Installed size: 707 M
Downloading Packages:
(1/199): pinentry-1.1.0-2.el8.x86_64.rpm        376 kB/s | 100 kB     00:00    
(2/199): libxkbcommon-0.9.1-1.el8.x86_64.rpm    295 kB/s | 116 kB     00:00    
[...]
Complete!
==> Processing phase `TargetTransactionCheck`
====> * tmp_actor_to_satisfy_sanity_checks
        The actor does NOTHING but satisfy static sanity checks
====> * local_repos_inhibit
        Inhibits the upgrade if local repositories were found.
====> * report_set_target_release
        Reports information related to the release set in the subscription-manager after the upgrade.
====> * dnf_transaction_check
        This actor tries to solve the RPM transaction to verify the all package dependencies can be successfully resolved.
Applying transaction workaround - yum config fix

Applying transaction workaround - PostgreSQL symlink fix

Last metadata expiration check: 0:01:05 ago on Sun Oct 30 05:49:50 2022.
Package foreman-installer-katello-1:3.1.2.8-1.el7sat.noarch is already installed.
Package rubygem-foreman_maintain-1:1.0.18-1.el7sat.noarch is already installed.
Package tfm-rubygem-smart_proxy_ansible-3.3.1-4.el7sat.noarch is already installed.
Package satellite-installer-6.11.0.7-1.el7sat.noarch is already installed.
Package katello-4.3.0-3.el7sat.noarch is already installed.
Package foreman-installer-1:3.1.2.8-1.el7sat.noarch is already installed.
[...]
Transaction Summary
====================================================================================================================================================================
Install    796 Packages
Upgrade    520 Packages
Remove     501 Packages
Downgrade   11 Packages

Total size: 1.1 G
Total download size: 1.0 G
DNF will only download packages, install gpg keys, and check the transaction.
Downloading Packages:
[SKIPPED] libcroco-0.6.12-4.el8_2.1.x86_64.rpm: Already downloaded          
[...]
(1323/1324): glib2-devel-2.56.4-158.el8_6.1.x86 1.9 MB/s | 425 kB     00:00    
(1324/1324): linux-firmware-20220210-108.git634  43 MB/s | 196 MB     00:04    
--------------------------------------------------------------------------------
Total                                           7.7 MB/s | 1.0 GB     02:12     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Complete!
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
==> Processing phase `InterimPreparation`
====> * upgrade_initramfs_generator
        Creates the upgrade initramfs
[...]
Transaction test succeeded.
Complete!
====> * add_upgrade_boot_entry
        Add new boot entry for Leapp provided initramfs.
====> * efi_interim_fix
        Adjust EFI boot entry for first reboot
Connection to sat.example.org closed by remote host.

When the system reboots, it will automatically enter the upgrade phase. Progress can be seen on the server console (if it has one), or similarly in the serial console:

[    0.000000] Linux version 4.18.0-372.32.1.el8_6.x86_64 (mockbuild@x86-vm-08.build.eng.bos.redhat.com) (gcc version 8.5.0 20210514 (Red Hat 8.5.0-10) (GCC)) #1 SMP Fri Oct 7 12:35:10 EDT 2022
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-upgrade.x86_64 root=UUID=989ac477-64f2-449f-8415-25b1a5f7d47f ro console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8 enforcing=0 rd.plymouth=0 plymouth.enable=0
[...]
[  OK  ] Reached target System Upgrade.
         Starting System Upgrade...
[    3.091330] upgrade[564]: starting upgrade hook
[    3.100141] upgrade[564]: /bin/upgrade: line 19: /sysroot/var/tmp/system-upgrade.state: Read-only file system
[    3.108025] upgrade[567]:   WARNING: locking_type (4) is deprecated, using --sysinit --readonly.
[    3.139068] upgrade[577]: Spawning container sysroot on /sysroot.
[    3.140429] upgrade[577]: Press ^] three times within 1s to kill container.
[    3.158634] upgrade[578]: Host and machine ids are equal (e6a3f27a614a4bafbce01f024fffa4fa): refusing to link journals
[   19.039129] upgrade[581]: ==> Processing phase `InitRamStart`
[   19.040185] upgrade[581]: ====> * remove_upgrade_boot_entry
[   19.041240] upgrade[581]:         Remove boot entry for Leapp provided initramfs.
[   20.201161] upgrade[581]: ==> Processing phase `LateTests`
[   20.202092] upgrade[581]: ====> * persistentnetnamesinitramfs
[   20.203015] upgrade[581]:         Get network interface information for physical ethernet interfaces with the new kernel in initramfs.
[   20.368520] upgrade[581]: ==> Processing phase `Preparation`
[   20.369458] upgrade[581]: ====> * applytransactionworkarounds
[   20.370454] upgrade[581]:         Executes registered workaround scripts on the system before the upgrade transaction
[   20.687348] upgrade[1127]: Applying transaction workaround - yum config fix
[   20.688443] upgrade[1127]: Applying transaction workaround - PostgreSQL symlink fix
[   20.713928] upgrade[581]: ====> * zipl_convert_to_blscfg
[   20.714805] upgrade[581]:         Convert the zipl boot loader configuration to the the boot loader specification on s390x systems.
[   20.810979] upgrade[581]: ====> * update_etc_sysconfig_kernel
[   20.811979] upgrade[581]:         Update /etc/sysconfig/kernel file.
[   20.928661] upgrade[581]: ====> * removed_pam_modules_apply
[   20.929962] upgrade[581]:         Remove old PAM modules that are no longer available in RHEL-8 from
[   21.008238] upgrade[581]: ====> * remove_boot_files
[   21.009385] upgrade[581]:         Remove Leapp provided initramfs from boot partition.
[   21.079783] upgrade[581]: ====> * bind_update
[   21.080723] upgrade[581]:         Actor parsing facts found in configuration and modifing configuration.
[   21.702416] upgrade[581]: ====> * selinuxprepare
[   21.703252] upgrade[581]:         Remove selinux policy customizations before updating selinux-policy* packages
[   37.974629] upgrade[581]: ==> Processing phase `RPMUpgrade`
[   37.975625] upgrade[581]: ====> * dnf_upgrade_transaction
[   37.976681] upgrade[581]:         Setup and call DNF upgrade command
[   56.672687] upgrade[1508]: Last metadata expiration check: 0:09:28 ago on Sun Oct 30 05:49:50 2022.
[   56.674187] upgrade[1508]: Package foreman-installer-katello-1:3.1.2.8-1.el7sat.noarch is already installed.
[   56.675740] upgrade[1508]: Package rubygem-foreman_maintain-1:1.0.18-1.el7sat.noarch is already installed.
[   56.677163] upgrade[1508]: Package tfm-rubygem-smart_proxy_ansible-3.3.1-4.el7sat.noarch is already installed.
[   56.678745] upgrade[1508]: Package satellite-installer-6.11.0.7-1.el7sat.noarch is already installed.
[   56.680170] upgrade[1508]: Package katello-4.3.0-3.el7sat.noarch is already installed.
[   56.681422] upgrade[1508]: Package foreman-installer-1:3.1.2.8-1.el7sat.noarch is already installed.
[   56.682846] upgrade[1508]: Dependencies resolved.
...
[  629.502384] upgrade[1508]:   Cleanup          : libffi-3.0.13-19.el7.x86_64                      2358/2377
[  629.504355] upgrade[1508]:   Running scriptlet: libffi-3.0.13-19.el7.x86_64                      2358/2377
[  629.506308] upgrade[1508]:   Cleanup          : libattr-2.4.46-13.el7.x86_64                     2359/2377
[  629.508318] upgrade[1508]:   Running scriptlet: libattr-2.4.46-13.el7.x86_64                     2359/2377
[  629.510145] upgrade[1508]:   Cleanup          : glibc-common-2.17-326.el7_9.x86_64               2360/2377
[  629.512042] upgrade[1508]:   Cleanup          : libselinux-2.5-15.el7.x86_64                     2361/2377
...
[  767.787262] upgrade[1508]:   yum-rhn-plugin-2.0.1-10.el7.noarch
[  767.789242] upgrade[1508]: Complete!
[  767.826797] upgrade[581]: ====> * scan_installed_target_kernel_version
[  767.828424] upgrade[581]:         Scan for the version of the newly installed kernel
[  768.206418] upgrade[581]: ====> * update_grub_core
[  768.208251] upgrade[581]:         On legacy (BIOS) systems, GRUB core (located in the gap between the MBR and the
[  769.967896] upgrade[581]: ====> * prepare_python_workround
[  769.969822] upgrade[581]:         Prepare environment to be able to run leapp with Python3 in initrd.
[  770.059375] upgrade[581]: ====> * check_leftover_packages
[  770.061088] upgrade[581]:         Check if there are any RHEL 7 packages present after upgrade.
[  785.130946] upgrade[581]: ====> * report_leftover_packages
[  785.132304] upgrade[581]:         Collect messages about leftover el7 packages and generate report for users.
[  785.479332] upgrade[581]: Debug output written to /var/log/leapp/leapp-upgrade.log
[  785.482295] upgrade[581]: ============================================================
[  785.485510] upgrade[581]:                            REPORT
[  785.488046] upgrade[581]: ============================================================
[  785.491356] upgrade[581]: A report has been generated at /var/log/leapp/leapp-report.json
[  785.494664] upgrade[581]: A report has been generated at /var/log/leapp/leapp-report.txt
[  785.497093] upgrade[581]: ============================================================
[  785.499437] upgrade[581]:                        END OF REPORT
[  785.501382] upgrade[581]: ============================================================
[  785.503777] upgrade[581]: Answerfile has been generated at /var/log/leapp/answerfile
[  785.541018] upgrade[577]: Container sysroot exited successfully.
[  785.569372] upgrade[23665]: Spawning container sysroot on /sysroot.
[  785.571293] upgrade[23665]: Press ^] three times within 1s to kill container.
[  785.589754] upgrade[23666]: Host and machine ids are equal (e6a3f27a614a4bafbce01f024fffa4fa): refusing to link journals
[  800.976473] upgrade[23669]: ==> Processing phase `Applications`
[  800.978106] upgrade[23669]: ====> * persistentnetnamesconfig
[  800.979944] upgrade[23669]:         Generate udev persistent network naming configuration
[  801.100341] upgrade[23669]: ====> * satellite_upgrade_data_migration
[  801.101788] upgrade[23669]:         Reconfigure Satellite services and migrate PostgreSQL data
[  801.200412] upgrade[23669]: ====> * sctp_config_update
[  801.201875] upgrade[23669]:         This actor updates SCTP configuration for RHEL8.
[  801.453848] upgrade[23669]: ====> * migrate_ntp
[  801.455196] upgrade[23669]:         Migrate ntp and/or ntpdate configuration to chrony.
[  801.570183] upgrade[23669]: ====> * cups_migrate
[  801.571466] upgrade[23669]:         cups_migrate actor
[  801.677384] upgrade[23669]: ====> * spamassassin_config_update
[  801.679294] upgrade[23669]:         This actor performs several modifications to spamassassin configuration
[  801.813341] upgrade[23669]: ====> * network_manager_update_config
[  801.814771] upgrade[23669]:         Updates NetworkManager configuration for Red Hat Enterprise Linux 8.
[  801.929467] upgrade[23669]: ====> * authselect_apply
[  801.930932] upgrade[23669]:         Apply changes suggested by AuthselectScanner.
[  802.030986] upgrade[23669]: ====> * firewalld_update_lockdown_whitelist
[  802.032780] upgrade[23669]:         Update the firewalld Lockdown Whitelist.
[  802.168557] upgrade[23669]: ====> * sanebackends_migrate
[  802.170137] upgrade[23669]:         Actor for migrating sane-backends configuration files.
[  802.369631] upgrade[23669]: ====> * migrate_sendmail
[  802.371174] upgrade[23669]:         Migrate sendmail configuration files.
[  802.506474] upgrade[23669]: ====> * quagga_to_frr
[  802.507824] upgrade[23669]:         Edit frr configuration on the new system.
[  802.589137] upgrade[23669]: ====> * set_etc_releasever
[  802.590430] upgrade[23669]:         Release version in /etc/dnf/vars/releasever will be set to the current target release
[  802.686639] upgrade[23669]: ====> * vim_migrate
[  802.688214] upgrade[23669]:         Modify configuration files of Vim 8.0 and later to keep the same behavior
[  803.087132] upgrade[23669]: ====> * vsftpd_config_update
[  803.088369] upgrade[23669]:         Modifies vsftpd configuration files on the target RHEL-8 system so that the effective
[  803.224567] upgrade[23669]: ====> * migrate_brltty
[  803.226273] upgrade[23669]:         Migrate brltty configuration files.
[  803.306197] upgrade[23669]: ====> * selinuxapplycustom
[  803.307482] upgrade[23669]:         Re-apply SELinux customizations from the original RHEL installation
[  813.958361] upgrade[23669]: ====> * network_manager_update_service
[  813.960107] upgrade[23669]:         Updates NetworkManager services status.
[  814.226576] upgrade[23669]: ====> * multipath_conf_update
[  814.227993] upgrade[23669]:         Modifies multipath configuration files on the target RHEL-8 system so that
[  814.343563] upgrade[23669]: ====> * cupsfilters_migrate
[  814.345277] upgrade[23669]:         Actor for migrating package cups-filters.
[  814.739751] upgrade[23669]: ==> Processing phase `ThirdPartyApplications`
[  814.741239] upgrade[23669]: ==> Processing phase `Finalization`
[  814.742808] upgrade[23669]: ====> * schedule_se_linux_relabelling
[  814.744659] upgrade[23669]:         Schedule SELinux relabelling.
[  814.884644] upgrade[23669]: ====> * grubenvtofile
[  814.886180] upgrade[23669]:         Convert "grubenv" symlink to a regular file on Azure hybrid images using BIOS.
[  814.959727] upgrade[23669]: ====> * kernelcmdlineconfig
[  814.961317] upgrade[23669]:         Append extra arguments to the target RHEL kernel command line
[  815.278592] upgrade[23669]: ====> * efi_finalization_fix
[  815.280153] upgrade[23669]:         Adjust EFI boot entry for final reboot
[  815.376693] upgrade[23669]: ====> * force_default_boot_to_target_kernel_version
[  815.378281] upgrade[23669]:         Ensure the default boot entry is set to the new target kernel
[  816.031702] upgrade[23669]: ====> * create_systemd_service
[  816.033670] upgrade[23669]:         Add a systemd service to launch Leapp.
[  816.169112] upgrade[23669]: ====> * target_initramfs_generator
[  816.170874] upgrade[23669]:         Regenerate the target RHEL major version initrd and include files produced by other actors
[  816.263190] upgrade[23669]: ====> * set_permissive_se_linux
[  816.264610] upgrade[23669]:         Set SELinux mode.
[  816.424797] upgrade[25854]: Running in chroot, ignoring request.
[  816.586634] upgrade[23669]: Debug output written to /var/log/leapp/leapp-upgrade.log
[  816.588318] upgrade[23669]: ============================================================
[  816.590201] upgrade[23669]:                            REPORT
[  816.591874] upgrade[23669]: ============================================================
[  816.594323] upgrade[23669]: A report has been generated at /var/log/leapp/leapp-report.json
[  816.596922] upgrade[23669]: A report has been generated at /var/log/leapp/leapp-report.txt
[  816.599435] upgrade[23669]: ============================================================
[  816.601852] upgrade[23669]:                        END OF REPORT
[  816.603864] upgrade[23669]: ============================================================
[  816.605804] upgrade[23669]: Answerfile has been generated at /var/log/leapp/answerfile
[  816.645563] upgrade[23665]: Container sysroot exited successfully.
[  816.650618] upgrade[564]: writing logs to disk and rebooting
[  816.784097] upgrade[25871]: Spawning container sysroot on /sysroot.
[  816.785843] upgrade[25871]: Press ^] three times within 1s to kill container.
[  816.805312] upgrade[25872]: Host and machine ids are equal (e6a3f27a614a4bafbce01f024fffa4fa): refusing to link journals
[  816.825354] upgrade[25871]: Container sysroot exited successfully.
[  817.100972] upgrade[564]: /bin/upgrade: line 19: /sysroot/var/tmp/system-upgrade.state: Read-only file system
[  OK  ] Stopped target Timers.
[  OK  ] Stopped target Remote File Systems (Pre).
...
[  817.883505] reboot: Restarting system
[  817.884817] reboot: machine restart

Now the system will restart and start an SELinux relabeling process:

[   29.079373] selinux-autorelabel[817]: Warning: Skipping the following R/O filesystems:
[   29.081344] selinux-autorelabel[817]: /sys/fs/cgroup
[   29.082834] selinux-autorelabel[817]: Relabeling / /dev /dev/hugepages /dev/mqueue /dev/pts /dev/shm /run /sys /sys/fs/cgroup/blkio /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpuset /sys/fs/cgroup/devices /sys/fs/cgroup/freezer /sys/fs/cgroup/hugetlb /sys/fs/cgroup/memory /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/perf_event /sys/fs/cgroup/pids /sys/fs/cgroup/rdma /sys/fs/cgroup/systemd /sys/fs/pstore /sys/kernel/debug /sys/kernel/tracing
...
[  181.063734] reboot: Restarting system
[  181.064470] reboot: machine restart

At this point, the system boots into RHEL 8 and initiates the last part of the configuration and upgrade process. Progress can be followed by logging in via SSH and following the leapp_resume systemd unit, as well as /var/log/foreman-installer/satellite.log:

# journalctl -u leapp_resume.service  -f
-- Logs begin at Sun 2022-10-30 06:15:40 EDT. --
Oct 30 06:15:46 sat.example.org systemd[1]: Starting Temporary Leapp service which resumes execution after reboot...
Oct 30 06:16:08 sat.example.org leapp3[1348]: ==> Processing phase `FirstBoot`
Oct 30 06:16:08 sat.example.org leapp3[1348]: ====> * network_manager_update_connections
Oct 30 06:16:08 sat.example.org leapp3[1348]:         Update NetworkManager connections.
Oct 30 06:16:08 sat.example.org leapp3[1348]: ====> * enable_rhsm_target_repos
Oct 30 06:16:08 sat.example.org leapp3[1348]:         On the upgraded target system, set release and enable repositories that were used during the upgrade
Oct 30 06:17:23 sat.example.org leapp3[1348]: ====> * satellite_upgrader
Oct 30 06:17:23 sat.example.org leapp3[1348]:         Execute installer in the freshly booted system, to finalize Satellite configuration
Oct 30 06:36:00 sat.example.org leapp3[6544]: Running the installer. This can take a while.
Oct 30 06:36:00 sat.example.org leapp3[1348]: ====> * remove_systemd_resume_service
Oct 30 06:36:00 sat.example.org leapp3[1348]:         Remove systemd service to launch Leapp.
Oct 30 06:36:01 sat.example.org leapp3[1348]: Debug output written to /var/log/leapp/leapp-upgrade.log
Oct 30 06:36:01 sat.example.org leapp3[1348]: ============================================================
Oct 30 06:36:01 sat.example.org leapp3[1348]:                            REPORT
Oct 30 06:36:01 sat.example.org leapp3[1348]: ============================================================
Oct 30 06:36:01 sat.example.org leapp3[1348]: A report has been generated at /var/log/leapp/leapp-report.json
Oct 30 06:36:01 sat.example.org leapp3[1348]: A report has been generated at /var/log/leapp/leapp-report.txt
Oct 30 06:36:01 sat.example.org leapp3[1348]: ============================================================
Oct 30 06:36:01 sat.example.org leapp3[1348]:                        END OF REPORT
Oct 30 06:36:01 sat.example.org leapp3[1348]: ============================================================
Oct 30 06:36:01 sat.example.org leapp3[1348]: Answerfile has been generated at /var/log/leapp/answerfile
Oct 30 06:36:01 sat.example.org systemd[1]: leapp_resume.service: Succeeded.
Oct 30 06:36:01 sat.example.org systemd[1]: Started Temporary Leapp service which resumes execution after reboot.
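
The installer run triggered by the satellite_upgrader actor can be followed in parallel in the log mentioned earlier:

# tail -f /var/log/foreman-installer/satellite.log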

The Satellite should now be up and running on the latest version!

We can verify with foreman-maintain, as usual:

# foreman-maintain service status
...
\ All services are running                                            [OK]      
--------------------------------------------------------------------------------
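
Optionally, hammer provides a per-service health summary as well (assuming hammer is already configured with admin credentials):

# hammer ping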

Post upgrade tasks

Set SELinux in enforcing mode

As you folks are running ALL your systems with SELinux in enforcing mode 😉, you'll need to re-enable it with:

# vim /etc/selinux/config   # (and set it to enforcing)
# dnf reinstall foreman-selinux katello-selinux --disableplugin=foreman-protector -y && reboot
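
If you prefer not to edit the file interactively, a one-liner along these lines works too (a sketch; double-check the resulting file before rebooting):

# sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config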

Remove the package locks in /etc/yum.conf

Edit /etc/yum.conf so no packages are listed in the exclude section. The leapp process leaves the following configuration, which must be removed:

# grep exclude /etc/yum.conf
exclude=python2-leapp,snactor,leapp-upgrade-el7toel8,leapp
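
A quick way to drop that line, assuming there are no other excludes you want to keep:

# sed -i '/^exclude=/d' /etc/yum.conf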

Remove the leapp package

The leapp packages are not automatically removed as part of the upgrade, and leaving them installed can create issues.

You can remove the leapp package with:

# dnf remove leapp leapp-deps-el8 leapp-repository-deps-el8 leapp-upgrade-el7toel8  python2-leapp  --disableplugin=foreman-protector  -y
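
Afterwards, a quick query should return no remaining Leapp packages:

# rpm -qa | grep -i leapp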

Optionally perform an update to the latest Satellite version

As a verification of the previous steps, we can perform an optional update of Satellite to ensure we didn't miss any relevant step. The update should do nothing (packages are already at the latest version), and it quickly confirms that no problems will occur on future updates.

# foreman-maintain upgrade run --target-version=6.11.z -y

Conclusion

All in all, great work by the Leapp team in creating a tool that provides the framework to perform in-place upgrades of RHEL operating systems for years to come!

Upgrading Ansible Tower to Ansible Automation Platform

It's been quite a while since I last touched Ansible Tower, and I'm glad to report that the latest Ansible Automation Platform introduces several enhancements that make it a really attractive product.

A strategy to perform upgrades

The Ansible team at Red Hat has published a number of documents on how to perform the upgrade, as this upgrade changes some of the concepts traditionally used in Tower. Namely, virtual environments are replaced by a container-based technology named Execution Environments.

The guide is available here:

Performing the upgrade

In this case, I'll describe what I did to upgrade an existing clustered Ansible Tower installation from 3.8.x to Ansible Automation Platform 2.2.x, and how to enable the new features provided by the product (Automation Hub) and the SaaS service provided by Red Hat at console.redhat.com.

Review source environment

In this step, you'll take note of how the source environment is configured infrastructure-wise, covering things like:

  • Check how servers are currently configured, including:
    • Filesystems and sizes
    • Networks
    • Operating system tuning
    • Operating system hardening
  • Check your Ansible Tower installation:
    • Exact version
    • Database Schema status
    • Inventory file used for installation
  • Firewall rules to reach required resources, such as:
    • Internet proxies
    • SCMs (Git, etc)
    • Authentication (AD/LDAP)
    • CMDB / dynamic inventory sources
    • Red Hat Satellite
    • Other shared resources

Perform a dry-run migration

It is possible to perform a mock upgrade in a separate system, starting from an Ansible Tower backup of the "old" system, even if the old system is a clustered one.

This can be accomplished by performing a backup on the source Tower system, and a fresh Tower install + restore process in the test system.

root@tower-old ~/ansible-tower-setup-bundle-3.8.6-2 # ./setup.sh -b
(transfer backup to test system) 

Then you can create an inventory on the test system and run the installer as if it were a new system with a blank configuration, and finally restore the backup onto it.

root@tower-test ~/ansible-automation-platform-setup-bundle-1.2.7-2 # ./setup.sh
root@tower-test ~/ansible-automation-platform-setup-bundle-1.2.7-2 # ./setup.sh -r -e 'restore_backup_file=/tmp/tower-backup.tar.gz'

Here you'll want to ensure your database schema migrates successfully before engaging in the next upgrade step (eg, Tower 3.8.x to AAP 1.2.latest, then to AAP 2.1.latest, and finally to AAP 2.2.latest).

In my case, migrating from Tower 3.8.3 to AAP 1.2 (or Tower 3.8.latest) failed silently. The Ansible Tower update process (setup.sh) finished successfully, but the web page itself was showing a maintenance page.

This was diagnosed by checking the database schema for unapplied migrations:

root@tower-test ~/ansible-automation-platform-setup-bundle-1.2.7-2 # awx-manage  showmigrations | grep -v [X]
auth
 [ ] 0012_alter_user_first_name_max_length
conf
contenttypes
main
oauth2_provider
 [ ] 0002_auto_20190406_1805
 [ ] 0003_auto_20201211_1314
sessions
sites
social_django
 [ ] 0009_auto_20191118_0520
 [ ] 0010_uid_db_index
sso
taggit
 [ ] 0004_alter_taggeditem_content_type_alter_taggeditem_tag

Re-running setup.sh fixed the issue, and further updates could be done successfully.

After this snag was fixed, the upgrade to 2.1 and 2.2 went smoothly.

Post upgrade tasks

Once your environment is upgraded to Ansible Automation Platform 2.2.x, you can also review the following settings:

Default Execution environment

Virtual Envs are deprecated in AAP 2.x, so you should move to Execution Environments (EEs) and probably create your own EEs based on the supported EEs shipped with AAP.

root@tower ~ # awx-manage list_custom_venvs 
· Discovered Virtual Environments:
/var/lib/awx/venv/myvenv
  • To export the contents of a (deprecated) virtual environment, run the following command while supplying the path as an argument: awx-manage export_custom_venv /path/to/venv

  • To view the connections a (deprecated) virtual environment had in the database, run the following command while supplying the path as an argument: awx-manage custom_venv_associations /path/to/venv

root@tower ~ # awx-manage custom_venv_associations  /var/lib/awx/venv/myvenv -q
inventory_sources: []
job_templates: []
organizations: []
projects: []
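
As suggested by the tool itself, the contents of the discovered venv can then be exported, so its Python requirements can be baked into a custom EE (for example with ansible-builder):

# awx-manage export_custom_venv /var/lib/awx/venv/myvenv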

Integration with Automation Analytics

Red Hat provides Automation Analytics as part of the Ansible Automation Platform, and it can be enabled from the platform settings.

In case a proxy is required, you can configure it in the AAP Job settings menu, then immediately trigger a sync:

# automation-controller-service restart

# awx-manage gather_analytics --ship                                        
/tmp/48627e92-4cfd-4f8d-86f2-c180adcaef42-2022-06-11-000448+0000-0.tar.gz   
/tmp/48627e92-4cfd-4f8d-86f2-c180adcaef42-2022-06-11-000448+0000-1.tar.gz  

Cleaning up instances

You might end up with leftover instances in your environment.

They can be purged in this way:

# awx-manage list_instances                                                
[controlplane capacity=178 policy=100%]
        localhost capacity=0 node_type=hybrid version=4.2.0
        aap.example.org capacity=178 node_type=hybrid version=4.2.0 heartbeat="2022-06-09 08:12:18"

[default capacity=178 policy=100%]
        localhost capacity=0 node_type=hybrid version=4.2.0
        aap.example.org capacity=178 node_type=hybrid version=4.2.0 heartbeat="2022-06-09 08:12:18"


# awx-manage remove_from_queue --hostname=localhost --queuename=controlplane

# awx-manage remove_from_queue --hostname=localhost --queuename=default

# awx-manage deprovision_instance --hostname localhost
Instance Removed
Successfully deprovisioned localhost
(changed: True)

Enabling the Private Automation Hub

Once your AAP control plane is up and running, you can add a Private Automation Hub by adding the new system to the installer inventory and re-running setup.sh.
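
A minimal sketch of that inventory addition, assuming the group and variable names used by recent setup bundles (hostname and passwords are placeholders; verify against the inventory template shipped in your bundle):

[automationhub]
hub.example.org

[all:vars]
automationhub_admin_password='<password>'
automationhub_pg_host='aap.example.org'
automationhub_pg_password='<password>'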

Interesting links

Red Hat has put together a number of resources on this new Ansible Automation Platform, available here:

... and support notes

  • https://access.redhat.com/articles/6239891 - Ansible Automation Platform 2 Migration Strategy Considerations
  • https://access.redhat.com/articles/6185641 - AAP 2 Migration Considerations Checklist
  • https://access.redhat.com/articles/4098921 - What are the Recommended Upgrade Paths for Ansible Tower/Ansible Automation Platform?
  • https://access.redhat.com/solutions/6740441 - How Do I Perform Security Patching / OS Package Upgrades On Ansible Automation Platform Nodes Without Breaking Any Ansible Automation Platform Functionality?
  • https://access.redhat.com/solutions/6834291 - May I only update one of the components I want on Ansible Tower or Ansible Automation Controller?
  • https://access.redhat.com/solutions/4308791 - How Can I Bypass "noexec" Permission Issue On "/tmp" and "/var/tmp" During Ansible Tower and Ansible Automation Platform installation?
  • https://access.redhat.com/articles/6177982 - What’s new with Ansible Automation Platform 2.0: Developing with ansible-builder and Automation execution environments.
  • https://access.redhat.com/solutions/5115431 - How to configure Ansible Tower to use a proxy for Automation Analytics
  • https://access.redhat.com/solutions/5519041 - Why Is The Manual Data Uploading To Red Hat Automation Analytics Failing With Status 401 In Ansible Tower?
  • https://access.redhat.com/solutions/6446711 - How do I Replace All Execution Environments in Ansible Automation Platform using Private Images from Private Automation Hub?
  • https://access.redhat.com/solutions/6539431 - How Do I Install Ansible Automation Platform 2.0 in a Disconnected Environment from the Internet?
  • https://access.redhat.com/solutions/6635021 - How Do I Install Ansible Automation Platform 2.1 in a Disconnected Environment from the Internet in a Single Node?
  • https://access.redhat.com/solutions/6219021 - In Ansible Automation Controller, How Do I Set a Proxy Just for Ansible Galaxy And Not Globally?
  • https://access.redhat.com/solutions/3127941 - How do I Specify HTTP/HTTPS_PROXY using Ansible Tower?
  • https://access.redhat.com/solutions/4798321 - How to Activate Ansible Tower License with Red Hat Customer Credentials under a Proxy Environment? (edit /etc/supervisord.conf file)

Other interesting resources

Porting guides

  • https://docs.ansible.com/ansible/devel/porting_guides/porting_guides.html
  • https://docs.ansible.com/ansible/devel/porting_guides/porting_guide_2.10.html
  • https://docs.ansible.com/ansible/devel/porting_guides/porting_guide_3.html

Ansible lint

https://ansible-lint.readthedocs.io/en/latest/

AWX cli

https://github.com/ansible/awx/blob/devel/INSTALL.md#installing-the-awx-cli

Lifecycle

  • https://access.redhat.com/support/policy/update_policies/
  • https://access.redhat.com/support/policy/updates/ansible-automation-platform

... happy hacking!

RHV 4.4 SP1 released

Red Hat has released RHV 4.4 SP1, the latest version, based on the upstream oVirt 4.5.x series. Major changes include support for RHEL 8.6 hypervisors and a new workflow to renew hypervisor certificates. Internal certificate validity was changed from 5 years to 13 months during the 4.4 series, and this version rolls back that change to allow a more convenient way of managing the platform.

Prior to performing an upgrade, the following documents are relevant:

Upgrading RHV-M to the latest version

First I enabled the right repositories for RHV 4.4, which now include some Ceph repositories:

subscription-manager repos \
    --disable='*' \
    --enable=rhel-8-for-x86_64-baseos-rpms \
    --enable=rhel-8-for-x86_64-appstream-rpms \
    --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
    --enable=fast-datapath-for-rhel-8-x86_64-rpms \
    --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \
    --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
    --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
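
With the repositories in place, the usual Manager upgrade flow is roughly the following (a sketch; always check the upgrade guide for the exact steps for your source version):

# engine-upgrade-check
# yum update ovirt\*setup\*
# engine-setup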

In my lab environment, I found the following snags while upgrading:

Unsupported package manager

Prior to launching engine-setup to upgrade the Manager, I manually upgraded the yum and rpm packages to avoid an issue with the RHV-M installer (yum upgrade 'yum*' 'rpm*') .

I was originally running RHV-M 4.4.5 based on RHEL 8.3, so quite an old release. After upgrading those packages, the upgrade progressed until I found the following issue:

2022-05-27 09:20:35,463+0200 DEBUG otopi.context context._executeMethod:127 Stage setup METHOD otopi.plugins.ovirt_engine_setup.ovirt_engine_common.distro-rpm.packages.Plugin._setup
2022-05-27 09:20:35,465+0200 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
  File "/usr/share/ovirt-engine/setup/ovirt_engine_setup/util.py", line 305, in getPackageManager
    from otopi import minidnf
  File "/usr/lib/python3.6/site-packages/otopi/minidnf.py", line 25, in <module>
    import dnf.transaction_sr
ModuleNotFoundError: No module named 'dnf.transaction_sr'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/share/ovirt-engine/setup/ovirt_engine_setup/util.py", line 312, in getPackageManager
    from otopi import miniyum
  File "/usr/lib/python3.6/site-packages/otopi/miniyum.py", line 17, in <module>
    import rpmUtils.miscutils
ModuleNotFoundError: No module named 'rpmUtils'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-common/distro-rpm/packages.py", line 293, in _setup
    osetuputil.getPackageManager(self.logger)
  File "/usr/share/ovirt-engine/setup/ovirt_engine_setup/util.py", line 322, in getPackageManager
    'No supported package manager found in your system'
RuntimeError: No supported package manager found in your system
2022-05-27 09:20:35,467+0200 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Environment setup': No supported package manager found in your system

The installation was automatically rolled back, so no harm done; simply updating the yum and rpm packages solved the issue.

Unable to upgrade database schema

Another issue I found was that the upgrade wasn't completing because engine-setup was unable to refresh the database schema.

# view /var/log/ovirt-engine/setup/ovirt-engine-setup-20220527092805-eci7jy.log 
 255732 CONTEXT:  SQL statement "ALTER TABLE vdc_options ALTER COLUMN default_value SET NOT NULL"
 255733 PL/pgSQL function fn_db_change_column_null(character varying,character varying,boolean) line 10 at EXECUTE
 255734 FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql
 255735 
 255736 2022-05-27 09:36:22,230+0200 ERROR otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:530 schema.sh: FATAL: Cannot execute sql command: --file=/usr        /share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql
 255737 2022-05-27 09:36:22,231+0200 DEBUG otopi.context context._executeMethod:145 method exception
 255738 Traceback (most recent call last):
 255739   File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
 255740     method['method']()
 255741   File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", line 532, in _misc
 255742     raise RuntimeError(_('Engine schema refresh failed'))
 255743 RuntimeError: Engine schema refresh failed
 255744 2022-05-27 09:36:22,232+0200 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Misc configuration': Engine schema refresh failed

This is covered in Bugzilla 2077387#c4, and is easily fixed by updating the offending rows in the vdc_options table before re-running engine-setup:

root@rhevm ~ # /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select *  from vdc_options where default_value is null ;"
 option_id |          option_name          |                        option_value                         | version | default_value 
-----------+-------------------------------+-------------------------------------------------------------+---------+---------------
       472 | ConfigDir                     | /etc/ovirt-engine                                           | general | 
       473 | AdminDomain                   | internal                                                    | general | 
       474 | AllowDuplicateMacAddresses    | false                                                       | general | 
       475 | DefaultWorkgroup              | WORKGROUP                                                   | general | 
       476 | KeystoneAuthUrl               |                                                             | general | 
       477 | LicenseCertificateFingerPrint | 5f 38 41 89 b1 33 49 0c 24 13 6b b3 e5 ba 9e c7 fd 83 80 3b | general | 
       478 | MacPoolRanges                 | 00:1A:4A:16:01:51-00:1A:4A:16:01:e6                         | general | 
       479 | MaxMacsCountInPool            | 100000                                                      | general | 
       482 | VdsFenceOptions               |                                                             | general | 
       483 | GlusterTunedProfile           | rhs-high-throughput,rhs-virtualization                      | 3.0     | 
       484 | GlusterTunedProfile           | rhs-high-throughput,rhs-virtualization                      | 3.1     | 
       485 | GlusterTunedProfile           | rhs-high-throughput,rhs-virtualization                      | 3.2     | 
       486 | GlusterTunedProfile           | rhs-high-throughput,rhs-virtualization                      | 3.3     | 
       487 | GlusterTunedProfile           | rhs-high-throughput,rhs-virtualization                      | 3.4     | 
       488 | GlusterTunedProfile           | rhs-high-throughput,rhs-virtualization                      | 3.5     | 
       462 | SupportBridgesReportByVDSM    | true                                                        | 3.1     | 
       716 | GlusterTunedProfile           | virtual-host,rhgs-sequential-io,rhgs-random-io              | 4.2     | 
(17 rows)

root@rhevm ~ # /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "UPDATE vdc_options SET default_value=option_value WHERE default_value IS NULL AND option_value IS NOT NULL;"
UPDATE 17

root@rhevm ~ #  /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "UPDATE vdc_options SET default_value='' WHERE default_value IS NULL AND option_value IS NULL;"
UPDATE 0

Finally, the engine-setup process finishes OK, and after running yum upgrade -y && systemctl restart ovirt-engine the Web UI is available again.

Upgrading RHEL hypervisors

My hypervisors were also running RHEL 8.3, and some minor RPM problems were found. It is expected that RHV-H installations (RHV host) do not hit such issues.

After enabling the repositories:

subscription-manager repos \
    --disable='*' \
    --enable=rhel-8-for-x86_64-baseos-rpms \
    --enable=rhel-8-for-x86_64-appstream-rpms \
    --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \
    --enable=fast-datapath-for-rhel-8-x86_64-rpms \
    --enable=advanced-virt-for-rhel-8-x86_64-rpms \
    --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
    --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms

When using the integrated Cluster Upgrade assistant in the WebUI, package resolution problems were found; they could be trivially fixed by removing the offending package with rpm -e network-scripts-openvswitch2.11.

Certificate validation

KCS 6865861 provides a detailed explanation of how the certificate renewal process currently works, and includes a nifty script (cert_date.sh) to check the overall certificate validity of both RHV-M and the hypervisors.

A sample run shows:

root@rhevm ~ # ./cert_date_0.sh 
This script will check certificate expiration dates

Checking RHV-M Certificates...
=================================================
  /etc/pki/ovirt-engine/ca.pem:                          Feb 27 07:27:16 2028 GMT
  /etc/pki/ovirt-engine/certs/apache.cer:                Jun 11 11:38:13 2023 GMT
  /etc/pki/ovirt-engine/certs/engine.cer:                Jun 11 11:38:12 2023 GMT
  /etc/pki/ovirt-engine/qemu-ca.pem                      Aug  5 19:07:11 2030 GMT
  /etc/pki/ovirt-engine/certs/websocket-proxy.cer        Jun 11 11:38:13 2023 GMT
  /etc/pki/ovirt-engine/certs/jboss.cer                  Jun 11 11:38:12 2023 GMT
  /etc/pki/ovirt-engine/certs/ovirt-provider-ovn         May 18 16:01:35 2023 GMT
  /etc/pki/ovirt-engine/certs/ovn-ndb.cer                May 18 16:01:35 2023 GMT
  /etc/pki/ovirt-engine/certs/ovn-sdb.cer                May 18 16:01:35 2023 GMT
  /etc/pki/ovirt-engine/certs/vmconsole-proxy-helper.cer Feb  3 07:28:10 2023 GMT
  /etc/pki/ovirt-engine/certs/vmconsole-proxy-host.cer   Feb  3 07:28:10 2023 GMT
  /etc/pki/ovirt-engine/certs/vmconsole-proxy-user.cer   Feb  3 07:28:10 2023 GMT


Checking Host Certificates...

Host: rhevh1
=================================================
  /etc/pki/vdsm/certs/vdsmcert.pem:              May 30 02:55:03 2027 GMT
  /etc/pki/vdsm/libvirt-spice/server-cert.pem:   May 30 02:55:03 2027 GMT
  /etc/pki/vdsm/libvirt-vnc/server-cert.pem:     May 30 02:55:03 2027 GMT
  /etc/pki/libvirt/clientcert.pem:               May 30 02:55:03 2027 GMT
  /etc/pki/vdsm/libvirt-migrate/server-cert.pem: May 30 02:55:04 2027 GMT


Host: rhevh2
=================================================
  /etc/pki/vdsm/certs/vdsmcert.pem:              May 30 03:19:59 2027 GMT
  /etc/pki/vdsm/libvirt-spice/server-cert.pem:   May 30 03:19:59 2027 GMT
  /etc/pki/vdsm/libvirt-vnc/server-cert.pem:     May 30 03:19:59 2027 GMT
  /etc/pki/libvirt/clientcert.pem:               May 30 03:19:59 2027 GMT
  /etc/pki/vdsm/libvirt-migrate/server-cert.pem: May 30 03:19:59 2027 GMT
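
If you only need to check a single certificate by hand, openssl can print the expiry date directly. A minimal example, using one of the RHV-M certificates listed above:

# openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/certs/apache.cer
notAfter=Jun 11 11:38:13 2023 GMT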

Wrap-up

All in all, there were some minor snags during the upgrade that should be fixed in newer releases to provide a smoother experience.

Happy hacking!

Satellite 6.10 released

Red Hat Satellite version 6.10 has been released! This is a preparatory release for the upcoming Satellite 7.0: major migrations (most notably Pulp 2 to Pulp 3) take place now to pave the way for the new software version.

The official information is available in:

Preparing an update

The following steps need to be taken before upgrading to Satellite 6.10:

  • Ensure you are on the latest Satellite 6.9.z release (6.9.7). This is important, as the migration relies on having the latest packages to make the pulp2 to pulp3 migration feasible.

  • Ensure you have plenty of space in /var/lib/pulp/published. This is where the metadata of each content view is kept (namely, repository metadata). This information needs to be regenerated by pulp3, so at some point both versions of the information exist at the same time. If you keep lots of content view versions, it is recommended to purge them prior to starting the process in order to save space (and to generally speed up Satellite operations); see the hammer example further below.

  • You can review the pulp migration summary once you are on Satellite 6.9.7 with the command foreman-maintain content migration-stats:

# foreman-maintain content migration-stats
Running Retrieve Pulp 2 to Pulp 3 migration statistics
================================================================================
Retrieve Pulp 2 to Pulp 3 migration statistics: 
API controllers newer than Apipie cache! Run apipie:cache rake task to regenerate cache.
============Migration Summary================
Migrated/Total RPMs: 111437/111456
Migrated/Total errata: 41998/41998                                                                       
Migrated/Total repositories: 115/115               
Estimated migration time based on yum content: fewer than 5 minutes

Note: ensure there is sufficient storage space for /var/lib/pulp/published to double in size before starting the migration process.
Check the size of /var/lib/pulp/published with 'du -sh /var/lib/pulp/published/'

Note: ensure there is sufficient storage space for postgresql.
You will need additional space for your postgresql database.  The partition holding '/var/opt/rh/rh-postgresql12/lib/pgsql/data/'
   will need additional free space equivalent to the size of your Mongo db database (/var/lib/mongodb/).

In case of problems with missing or broken RPMs, they will be detected as well:

============Missing/Corrupted Content Summary================
WARNING: MISSING OR CORRUPTED CONTENT DETECTED
Corrupted or Missing Rpm: 19/111456
Corrupted or missing content has been detected, you can examine the list of content in /tmp/unmigratable_content-20211117-32242-1m0sghx and take action by either:
1. Performing a 'Verify Checksum' sync under Advanced Sync Options, let it complete, and re-running the migration
2. Deleting/disabling the affected repositories and running orphan cleanup (foreman-rake katello:delete_orphaned_content) and re-running the migration
3. Manually correcting files on the filesystem in /var/lib/pulp/content/ and re-running the migration
4. Mark currently corrupted or missing content as skipped (foreman-rake katello:approve_corrupted_migration_content).  This will skip migration of missing or corrupted content.

                                                                      [OK]
--------------------------------------------------------------------------------

In my test lab, I just ignored those errors as they were minor issues with some kernel packages.
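
Regarding the earlier advice about purging old content view versions: hammer can list what you are keeping around so you can decide what to delete. A small sketch; the content view and organization names below are placeholders:

# hammer content-view version list --content-view "RHEL7-Base" --organization "MyOrg"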

It is also good to review the sizes of the current MongoDB and PostgreSQL databases. As MongoDB is finally removed, its data will be migrated to PostgreSQL, so that filesystem should have enough space.

# du -scm /var/lib/mongodb/
7196    /var/lib/mongodb/
7196    total

# du -scm /var/opt/rh/rh-postgresql12/lib/pgsql/data/
10205   /var/opt/rh/rh-postgresql12/lib/pgsql/data/
10205   total

Note that you might also need to remove the following legacy RPMs prior to upgrading to Satellite 6.10. My Satellite was installed in the 6.3 timeframe and for some reason these packages have been lingering around since then. If the packages are present, the installer will issue a message about yum being unable to properly resolve dependencies.

yum erase tfm-rubygem-ethon tfm-rubygem-qpid_messaging tfm-rubygem-typhoeus tfm-rubygem-zest tfm-rubygem-fog-xenserver tfm-rubygem-pulp_docker_client tfm-rubygem-awesome_print tfm-rubygem-trollop
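
To check whether any of these legacy packages are actually present before removing them, a plain rpm query is enough (nothing Satellite-specific here):

# rpm -qa | grep -E '^tfm-rubygem-(ethon|qpid_messaging|typhoeus|zest|fog-xenserver|pulp_docker_client|awesome_print|trollop)'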

Upgrading the Satellite version

The upgrade process itself doesn't change much from earlier releases. It will just take more time to accommodate the data migration.

# time foreman-maintain upgrade run  --target-version=6.10 -y
Checking for new version of satellite-maintain...                                                  
Security: kernel-3.10.0-1160.45.1.el7.x86_64 is an installed security update                       
Security: kernel-3.10.0-1160.42.2.el7.x86_64 is the currently running version                      
Loaded plugins: foreman-protector, product-id, subscription-manager                                
Unable to upload Enabled Repositories Report                                                       
Nothing to update, can't find new version of satellite-maintain.                                   
Running preparation steps required to run the next scenarios                                       
================================================================================                   
Check whether system has any non Red Hat repositories (e.g.: EPEL) enabled:                        
| Checking repositories enabled on the systemUnable to upload Enabled Repositories Report          
| Checking repositories enabled on the system                         [OK]                         
--------------------------------------------------------------------------------                   


Running Checks before upgrading to Satellite 6.10                                                  
================================================================================                   
Warn about Puppet content removal prior to 6.10 upgrade:              [OK]                         
--------------------------------------------------------------------------------                   
Check for newer packages and optionally ask for confirmation if not found.:                        
Confirm that you are running the latest minor release of Satellite 6.9 (assuming yes)              
                                                                      [OK]                         
--------------------------------------------------------------------------------                   
Check for HTTPS proxies from the database:                            [OK]                         
--------------------------------------------------------------------------------                   
Clean old Kernel and initramfs files from tftp-boot:                  [OK]                         
--------------------------------------------------------------------------------                      
Check number of fact names in database:                               [OK]               
--------------------------------------------------------------------------------                         
Check for verifying syntax for ISP DHCP configurations:               [OK]                               
--------------------------------------------------------------------------------                         
Check whether all services are running:                               [OK]                               
--------------------------------------------------------------------------------                         
Check whether all services are running using the ping call:           [OK]                               
--------------------------------------------------------------------------------                         
Check for paused tasks:                                               [OK]                               
--------------------------------------------------------------------------------                         
Check to verify no empty CA cert requests exist:                      [OK]                               
--------------------------------------------------------------------------------                         
Check whether system is self-registered or not:                       [OK]                               
--------------------------------------------------------------------------------                         
Check to make sure root(/) partition has enough space:                [OK]                               
--------------------------------------------------------------------------------                         
Check to make sure /var/lib/candlepin has enough space:               [OK]                               
--------------------------------------------------------------------------------                         
Check to validate candlepin database:                                 [OK]                               
--------------------------------------------------------------------------------                         
Check for running tasks:                                              [OK]                               
--------------------------------------------------------------------------------                         
Check for old tasks in paused/stopped state:                          [OK]                               
--------------------------------------------------------------------------------                         
Check for pending tasks which are safe to delete:                     [OK]                               
--------------------------------------------------------------------------------                         
Check for tasks in planning state:                                    [OK]                 
--------------------------------------------------------------------------------                         
Check to verify if any hotfix installed on system:                                                       
- Checking for presence of hotfix(es). It may take some time to verify.                                  
                                                                      [OK]                               
--------------------------------------------------------------------------------                         
Check whether system has any non Red Hat repositories (e.g.: EPEL) enabled:                              
/ Checking repositories enabled on the systemUnable to upload Enabled Repositories Report                 
/ Checking repositories enabled on the system                         [OK]                               
--------------------------------------------------------------------------------                         
Check if TMOUT environment variable is set:                           [OK]                               
--------------------------------------------------------------------------------                         
Check if any upstream repositories are enabled on system:                                                
\ Checking for presence of upstream repositories                      [OK]                               
--------------------------------------------------------------------------------                         
Check for roles that have filters with multiple resources attached:   [OK]                               
--------------------------------------------------------------------------------                         
Check for duplicate permissions from database:                        [OK]                               
--------------------------------------------------------------------------------                         
Check if system has any non Red Hat RPMs installed (e.g.: Fedora):    [OK]                               
--------------------------------------------------------------------------------                         
Check whether reports have correct associations:                      [OK]                               
--------------------------------------------------------------------------------                         
Check to validate yum configuration before upgrade:                   [OK]                               
--------------------------------------------------------------------------------                         
Check if checkpoint_segments configuration exists on the system:      [OK]                               
--------------------------------------------------------------------------------                         
--------------------------------------------------------------------------------        
Validate availability of repositories:              
/ Validating availability of repositories for 6.10                    [OK]                               
--------------------------------------------------------------------------------                         


The pre-upgrade checks indicate that the system is ready for upgrade.                                    
It's recommended to perform a backup at this stage.                                                      
Confirm to continue with the modification part of the upgrade (assuming yes)                             
Running Procedures before migrating to Satellite 6.10                                                    
================================================================================                         
disable active sync plans:                          
\ Total 0 sync plans are now disabled.                                [OK]                               
--------------------------------------------------------------------------------                         
Add maintenance_mode chain to iptables:                               [OK]                               
--------------------------------------------------------------------------------                         
Stop applicable services:                           

Stopping the following service(s):                  
rh-mongodb34-mongod, rh-redis5-redis, postgresql, qdrouterd, qpidd, squid, pulp_celerybeat, pulp_resource_manager, pulp_streamer, pulp_workers, smart_proxy_dynflow_core, tomcat, dynflow-sidekiq@orchestrator, foreman, httpd, puppetserver, foreman.socket, dynflow-sidekiq@worker, dynflow-sidekiq@worker-hosts-queue, foreman-proxy
\ All services stopped                                                [OK]                               
--------------------------------------------------------------------------------                         


Running preparation steps required to run the next scenarios                                             
================================================================================        
Check if tooling for package locking is installed:                    [OK]                               
--------------------------------------------------------------------------------                         


Running Migration scripts to Satellite 6.10                                                              
================================================================================                         
Enable applicable services:                         

Enabling the following service(s):                  
pulpcore-api, pulpcore-content, pulpcore-resource-manager, pulpcore-worker@1, pulpcore-worker@2, pulpcore-worker@3, pulpcore-worker@4                                                                              
| enabling pulpcore-resource-manager                                                                     
Created symlink from /etc/systemd/system/multi-user.target.wants/pulpcore-api.service to /etc/systemd/system/pulpcore-api.service.                                                                                 

Created symlink from /etc/systemd/system/multi-user.target.wants/pulpcore-content.service to /etc/systemd/system/pulpcore-content.service.                                                                         

Created symlink from /etc/systemd/system/multi-user.target.wants/pulpcore-resource-manager.service to /etc/systemd/system/pulpcore-resource-manager.service.
\ enabling pulpcore-worker@4                                                                             
Created symlink from /etc/systemd/system/multi-user.target.wants/pulpcore-worker@1.service to /etc/systemd/system/pulpcore-worker@.service.                                                                        
Created symlink from /etc/systemd/system/multi-user.target.wants/pulpcore-worker@2.service to /etc/systemd/system/pulpcore-worker@.service.                                                                        
Created symlink from /etc/systemd/system/multi-user.target.wants/pulpcore-worker@3.service to /etc/systemd/system/pulpcore-worker@.service.                                                                        
Created symlink from /etc/systemd/system/multi-user.target.wants/pulpcore-worker@4.service to /etc/systemd/system/pulpcore-worker@.service.                                                                        
| All services enabled                                                [OK]                               
--------------------------------------------------------------------------------                         

Start applicable services:

Starting the following service(s):
rh-mongodb34-mongod, rh-redis5-redis, postgresql, pulpcore-api, pulpcore-content, pulpcore-resource-manager, qdrouterd, qpidd, squid, pulp_celerybeat, pulp_resource_manager, pulp_streamer, pulp_workers, pulpcore
-worker@1.service, pulpcore-worker@2.service, pulpcore-worker@3.service, pulpcore-worker@4.service, smart_proxy_dynflow_core, tomcat, dynflow-sidekiq@orchestrator, foreman, httpd, puppetserver, dynflow-sidekiq@w
orker, dynflow-sidekiq@worker-hosts-queue, foreman-proxy
\ All services started                                                [OK]
--------------------------------------------------------------------------------
Switch support for certain content from Pulp 2 to Pulp 3:
Performing final content migration before switching content           [OK]
Print pulp 2 removal instructions:
======================================================
Migration of content from Pulp 2 to Pulp3 is complete 

After verifying accessibility of content from clients, 
it is strongly recommend to run "foreman-maintain content remove-pulp2"
This will remove Pulp 2, MongoDB, and all pulp2 content in /var/lib/pulp/content/
======================================================                [OK]                                                                                                   
--------------------------------------------------------------------------------                                                                                             


--------------------------------------------------------------------------------                                                                                             
Upgrade finished.                                                                     

The whole upgrade process took about 2.5h for a Satellite system with the RHEL7 and RHEL8 main repos and about 10 content view versions. Note that the migration time is severely affected by the amount of RAM, CPU and storage performance.

Cleaning up

Once Satellite 6.10 has been fully migrated and verified, the old pulp2 content should be removed with the following command:

# time foreman-maintain content remove-pulp2 ; time foreman-maintain upgrade run  --target-version=6.10.z -y  
Running Remove Pulp2 and mongodb packages and data
================================================================================
Remove pulp2: 

WARNING: All pulp2 packages will be removed with the following commands:

# rpm -e pulp-docker-plugins  pulp-ostree-plugins  pulp-puppet-plugins  pulp-puppet-tools  pulp-rpm-plugins  pulp-selinux  pulp-server  python-bson  python-mongoengine  python-nectar  python-pulp-common  python-pulp-docker-common  python-pulp-integrity  python-pulp-oid_validation  python-pulp-ostree-common  python-pulp-puppet-common  python-pulp-repoauth  python-pulp-rpm-common  python-pulp-streamer  python-pymongo  python-pymongo-gridfs  python2-amqp  python2-billiard  python2-celery  python2-django  python2-kombu  python2-solv  python2-vine  pulp-katello  pulp-maintenance  python3-pulp-2to3-migration
# yum remove rh-mongodb34-*
# yum remove squid mod_wsgi

All pulp2 data will be removed.

# rm -rf /var/lib/pulp/published
# rm -rf /var/lib/pulp/content
# rm -rf /var/lib/pulp/importers
# rm -rf /var/lib/pulp/uploads
# rm -rf /var/lib/mongodb/
# rm -rf /var/cache/pulp

Do you want to proceed?, [y(yes), q(quit)] y
- Removing pulp2 packages                        
- Removing mongo packages                                                       
| Removing additional packages                                                  
- Dropping migration tables                                                     
| Dropping migrations                                                           
\ Done deleting pulp2 data directories                                [OK]      
--------------------------------------------------------------------------------


real    2m46.147s
user    1m32.814s
sys     0m17.502s

Happy upgrading!

Upgrading to Fedora 34

Fedora 34 was released a few weeks ago, and I finally took some time to update my work machine to the new release. Here are some tips I found interesting:

How to upgrade via command line

Upgrades can be applied via the CLI with:

sudo dnf upgrade -y && \
sudo dnf system-upgrade download --refresh --releasever=34 --nogpgcheck  --allowerasing -y && \
sudo dnf system-upgrade reboot -y

Note that this will take care of removing unneeded or conflicting RPMs. It can be a little too eager when removing packages, so you can inspect afterwards what was done via dnf history list and dnf history info X.
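
A quick sketch of that inspection and, only if you are certain something was wrongly removed, a revert; the transaction ID 42 is just a placeholder for whatever dnf history list reports for the system-upgrade run:

sudo dnf history list | head -n 5
sudo dnf history info 42
sudo dnf history undo 42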

Fixing the horizontal dock

I'm a fan of the old way the dock was handled (vertically on the left). Moving the mouse to the top left corner to activate the 'Activities' button, then moving the mouse to the bottom to choose the right application I want to launch seems like a lot of mouse travel.

Fortunately there are extensions that fix that behaviour and revert to the old one.

This extension needs to be used in conjunction with Dash-to-dock, which is available from the linked repo, or can be installed from RPM with dnf install gnome-shell-extension-dash-to-dock.

Ta-da!

... and that was it for me. Pretty uneventful upgrade, as everything seems to work OK.

Happy hacking!

Notes on upgrading RHV 4.3 to RHV 4.4

Recently Red Hat published the latest RHV 4.4 version. This introduces some major changes in the underlying operating system (a migration from RHEL7 to RHEL8 in both the hypervisors and the Engine / Self-Hosted Engine), and a bunch of new features.

There are extensive notes on how to perform the upgrade, especially for the Self-hosted Engine-type of deployments.

I upgraded a small 2-node lab environment and, besides the notes already mentioned in the docs above, I also found the following points relevant:

Before you start

  • Understand the NIC naming differences between RHEL7 and RHEL8.
    • Your hypervisor NICs will probably be renamed.
  • Jot down your hypervisors' NIC-to-MAC-address mappings prior to attempting an upgrade (a quick way to capture this is shown after this list).
    • This will ease understanding what NIC is what after installing RHEL8.
  • When using shared storage (FC), consider unmapping it while you reinstall each host, or ensure your kickstart does NOT clear the shared disks.
    • Otherwise this might lead to data loss!!
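
A quick way to capture the NIC-to-MAC mapping mentioned above before reinstalling each hypervisor; this is plain bash over sysfs, so no RHV-specific tooling is assumed, and the resulting file should be copied somewhere off the host:

for nic in /sys/class/net/*; do
    printf '%s %s\n' "$(basename "$nic")" "$(cat "$nic/address")"
done > nic-mac-map-$(hostname).txt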

Prerequisites

  • One spare hypervisor, freshly installed with RHEL8/RHVH8 and NOT added to the manager.
  • One additional LUN / NFS share for the new SHE 4.4 deployment.

    • The installer does not upgrade the old SHE in-place, so a new LUN is required.
    • This eases the rollback, as the original SHE LUN is untouched.
  • Ensure the new hypervisor has all the configuration to access all required networks prior to starting the upgrade.

    • IP configuration for the ovirtmgmt network (obvious).
    • IP configuration for any NFS/iSCSI networks, if required.
    • Shared FC storage, if required.
    • This is critical as the restore process does not prompt to configure/fix network settings when deploying the upgraded manager.
  • Extra steps

    • Collect your RHV-M details:
      • IP address and netmask
      • FQDN
      • Mac-address if using DHCP.
      • Extra software and additional RPMs (eg: AD/IDM/ldap integration, etc)
      • Existing /etc/hosts details in case you use hosts instead of DNS (bad bad bad!!!).
      • Same for hypervisors!
    • Optionally: Mark your networks within the cluster as non-Required. This might be useful until BZ #1867198 is addressed.

Deploying and registering the hypervisors

The RHEL8/RHVH8 hosts can be deployed as usual with Foreman / Red Hat Satellite.

Ensure the hypervisors are registered and have access to the repositories as below:

RHEL8 Host repositories

POOLID=`subscription-manager list --available --matches "Red Hat Virtualization"  --pool-only | head -n 1`
subscription-manager attach --pool=$POOLID
subscription-manager repos \
    --disable='*' \
    --enable=rhel-8-for-x86_64-baseos-rpms \
    --enable=rhel-8-for-x86_64-appstream-rpms \
    --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \
    --enable=fast-datapath-for-rhel-8-x86_64-rpms \
    --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
    --enable=advanced-virt-for-rhel-8-x86_64-rpms

yum module reset -y virt
yum module enable -y virt:8.2
systemctl enable --now firewalld
yum install -y rhevm-appliance ovirt-hosted-engine-setup

RHVH8 Host repositories

POOLID=$(subscription-manager list --available --matches "Red Hat Virtualization"  --pool-only | head -n 1)
subscription-manager attach --pool=$POOLID
subscription-manager repos \
    --disable='*' \
    --enable=rhvh-4-for-rhel-8-x86_64-rpms
systemctl enable --now firewalld
yum install -y rhevm-appliance

Powering off RHV 4.3 manager

  • Set the Manager in global maintenance mode.
  • OPTIONAL: Mark your networks within the cluster as non-Required. This might be useful until BZ #1867198 is addressed.
  • Stop the ovirt-engine service.
  • Back up the RHV 4.3 database and save it in a shared location (a minimal sketch of these steps follows this list).
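
A minimal sketch of these steps, assuming a Self-Hosted Engine setup; the backup file name is just an example. On one of the RHV 4.3 hypervisors:

hosted-engine --set-maintenance --mode=global

Then, on the RHV-M virtual machine:

systemctl stop ovirt-engine
engine-backup --mode=backup --scope=all \
    --file=engine-backup-rhevm-$(date +%Y%m%d_%H%M).tar.bz2 \
    --log=engine-backup.log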

Performing the RHV-M upgrade

  • Copy the database backup into the RHEL8 hypervisor.
  • Launch the restore process with hosted-engine --deploy --restore-from-file=backup.tar.bz2

The process has changed significantly in the last RHV releases, and it now performs the new SHE rollout or restore in two phases:

  • Phase 1: it tries to roll it out in the hypervisor local storage.

    • Gather FQDN, IP details of the Manager.
    • Gather other configuration.
  • Phase 2: migrate to shared storage.

    • If Phase 1 is successful, this takes care of gathering the shared storage details (LUN ID or NFS details).
    • Copy the bootstrap manager into the shared storage.
    • Configure the ovirt-ha-broker and ovirt-ha-agent in the hypervisor to monitor and ensure the SHE is started.

Phase 1 details

[root@rhevh2 rhev]# time  hosted-engine --deploy --restore-from-file=engine-backup-rhevm-20200807_1536.tar.bz2
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
          During customization use CTRL-D to abort.
          Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine.
          The provided engine backup file will be restored there,
          it's strongly recommended to run this tool on an host that wasn't part of the environment going to be restored.
          If a reference to this host is already contained in the backup file, it will be filtered out at restore time.
          The locally running engine will be used to configure a new storage domain and create a VM there.
          At the end the disk of the local VM will be moved to the shared storage.
          The old hosted-engine storage domain will be renamed, after checking that everything is correctly working you can manually remove it.
          Other hosted-engine hosts have to be reinstalled from the engine to update their hosted-engine configuration.
          Are you sure you want to continue? (Yes, No)[Yes]: yes
          It has been detected that this program is executed through an SSH connection without using tmux.
          Continuing with the installation may lead to broken installation if the network connection fails.
          It is highly recommended to abort the installation and run it inside a tmux session using command "tmux".
          Do you want to continue anyway? (Yes, No)[No]: yes
          Configuration files: 
          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20200807155111-5blcva.log
          Version: otopi-1.9.2 (otopi-1.9.2-1.el8ev)
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup (late)
[ INFO  ] Stage: Environment customization

          --== STORAGE CONFIGURATION ==--


          --== HOST NETWORK CONFIGURATION ==--

          Please indicate the gateway IP address [10.48.0.100]: 
[ INFO  ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Detecting interface on existing management bridge]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Get all active network interfaces]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Filter bonds with bad naming]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Generate output list]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Collect interface types]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Check for Team devices]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Get list of Team devices]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Filter unsupported interface types]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Failed if only teaming devices are availible]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Validate selected bridge interface if management bridge does not exist]
[ INFO  ] skipping: [localhost]
         Please indicate a nic to set ovirtmgmt bridge on: (eth4.100, ens15.200) [ens15.200]: eth4.100
          Please specify which way the network connectivity should be checked (ping, dns, tcp, none) [dns]: 

          --== VM CONFIGURATION ==--

          Please enter the name of the datacenter where you want to deploy this hosted-engine host. Please note that if you are restoring a backup that contains info about other hosted-engine hosts,
          this value should exactly match the value used in the environment you are going to restore. [Default]: 
          Please enter the name of the cluster where you want to deploy this hosted-engine host. Please note that if you are restoring a backup that contains info about other hosted-engine hosts,
          this value should exactly match the value used in the environment you are going to restore. [Default]: 
          Renew engine CA on restore if needed? Please notice that if you choose Yes, all hosts will have to be later manually reinstalled from the engine. (Yes, No)[No]: 
          Pause the execution after adding this host to the engine?
          You will be able to iteratively connect to the restored engine in order to manually review and remediate its configuration before proceeding with the deployment:
          please ensure that all the datacenter hosts and storage domain are listed as up or in maintenance mode before proceeding.
          This is normally not required when restoring an up to date and coherent backup. (Yes, No)[No]: 
          If you want to deploy with a custom engine appliance image,
          please specify the path to the OVA archive you would like to use
          (leave it empty to skip, the setup will use rhvm-appliance rpm installing it if missing): 
          Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]: 
          Please specify the memory size of the VM in MB (Defaults to appliance OVF value): [16384]: 
[ INFO  ] Detecting host timezone.
          Please provide the FQDN you would like to use for the engine.
          Note: This will be the FQDN of the engine VM you are now going to launch,
          it should not point to the base host or to any other existing machine.
         Engine VM FQDN:  []: rhevm.example.org
          Please provide the domain name you would like to use for the engine appliance.
          Engine VM domain: [example.org]
          Enter root password that will be used for the engine appliance: 
          Confirm appliance root password: 
          Enter ssh public key for the root user that will be used for the engine appliance (leave it empty to skip): 
          Do you want to enable ssh access for the root user (yes, no, without-password) [yes]: 
          Do you want to apply a default OpenSCAP security profile (Yes, No) [No]: 
          You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:03:ec:35]: 
          How should the engine VM network be configured (DHCP, Static)[DHCP]? static
          Please enter the IP address to be used for the engine VM []: 10.48.0.4
[ INFO  ] The engine VM will be configured to use 10.48.0.4/24
          Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM
          Engine VM DNS (leave it empty to skip) [10.48.0.100]: 
          Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
          Note: ensuring that this host could resolve the engine VM hostname is still up to you
          (Yes, No)[No] 

          --== HOSTED ENGINE CONFIGURATION ==--

          Please provide the name of the SMTP server through which we will send notifications [localhost]: 
          Please provide the TCP port number of the SMTP server [25]: 
          Please provide the email address from which notifications will be sent [root@localhost]: 
          Please provide a comma-separated list of email addresses which will get notifications [root@localhost]: 
          Enter engine admin password: 
          Confirm engine admin password: 
[ INFO  ] Stage: Setup validation
          Please provide the hostname of this host on the management network [rhevh2]: rhevh2.example.org
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration (early)
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
[ INFO  ] Cleaning previous attempts
[ INFO  ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Install oVirt Hosted Engine packages]
[ INFO  ] ok: [localhost]

[... snip ...]

The manager is now being deployed and made available via the hypervisor at a later stage:

[ INFO  ] TASK [ovirt.hosted_engine_setup : Adding new SSO_ALTERNATE_ENGINE_FQDNS line]
[ INFO  ] changed: [localhost -> rhevm.example.org]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Restart ovirt-engine service for changed OVF Update configuration and LibgfApi support]
[ INFO  ] changed: [localhost -> rhevm.example.org]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Mask cloud-init services to speed up future boot]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Wait for ovirt-engine service to start]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Open a port on firewalld]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Expose engine VM webui over a local port via ssh port forwarding]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Evaluate temporary bootstrap engine URL]
[ INFO  ] ok: [localhost]
[ INFO  ] The bootstrap engine is temporary accessible over https://rhevh2.example.org:6900/ovirt-engine/ 
[ INFO  ] TASK [ovirt.hosted_engine_setup : Detect VLAN ID]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Set Engine public key as authorized key without validating the TLS/SSL certificates]
[ INFO  ] changed: [localhost]
[...]
[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Always revoke the SSO token]
[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]

The bootstrap manager is available at https://hypervisor.example.org:6900/ovirt-engine/ and the installer tries to add the current host under the Manager's management. (It waits for the host to be in the 'Up' state; this is why it is important to have all the storage and network prerequisites prepared and available.)

And to finish up:

[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Destroy local storage-pool localvm7imrhb7u]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Undefine local storage-pool localvm7imrhb7u]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20200807193709.conf'
[ INFO  ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Hosted Engine successfully deployed
[ INFO  ] Other hosted-engine hosts have to be reinstalled in order to update their storage configuration. From the engine, host by host, please set maintenance mode and then click on reinstall button ensuring you choose DEPLOY in hosted engine tab.
[ INFO  ] Please note that the engine VM ssh keys have changed. Please remove the engine VM entry in ssh known_hosts on your clients.

real    45m1,768s
user    18m4,639s
sys     1m9,271s

After finishing the upgrade it is also recommended to register the RHV-Manager virtual machine and upgrade to the latest RPMs available in the Red Hat CDN.

Set the Hosted Engine in Global Maintenance mode and:

POOLID=`subscription-manager list --available --matches "Red Hat Virtualization"  --pool-only | head -n 1`
subscription-manager attach --pool=$POOLID

subscription-manager repos \
    --disable='*' \
    --enable=rhel-8-for-x86_64-baseos-rpms \
    --enable=rhel-8-for-x86_64-appstream-rpms \
    --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
    --enable=fast-datapath-for-rhel-8-x86_64-rpms \
    --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
    --enable=jb-eap-7.3-for-rhel-8-x86_64-rpms

yum module -y enable pki-deps
yum module -y enable postgresql:12
yum module reset -y virt
yum module enable -y virt:8.2

Performing the upgrade:

systemctl stop ovirt-engine
yum upgrade -y
engine-setup --accept-defaults 

Rolling back a failed upgrade

A rollback can be performed if the following applies:

  • The deployment or upgrade to the new RHV 4.4 Manager was not successful.
  • No new instances have been created and/or VMs have not been altered (e.g. added disks or NICs). If a rollback occurs, those changes will be inconsistent with the old manager DB status and potentially impossible to reconcile.

If so, the rollback can be performed by:

  • Powering off the new RHEL8/RHVH hypervisor and manager.
  • Powering on the old Manager on the RHEL7 hosts. They should be pointed to the old SHE LUN and storage.

Finalising the upgrade

At this point you should have a working manager under the regular https://FQDN/ovirt-engine/ address. Don't forget to clear cookies and the browser cache, as stale data might lead to strange WebUI issues.

You can then continue reinstalling your hypervisors. I'd suggest:

  • Starting with your SHE hypervisors first. This will ensure you have SHE HA as soon as possible.
  • Then the non-SHE hypervisors.
  • Then finalise with the rest of the tasks, such as upgrading Cluster and DC compatibility levels, rebooting the guest VMs, etc.

Happy hacking!

Provisioning RHV 4.1 hypervisors using Satellite 6.2

Overview

The RHV-H 4.1 installation documentation describes a method to provision RHV-H hypervisors using PXE and/or Satellite. This document covers all the steps required to achieve such a configuration in a repeatable manner.

Prerequisites

  • A RHV-H installation ISO file such as RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso downloaded into Satellite/Capsules.
  • rhviso2media.sh script (see here)

Creating the installation media in Satellite and Capsules

  • Deploy the RHV-H iso in /var/lib/pulp/tmp
  • Run the rhviso2media.sh script to populate the installation media directories. It will make the following files available:
    • kernel and initrd files in tftp://host/ISOFILE/vmlinuz and tftp://host/ISOFILE/initrd
    • DVD installation media in /var/www/html/pub/ISOFILE directory
    • squashfs.img file in /var/www/html/pub/ISOFILE directory

Example:

# ./rhviso2media.sh RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso
Mounting /var/lib/pulp/tmp/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso in /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso ...
mount: /dev/loop0 is write-protected, mounting read-only
Copying ISO contents to /var/www/html/pub/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso ...
Extracting redhat-virtualization-host-image-update ...
./usr/share/redhat-virtualization-host/image
./usr/share/redhat-virtualization-host/image/redhat-virtualization-host-4.0-20170307.1.el7_3.squashfs.img
./usr/share/redhat-virtualization-host/image/redhat-virtualization-host-4.0-20170307.1.el7_3.squashfs.img.meta
1169874 blocks
OK
Copying squash.img to public directory . Available as http://sat62.lab.local/pub/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/squashfs.img ...
Copying /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/images/pxeboot/vmlinuz /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/images/pxeboot/initrd.img to /varlib/tftpboot/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso ...
OK
Unmounting /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso

Create Installation media

RHVH Installation media

http://fake.url

This is not actually used, as the kickstart will use a liveimg URL rather than a media URL; however, Satellite is stubborn and still requires it.

Create partition table

Name: RHVH Kickstart default

<%#
kind: ptable
name: Kickstart default
%>
zerombr
clearpart --all --initlabel
autopart --type=thinp

Create pxelinux for RHV

Based on Kickstart default PXELinux

<%#
kind: PXELinux
name: RHVH Kickstart default PXELinux
%>
#
# This file was deployed via '<%= @template_name %>' template
#
# Supported host/hostgroup parameters:
#
# blacklist = module1, module2
#   Blacklisted kernel modules
#
<%
options = []
if @host.params['blacklist']
    options << "modprobe.blacklist=" + @host.params['blacklist'].gsub(' ', '')
end
options = options.join(' ')
-%>

DEFAULT rhvh

LABEL rhvh
    KERNEL <%= @host.params["rhvh_image"] %>/vmlinuz <%= @kernel %>
    APPEND initrd=<%= @host.params["rhvh_image"] %>/initrd.img inst.stage2=http://<%= @host.hostgroup.subnet.tftp.name %>/pub/<%= @host.params["rhvh_image"] %>/ ks=<%= foreman_url('provision') %> intel_iommu=on ssh_pwauth=1 local_boot_trigger=<%= foreman_url("built") %> <%= options %>
    IPAPPEND 2

Create Kickstart for RHV

File it under Satellite Kickstart Default for RHVH.

Note that the @host.hostgroup.subnet.tftp.name variable is used to point to the capsule associated with this host, rather than the Satellite server itself.

<%#
kind: provision
name: Satellite Kickstart default
%>
<%
rhel_compatible = @host.operatingsystem.family == 'Redhat' && @host.operatingsystem.name != 'Fedora'
os_major = @host.operatingsystem.major.to_i
# safemode renderer does not support unary negation
pm_set = @host.puppetmaster.empty? ? false : true
puppet_enabled = pm_set || @host.params['force-puppet']
salt_enabled = @host.params['salt_master'] ? true : false
section_end = (rhel_compatible && os_major <= 5) ? '' : '%end'
%>
install
# not required # url --url=http://<%= @host.hostgroup.subnet.tftp.name %>/pub/<%= @host.params["rhvh_image"] %>
lang en_US.UTF-8
selinux --enforcing
keyboard es
skipx

<% subnet = @host.subnet -%>
<% if subnet.respond_to?(:dhcp_boot_mode?) -%>
<% dhcp = subnet.dhcp_boot_mode? && !@static -%>
<% else -%>
<% dhcp = !@static -%>
<% end -%>

network --bootproto <%= dhcp ? 'dhcp' : "static --ip=#{@host.ip} --netmask=#{subnet.mask} --gateway=#{subnet.gateway} --nameserver=#{[subnet.dns_primary, subnet.dns_secondary].select(&:present?).join(',')}" %> --hostname <%= @host %><%= os_major >= 6 ? " --device=#{@host.mac}" : '' -%>

rootpw --iscrypted <%= root_pass %>
firewall --<%= os_major >= 6 ? 'service=' : '' %>ssh
authconfig --useshadow --passalgo=sha256 --kickstart
timezone --utc <%= @host.params['time-zone'] || 'UTC' %>

<% if @host.operatingsystem.name == 'Fedora' and os_major <= 16 -%>
# Bootloader exception for Fedora 16:
bootloader --append="nofb quiet splash=quiet <%=ks_console%>" <%= grub_pass %>
part biosboot --fstype=biosboot --size=1
<% else -%>
bootloader --location=mbr --append="nofb quiet splash=quiet" <%= grub_pass %>
<% end -%>

<% if @dynamic -%>
%include /tmp/diskpart.cfg
<% else -%>
<%= @host.diskLayout %>
<% end -%>

text
reboot

liveimg --url=http://<%= foreman_server_fqdn %>/pub/<%= @host.params["rhvh_image"] %>/squashfs.img


%post --nochroot
exec < /dev/tty3 > /dev/tty3
#changing to VT 3 so that we can see whats going on....
/usr/bin/chvt 3
(
cp -va /etc/resolv.conf /mnt/sysimage/etc/resolv.conf
/usr/bin/chvt 1
) 2>&1 | tee /mnt/sysimage/root/install.postnochroot.log
<%= section_end -%>


%post
logger "Starting anaconda <%= @host %> postinstall"

nodectl init

exec < /dev/tty3 > /dev/tty3
#changing to VT 3 so that we can see whats going on....
/usr/bin/chvt 3
(
<% if subnet.respond_to?(:dhcp_boot_mode?) -%>
<%= snippet 'kickstart_networking_setup' %>
<% end -%>

#update local time
echo "updating system time"
/usr/sbin/ntpdate -sub <%= @host.params['ntp-server'] || '0.fedora.pool.ntp.org' %>
/usr/sbin/hwclock --systohc

<%= snippet "subscription_manager_registration" %>

<% if @host.info['parameters']['realm'] && @host.realm && @host.realm.realm_type == 'Red Hat Identity Management' -%>
<%= snippet "idm_register" %>
<% end -%>

# update all the base packages from the updates repository
#yum -t -y -e 0 update

<%= snippet('remote_execution_ssh_keys') %>

sync

<% if @provisioning_type == nil || @provisioning_type == 'host' -%>
# Inform the build system that we are done.
echo "Informing Foreman that we are built"
wget -q -O /dev/null --no-check-certificate <%= foreman_url %>
<% end -%>
) 2>&1 | tee /root/install.post.log
exit 0

<%= section_end -%>

Create new Operating system

  • Name: RHVH
  • Major Version: 7
  • Partition table: RHVH Kickstart default
  • Installation media: RHVH Installation media
  • Templates: "Kickstart default PXELinux for RHVH" and "Satellite kickstart default for RHVH"

Associate the previously-created provisioning templates with this OS.

Create a new hostgroup

Create a new host-group with a Global Parameter called rhvh_image. This parameter is used by the provisioning templates to generate the installation media paths as required.

e.g.:

rhvh_image = RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso
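
If you prefer the CLI, the same parameter can presumably be set with hammer (the host-group name below is a placeholder, and the exact flags may vary slightly between Satellite versions):

# hammer hostgroup set-parameter --hostgroup "RHVH hypervisors" --name rhvh_image --value "RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso"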


Final thoughts

Future versions of Satellite might include better integration of RHV-H provisioning; however, the method described above can be used in the meantime.

Happy hacking!

RHV 4.2: Using rhv-log-collector-analyzer to assess your virtualization environment

RHV 4.2 includes a tool that allows you to quickly analyze your RHV environment. It bases its analysis on either a logcollector report (sosreport and others), or it can connect live to your environment and generate some nice JSON or HTML output.


NOTE: This article is deprecated and is only left for historical reasons.

rhv-log-collector-analyzer now only supports live reporting, so use rhv-log-collector-analyzer --live to grab a snapshot of your deployment and verify its status.


You'll find it already installed in RHV 4.2, and gathering a report is as easy as:

# rhv-log-collector-analyzer --live
Generating reports:
===================
Generated analyzer_report.html

If you need to assess an existing logcollector report on a new system that never had a running RHV-Manager, things get a bit more complicated:

root@localhost # yum install -y ovirt-engine
root@localhost # su - postgres
postgres@localhost ~ # source scl_source enable rh-postgresql95
postgres@localhost ~ # cd /tmp
postgres@localhost /tmp # time rhv-log-collector-analyzer  /tmp/sosreport-LogCollector-20181106134555.tar.xz

Preparing environment:
======================
Temporary working directory is /tmp/tmp.do6qohRDhN
Unpacking postgres data. This can take up to several minutes.
sos-report extracted into: /tmp/tmp.do6qohRDhN/unpacked_sosreport
pgdump extracted into: /tmp/tmp.do6qohRDhN/pg_dump_dir
Welcome to unpackHostsSosReports script!
Extracting sosreport from hypervisor HYPERVISOR1 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR1
Extracting sosreport from hypervisor HYPERVISOR2 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR2
Extracting sosreport from hypervisor HYPERVISOR3 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR3
Extracting sosreport from hypervisor HYPERVISOR4 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR4

Creating a temporary database in /tmp/tmp.do6qohRDhN/postgresDb/pgdata. Log of initdb is in /tmp/tmp.do6qohRDhN/initdb.log

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
Importing the dump into a temporary database. Log of the restore process is in /tmp/tmp.do6qohRDhN/db-restore.log

Generating reports:
===================
Generated analyzer_report.html

Cleaning up:
============
Stopping temporary database
Removing temporary directory /tmp/tmp.do6qohRDhN

You'll find an analyzer_report.html file in your current working directory. It can be reviewed with a text-only browser such as lynx or links, or opened with a proper full-blown browser.
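
For example, straight from the manager host (assuming lynx is installed):

lynx analyzer_report.html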

Bonus track

Sometimes it can also be helpful to check the database dump that is included in the logcollector report. In order to do that, you can do something like:

Review pg_dump_dir in the log above: /tmp/tmp.do6qohRDhN/pg_dump_dir.

Initiate a new postgres instance as follows:

postgres@localhost $ source scl_source enable rh-postgresql95
postgres@localhost $ export PGDATA=/tmp/foo
postgres@localhost $ initdb -D ${PGDATA} 
postgres@localhost $ /opt/rh/rh-postgresql95/root/usr/libexec/postgresql-ctl start -D ${PGDATA} -s -w -t 30 &
postgres@localhost $ psql -c "create database testengine"
postgres@localhost $ psql -c "create schema testengine"
postgres@localhost $ psql testengine < /tmp/tmp.*/pg_dump_dir/restore.sql
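
With the dump restored you can also poke around the engine schema directly; as a sketch (table names depend on the engine version, vm_static being one of the core tables holding VM definitions):

# Sketch -- adjust table names to your engine schema version.
psql -d testengine -c '\dt'
psql -d testengine -c 'select count(*) from vm_static;'

If \dt comes back empty, the dump may have restored into a non-public schema; \dn lists the available schemas.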

Happy hacking!

Satellite 6: Upgrading to Satellite 6.3

A new and shiny Satellite 6.3.0 is available as of now, so I just bit the bullet and upgraded my lab's Satellite.

The first thing to know is that you now have the foreman-maintain tool to do some pre-flight checks, as well as drive the upgrade. You'll need to enable its repository (included as part of the RHEL product):

subscription-manager repos --disable="*" --enable rhel-7-server-rpms --enable rhel-7-server-satellite-6.3-rpms --enable rhel-server-rhscl-7-rpms --enable rhel-7-server-satellite-maintenance-6-rpms

yum install -y rubygem-foreman_maintain

Check your Satellite health with:

# foreman-maintain health check
Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check for verifying syntax for ISP DHCP configurations:               [FAIL]
undefined method `strip' for nil:NilClass
--------------------------------------------------------------------------------
Check for paused tasks:                                               [OK]
--------------------------------------------------------------------------------
Check whether all services are running using hammer ping:             [OK]
--------------------------------------------------------------------------------
Scenario [ForemanMaintain::Scenario::FilteredScenario] failed.

The following steps ended up in failing state:

  [foreman-proxy-verify-dhcp-config-syntax]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="foreman-proxy-verify-dhcp-config-syntax"

And finally, perform the upgrade with:

# foreman-maintain upgrade  run  --target-version 6.3 --whitelist="foreman-proxy-verify-dhcp-config-syntax,disk-performance,repositories-setup"                          

Running Checks before upgrading to Satellite 6.3
================================================================================
Skipping pre_upgrade_checks phase as it was already run before.
To enforce to run the phase, use `upgrade run --phase pre_upgrade_checks`

Scenario [Checks before upgrading to Satellite 6.3] failed.

The following steps ended up in failing state:

 [foreman-proxy-verify-dhcp-config-syntax]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="foreman-proxy-verify-dhcp-config-syntax"



Running Procedures before migrating to Satellite 6.3
================================================================================
Skipping pre_migrations phase as it was already run before.
To enforce to run the phase, use `upgrade run --phase pre_migrations`


Running Migration scripts to Satellite 6.3
================================================================================
Setup repositories: 
- Configuring repositories for 6.3                                    [FAIL]    
Failed executing subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-server-rhscl-7-rpms --enable=rhel-7-server-satellite-maintenance-6-rpms --enable=rhel-7-server-satellite-tools-6.3-rpms --enable=rhel-7-server-satellite-6.3-rpms, exit status 1:
Error: 'rhel-7-server-satellite-6.3-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.
Repository 'rhel-7-server-rpms' is enabled for this system.
Repository 'rhel-7-server-satellite-maintenance-6-rpms' is enabled for this system.
Repository 'rhel-7-server-satellite-tools-6.3-rpms' is enabled for this system.
Repository 'rhel-server-rhscl-7-rpms' is enabled for this system.
-------------------------------------------------------------------------------
Update package(s) : 
  (yum stuff)

                                                                        [OK]
 --------------------------------------------------------------------------------
Procedures::Installer::Upgrade: 
Upgrading, to monitor the progress on all related services, please do:
  foreman-tail | tee upgrade-$(date +%Y-%m-%d-%H%M).log
Upgrade Step: stop_services...
Upgrade Step: start_databases...
Upgrade Step: update_http_conf...
Upgrade Step: migrate_pulp...
Upgrade Step: mark_qpid_cert_for_update...
Marking certificate /root/ssl-build/satmaster.rhci.local/satmaster.rhci.local-qpid-broker for update
Upgrade Step: migrate_candlepin...
Upgrade Step: migrate_foreman...
Upgrade Step: Running installer...
Installing             Done                                               [100%] [............................................]
  The full log is at /var/log/foreman-installer/satellite.log
Upgrade Step: restart_services...
Upgrade Step: db_seed...
Upgrade Step: correct_repositories (this may take a while) ...
Upgrade Step: correct_puppet_environments (this may take a while) ...
Upgrade Step: clean_backend_objects (this may take a while) ...
Upgrade Step: remove_unused_products (this may take a while) ...
Upgrade Step: create_host_subscription_associations (this may take a while) ...
Upgrade Step: reindex_docker_tags (this may take a while) ...
Upgrade Step: republish_file_repos (this may take a while) ...
Upgrade completed!
                                                     [OK]
--------------------------------------------------------------------------------


Running Procedures after migrating to Satellite 6.3
================================================================================
katello-service start: 
- No katello service to start                                         [OK]      
--------------------------------------------------------------------------------
Turn off maintenance mode:                                            [OK]
--------------------------------------------------------------------------------
re-enable sync plans: 
- Total 4 sync plans are now enabled.                                 [OK]      
--------------------------------------------------------------------------------

Running Checks after upgrading to Satellite 6.3
================================================================================
Check for verifying syntax for ISP DHCP configurations:               [FAIL]
undefined method `strip' for nil:NilClass
--------------------------------------------------------------------------------
Check for paused tasks:                                               [OK]
--------------------------------------------------------------------------------
Check whether all services are running using hammer ping:             [OK]
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Upgrade finished.

Happy hacking!