Provisioning RHV 4.1 hypervisors using Satellite 6.2

Overview

The RHV-H 4.1 installation documentation describes a method to provision RHV-H hypervisors using PXE and/or Satellite. This document covers all the steps required to achieve such a configuration in a repeatable manner.

Prerequisites

  • A RHV-H installation ISO file such as RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso downloaded into Satellite/Capsules.
  • rhviso2media.sh script (see here)

Creating the installation media in Satellite and Capsules

  • Deploy the RHV-H ISO into /var/lib/pulp/tmp
  • Run the rhviso2media.sh script to populate the installation media directories. It makes the following files available:
    • kernel and initrd files in tftp://host/ISOFILE/vmlinuz and tftp://host/ISOFILE/initrd.img
    • DVD installation media in /var/www/html/pub/ISOFILE directory
    • squashfs.img file in /var/www/html/pub/ISOFILE directory

Example:

# ./rhviso2media.sh RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso
Mounting /var/lib/pulp/tmp/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso in /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso ...
mount: /dev/loop0 is write-protected, mounting read-only
Copying ISO contents to /var/www/html/pub/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso ...
Extracting redhat-virtualization-host-image-update ...
./usr/share/redhat-virtualization-host/image
./usr/share/redhat-virtualization-host/image/redhat-virtualization-host-4.0-20170307.1.el7_3.squashfs.img
./usr/share/redhat-virtualization-host/image/redhat-virtualization-host-4.0-20170307.1.el7_3.squashfs.img.meta
1169874 blocks
OK
Copying squash.img to public directory . Available as http://sat62.lab.local/pub/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/squashfs.img ...
Copying /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/images/pxeboot/vmlinuz /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/images/pxeboot/initrd.img to /var/lib/tftpboot/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso ...
OK
Unmounting /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso

Create Installation media

RHVH Installation media

http://fake.url

This URL is not actually used, as the kickstart will use a liveimg URL rather than a media URL; however, Satellite is stubborn and still requires it.
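
If you script your Satellite setup, a roughly equivalent hammer call would be (the media name matches the one used below; the URL is the same dummy):

hammer medium create --name "RHVH Installation media" --path "http://fake.url"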

Create partition table

Name: RHVH Kickstart default

<%#
kind: ptable
name: RHVH Kickstart default
%>
zerombr
clearpart --all --initlabel
autopart --type=thinp
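
Note that RHV-H requires thin provisioning for its imgbased layered image management, hence the autopart --type=thinp line.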

Create pxelinux for RHV

Based on Kickstart default PXELinux

<%#
kind: PXELinux
name: RHVH Kickstart default PXELinux
%>
#
# This file was deployed via '<%= @template_name %>' template
#
# Supported host/hostgroup parameters:
#
# blacklist = module1, module2
#   Blacklisted kernel modules
#
<%
options = []
if @host.params['blacklist']
    options << "modprobe.blacklist=" + @host.params['blacklist'].gsub(' ', '')
end
options = options.join(' ')
-%>

DEFAULT rhvh

LABEL rhvh
    KERNEL <%= @host.params["rhvh_image"] %>/vmlinuz <%= @kernel %>
    APPEND initrd=<%= @host.params["rhvh_image"] %>/initrd.img inst.stage2=http://<%= @host.hostgroup.subnet.tftp.name %>/pub/<%= @host.params["rhvh_image"] %>/ ks=<%= foreman_url('provision') %> intel_iommu=on ssh_pwauth=1 local_boot_trigger=<%= foreman_url("built") %> <%= options %>
    IPAPPEND 2
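
For reference, with rhvh_image set to the ISO name used throughout this document and a Capsule named capsule01.lab.local (a hypothetical value), the rendered entry would look roughly like this:

DEFAULT rhvh

LABEL rhvh
    KERNEL RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/vmlinuz
    APPEND initrd=RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/initrd.img inst.stage2=http://capsule01.lab.local/pub/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/ ks=http://capsule01.lab.local/unattended/provision intel_iommu=on ssh_pwauth=1 local_boot_trigger=http://capsule01.lab.local/unattended/built
    IPAPPEND 2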

Create Kickstart for RHV

File it under the name Satellite Kickstart Default for RHVH.

Note that the @host.hostgroup.subnet.tftp.name variable is used to point to the Capsule associated with this host, rather than to the Satellite server itself.

<%#
kind: provision
name: Satellite Kickstart default
%>
<%
rhel_compatible = @host.operatingsystem.family == 'Redhat' && @host.operatingsystem.name != 'Fedora'
os_major = @host.operatingsystem.major.to_i
# safemode renderer does not support unary negation
pm_set = @host.puppetmaster.empty? ? false : true
puppet_enabled = pm_set || @host.params['force-puppet']
salt_enabled = @host.params['salt_master'] ? true : false
section_end = (rhel_compatible && os_major <= 5) ? '' : '%end'
%>
install
# not required # url --url=http://<%= @host.hostgroup.subnet.tftp.name %>/pub/<%= @host.params["rhvh_image"] %>
lang en_US.UTF-8
selinux --enforcing
keyboard es
skipx

<% subnet = @host.subnet -%>
<% if subnet.respond_to?(:dhcp_boot_mode?) -%>
<% dhcp = subnet.dhcp_boot_mode? && !@static -%>
<% else -%>
<% dhcp = !@static -%>
<% end -%>

network --bootproto <%= dhcp ? 'dhcp' : "static --ip=#{@host.ip} --netmask=#{subnet.mask} --gateway=#{subnet.gateway} --nameserver=#{[subnet.dns_primary, subnet.dns_secondary].select(&:present?).join(',')}" %> --hostname <%= @host %><%= os_major >= 6 ? " --device=#{@host.mac}" : '' -%>

rootpw --iscrypted <%= root_pass %>
firewall --<%= os_major >= 6 ? 'service=' : '' %>ssh
authconfig --useshadow --passalgo=sha256 --kickstart
timezone --utc <%= @host.params['time-zone'] || 'UTC' %>

<% if @host.operatingsystem.name == 'Fedora' and os_major <= 16 -%>
# Bootloader exception for Fedora 16:
bootloader --append="nofb quiet splash=quiet <%=ks_console%>" <%= grub_pass %>
part biosboot --fstype=biosboot --size=1
<% else -%>
bootloader --location=mbr --append="nofb quiet splash=quiet" <%= grub_pass %>
<% end -%>

<% if @dynamic -%>
%include /tmp/diskpart.cfg
<% else -%>
<%= @host.diskLayout %>
<% end -%>

text
reboot

liveimg --url=http://<%= foreman_server_fqdn %>/pub/<%= @host.params["rhvh_image"] %>/squashfs.img


%post --nochroot
exec < /dev/tty3 > /dev/tty3
# changing to VT 3 so that we can see what's going on...
/usr/bin/chvt 3
(
cp -va /etc/resolv.conf /mnt/sysimage/etc/resolv.conf
/usr/bin/chvt 1
) 2>&1 | tee /mnt/sysimage/root/install.postnochroot.log
<%= section_end -%>


%post
logger "Starting anaconda <%= @host %> postinstall"

nodectl init

exec < /dev/tty3 > /dev/tty3
# changing to VT 3 so that we can see what's going on...
/usr/bin/chvt 3
(
<% if subnet.respond_to?(:dhcp_boot_mode?) -%>
<%= snippet 'kickstart_networking_setup' %>
<% end -%>

#update local time
echo "updating system time"
/usr/sbin/ntpdate -sub <%= @host.params['ntp-server'] || '0.fedora.pool.ntp.org' %>
/usr/sbin/hwclock --systohc

<%= snippet "subscription_manager_registration" %>

<% if @host.info['parameters']['realm'] && @host.realm && @host.realm.realm_type == 'Red Hat Identity Management' -%>
<%= snippet "idm_register" %>
<% end -%>

# update all the base packages from the updates repository
#yum -t -y -e 0 update

<%= snippet('remote_execution_ssh_keys') %>

sync

<% if @provisioning_type == nil || @provisioning_type == 'host' -%>
# Inform the build system that we are done.
echo "Informing Foreman that we are built"
wget -q -O /dev/null --no-check-certificate <%= foreman_url %>
<% end -%>
) 2>&1 | tee /root/install.post.log
exit 0

<%= section_end -%>

Create new Operating system

  • Name: RHVH
  • Major Version: 7
  • Partition table: RHVH Kickstart default
  • Installation media: RHVH Installation media
  • Templates: "RHVH Kickstart default PXELinux" and "Satellite Kickstart Default for RHVH"

Associate the previously-created provisioning templates with this OS.
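
A hedged CLI equivalent for creating the OS entry (the family value is an assumption; the template and media associations can then be done in the WebUI as described above):

hammer os create --name RHVH --major 7 --family Redhat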

Create a new hostgroup

Create a new host group with a Global Parameter called rhvh_image. This parameter is used by the provisioning templates to generate the required installation media paths.

e.g.:

rhvh_image = RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso
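
If you prefer the CLI, something along these lines should work (the host group name is hypothetical):

hammer hostgroup set-parameter --hostgroup "RHVH hypervisors" --name rhvh_image --value "RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso"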


Final thoughts

Future versions of Satellite might include better integration of RHV-H provisioning; however, the method described above can be used in the meantime.

Happy hacking!

RHV 4.2: Using rhv-log-collector-analyzer to assess your virtualization environment

RHV 4.2 includes a tool that lets you quickly analyze your RHV environment. It bases its analysis on either a logcollector report (sosreport and others), or it can connect live to your environment, and it generates some nice JSON or HTML output.

You'll find it already installed in RHV 4.2, and gathering a report is as easy as:

# rhv-log-collector-analyzer --live
Generating reports:
===================
Generated analyzer_report.html

If you need to assess an existing logcollector report on a new system that never had a running RHV-Manager, things get a bit more complicated:

root@localhost # yum install -y ovirt-engine
root@localhost # su - postgres
postgres@localhost ~ # source scl_source enable rh-postgresql95
postgres@localhost ~ # cd /tmp
postgres@localhost /tmp # time rhv-log-collector-analyzer  /tmp/sosreport-LogCollector-20181106134555.tar.xz

Preparing environment:
======================
Temporary working directory is /tmp/tmp.do6qohRDhN
Unpacking postgres data. This can take up to several minutes.
sos-report extracted into: /tmp/tmp.do6qohRDhN/unpacked_sosreport
pgdump extracted into: /tmp/tmp.do6qohRDhN/pg_dump_dir
Welcome to unpackHostsSosReports script!
Extracting sosreport from hypervisor HYPERVISOR1 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR1
Extracting sosreport from hypervisor HYPERVISOR2 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR2
Extracting sosreport from hypervisor HYPERVISOR3 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR3
Extracting sosreport from hypervisor HYPERVISOR4 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR4

Creating a temporary database in /tmp/tmp.do6qohRDhN/postgresDb/pgdata. Log of initdb is in /tmp/tmp.do6qohRDhN/initdb.log

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
Importing the dump into a temporary database. Log of the restore process is in /tmp/tmp.do6qohRDhN/db-restore.log

Generating reports:
===================
Generated analyzer_report.html

Cleaning up:
============
Stopping temporary database
Removing temporary directory /tmp/tmp.do6qohRDhN

You'll find an analyzer_report.html file in your current working directory. It can be reviewed with a text-only browser such as lynx/links, or opened with a proper full-blown browser.

Bonus track

Sometimes it can also be helpful to check the database dump that is included in the logcollector report. In order to do that, you can do something like:

Review pg_dump_dir in the log above: /tmp/tmp.do6qohRDhN/pg_dump_dir.

Initiate a new postgres instance as follows:

postgres@localhost $ source scl_source enable rh-postgresql95
postgres@localhost $ export PGDATA=/tmp/foo
postgres@localhost $ initdb -D ${PGDATA} 
postgres@localhost $ /opt/rh/rh-postgresql95/root/usr/libexec/postgresql-ctl start -D ${PGDATA} -s -w -t 30 &
postgres@localhost $ psql -c "create database testengine"
postgres@localhost $ psql -c "create schema testengine"
postgres@localhost $ psql testengine < /tmp/tmp.*/pg_dump_dir/restore.sql
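
From there you can poke around with plain SQL; for instance (vm_static is one of the engine tables, and the query is just an example):

postgres@localhost $ psql testengine -c "select vm_name from vm_static limit 10;"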

Happy hacking!

Satellite 6: Upgrading to Satellite 6.3

We have a new and shiny Satellite 6.3.0 available, so I just bit the bullet and upgraded my lab's Satellite.

The first thing to know is that you now have the foreman-maintain tool to run some pre-flight checks, as well as to drive the upgrade. You'll need to enable the repositories (included as part of the RHEL product):

subscription-manager repos --disable="*" --enable rhel-7-server-rpms --enable rhel-7-server-satellite-6.3-rpms --enable rhel-server-rhscl-7-rpms --enable rhel-7-server-satellite-maintenance-6-rpms

yum install -y rubygem-foreman_maintain

Check your Satellite health with:

# foreman-maintain health check
Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check for verifying syntax for ISP DHCP configurations:               [FAIL]
undefined method `strip' for nil:NilClass
--------------------------------------------------------------------------------
Check for paused tasks:                                               [OK]
--------------------------------------------------------------------------------
Check whether all services are running using hammer ping:             [OK]
--------------------------------------------------------------------------------
Scenario [ForemanMaintain::Scenario::FilteredScenario] failed.

The following steps ended up in failing state:

  [foreman-proxy-verify-dhcp-config-syntax]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="foreman-proxy-verify-dhcp-config-syntax"
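
If you only want to re-run the pre-upgrade checks without starting the actual upgrade, foreman-maintain can do that separately (assuming the 6.3 target is available to the tool):

# foreman-maintain upgrade check --target-version 6.3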

And finally, perform the upgrade with:

# foreman-maintain upgrade  run  --target-version 6.3 --whitelist="foreman-proxy-verify-dhcp-config-syntax,disk-performance,repositories-setup"                          

Running Checks before upgrading to Satellite 6.3
================================================================================
Skipping pre_upgrade_checks phase as it was already run before.
To enforce to run the phase, use `upgrade run --phase pre_upgrade_checks`

Scenario [Checks before upgrading to Satellite 6.3] failed.

The following steps ended up in failing state:

 [foreman-proxy-verify-dhcp-config-syntax]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="foreman-proxy-verify-dhcp-config-syntax"



Running Procedures before migrating to Satellite 6.3
================================================================================
Skipping pre_migrations phase as it was already run before.
To enforce to run the phase, use `upgrade run --phase pre_migrations`


Running Migration scripts to Satellite 6.3
================================================================================
Setup repositories: 
- Configuring repositories for 6.3                                    [FAIL]    
Failed executing subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-server-rhscl-7-rpms --enable=rhel-7-server-satellite-maintenance-6-rpms --enable=rhel-7-server-satellite-tools-6.3-rpms --enable=rhel-7-server-satellite-6.3-rpms, exit status 1:
Error: 'rhel-7-server-satellite-6.3-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.
Repository 'rhel-7-server-rpms' is enabled for this system.
Repository 'rhel-7-server-satellite-maintenance-6-rpms' is enabled for this system.
Repository 'rhel-7-server-satellite-tools-6.3-rpms' is enabled for this system.
Repository 'rhel-server-rhscl-7-rpms' is enabled for this system.
-------------------------------------------------------------------------------
Update package(s) : 
  (yum stuff)

                                                                        [OK]
 --------------------------------------------------------------------------------
Procedures::Installer::Upgrade: 
Upgrading, to monitor the progress on all related services, please do:
  foreman-tail | tee upgrade-$(date +%Y-%m-%d-%H%M).log
Upgrade Step: stop_services...
Upgrade Step: start_databases...
Upgrade Step: update_http_conf...
Upgrade Step: migrate_pulp...
Upgrade Step: mark_qpid_cert_for_update...
Marking certificate /root/ssl-build/satmaster.rhci.local/satmaster.rhci.local-qpid-broker for update
Upgrade Step: migrate_candlepin...
Upgrade Step: migrate_foreman...
Upgrade Step: Running installer...
Installing             Done                                               [100%] [............................................]
  The full log is at /var/log/foreman-installer/satellite.log
Upgrade Step: restart_services...
Upgrade Step: db_seed...
Upgrade Step: correct_repositories (this may take a while) ...
Upgrade Step: correct_puppet_environments (this may take a while) ...
Upgrade Step: clean_backend_objects (this may take a while) ...
Upgrade Step: remove_unused_products (this may take a while) ...
Upgrade Step: create_host_subscription_associations (this may take a while) ...
Upgrade Step: reindex_docker_tags (this may take a while) ...
Upgrade Step: republish_file_repos (this may take a while) ...
Upgrade completed!
                                                     [OK]
--------------------------------------------------------------------------------


Running Procedures after migrating to Satellite 6.3
================================================================================
katello-service start: 
- No katello service to start                                         [OK]      
--------------------------------------------------------------------------------
Turn off maintenance mode:                                            [OK]
--------------------------------------------------------------------------------
re-enable sync plans: 
- Total 4 sync plans are now enabled.                                 [OK]      
--------------------------------------------------------------------------------

Running Checks after upgrading to Satellite 6.3
================================================================================
Check for verifying syntax for ISP DHCP configurations:               [FAIL]
undefined method `strip' for nil:NilClass
--------------------------------------------------------------------------------
Check for paused tasks:                                               [OK]
--------------------------------------------------------------------------------
Check whether all services are running using hammer ping:             [OK]
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Upgrade finished.

Happy hacking!

Configuring Cisco UCS fencing in RHV Manager

Just a quick post-it on how to configure fencing of Cisco UCS servers in RHV. Since the fencing agent requires the full path to the Service Profile in UCS, it needs to be carefully specified as:

  • Slot: MyServer
  • Options: suborg=org-CLOUD/org-CLOUD-HYPERVISORS/org-AZ1,ssl_insecure=1

The path to the Service Profile can be checked by inspecting the UCS 'Events' tab, which will show entries similar to:

<eventRecord
affected="org-root/org-CLOUD/org-CLOUD-HYPERVISORS/org-AZ1/ls-MyServer"
cause="transition"
changeSet=""
code="E4196007"
created="2018-02-21T12:44:16Z"
descr="[FSM:BEGIN]: Configuring Service Profile CFS1PHY03(FSM:sam:dme:LsServerConfigure)"
id="2586840"
ind="state-transition"
sessionId="internal"
severity="info"
trig="special"
txId="1209150"
user="ucsCentral"
dn="event-log/2586840"
status="created"
sacl="addchild,del,mod">
</eventRecord>
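
Before wiring this into RHV-M, the parameters can be sanity-checked with the fence agent itself from a shell (the manager address and credentials below are hypothetical):

# fence_cisco_ucs --ip=ucs-manager.example.com --username=admin --password=secret \
    --plug=MyServer --suborg=org-CLOUD/org-CLOUD-HYPERVISORS/org-AZ1 \
    --ssl-insecure --action=status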

Happy hacking!

Satellite 6: Managing physical and virt-who subscriptions

Virtual Datacenter subscriptions allow customers to subscribe all the VMs running on a certain hypervisor. For that to work, Satellite generates a pool of derived subscriptions that are later mapped to the running VMs. This mapping (assignment) only occurs if the virt-who daemon is properly configured to query the virtualization infrastructure to find out on which hypervisor a certain VM is running.

There is a very nice document (Subscription-manager for the former Red Hat Network User: Part 9 - A Case Study with activation keys) that covers all the available options in far more detail; however, I want to comment on a case that is prevalent if you are managing Red Hat products as well as internal products/repositories.

Imagine you:

  • Need to provision physical servers that will consume a physical RHCI subscription.
  • Need to provision virtual machines that will consume a derived/virt-who RHCI subscription.
  • Need all of the systems above to access the same custom product repositories.

For that you will need 3 activation keys:

  • "ak-rhel7-physical-rhci"
    • Autoattach: yes
    • Subscriptions added: the pool(s) of RHCI subscriptions
    • Product content: Enable the repositories from Red Hat products as required.
  • "ak-rhel7-virtual-rhci"
    • Autoattach: no
    • Subscriptions: none
    • Product content: Enable the repositories from Red Hat products as required.
  • "ak-custom-products"
    • Subscriptions: All of your required custom products
    • Product content: Enable the repositories of your required products.

When provisioning and/or registering a system, just use both AKs simultaneously to ensure systems get access to their required software, for example:

subscription-manager register --org="Default_Organization" --activationkey=ak-rhel7-physical-rhci,ak-custom-products
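
And similarly for virtual machines, swapping in the virtual AK:

subscription-manager register --org="Default_Organization" --activationkey=ak-rhel7-virtual-rhci,ak-custom-products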

If you are using Host Groups, edit each host group to use both activation keys so that provisioning will also use them.

Note that the first AK you specify will be used to set the Content View, Environment (Library, ...), Service Level, etc. of your content host. This does not mean multiple content views can be used; the only CV used by a content host is the one specified in your first AK.

This also solves the problem of virt-who not having sent the VM-to-host mapping yet at provisioning time. With no subscriptions associated with the AK for virtual systems, Satellite will generate a temporary one-day/one-week subscription with access to all available Red Hat products within Satellite. This gives virt-who plenty of time to generate and assign the derived VDC subscriptions.

tl;dr Don't try to do everything at once in a single AK; just use multiple AKs and ensure each one does its part well.

Happy hacking!

Satellite 6: Native integration with Red Hat IDM (FreeIPA)

Integrating Satellite and IDM/FreeIPA

These are some quick notes on integrating Satellite 6.2 with the FreeIPA single-sign-on solution.

Documentation reference:

Satellite integration

Register the Satellite server into IDM as per the usual process (ipa-client-install).

Create an IPA service:

(ipa-server)#  ipa service-add HTTP/satellite.fqdn

Optionally, fetch the keytab from the Satellite system; otherwise, satellite-installer will try to fetch it:

(satellite)#  ipa-getkeytab -p HTTP/satellite.fqdn -k /etc/foreman-proxy/freeipa.keytab -e aes256-cts

Note that the HTTP in HTTP/satellite.fqdn MUST be uppercase, otherwise the setup will fail.

Satellite configuration

Run satellite-installer to enable IPA integration:

% satellite-installer --scenario satellite --foreman-ipa-authentication true

Create a Satellite user group; it will be mapped to the external group. Assign roles and admin status as required.

(satellite)# hammer user-group create --name SatelliteGroup
User group [automate] created

(satellite)# hammer user-group external create --name IDM-Group --user-group SatelliteGroup --auth-source-id 3
External user group created

IDM-Group is the group created within FreeIPA. SatelliteGroup is the group created within Satellite (they don't need to match).

Unfortunately, auth-source-id needs to be fetched from the database until BZ 1336236 is addressed.

The ID can be gathered as follows (identified by the External name):

foreman=# select id,type,name from auth_sources;
 id |        type        |      name      
----+--------------------+----------------
  1 | AuthSourceInternal | Internal
  2 | AuthSourceHidden   | Hidden
  3 | AuthSourceExternal | External
  4 | AuthSourceLdap     | ldaptest.rhci.local

Role assignment:

% hammer user-group update --name SatelliteGroup --admin true

% hammer user-group update --name SatelliteGroup --roles Manager
User group [SatelliteGroup] updated

Testing

Log into Satellite WebUI with IDM username.

Note that hammer cannot be used with external authentication. Satellite 6.3 will provide API and hammer integration for external users.

Happy hacking!

Fixing yum incomplete transactions

I just found an issue with a server that wouldn't let me install any additional software with yum because it complained about an incomplete transaction. Plus, checking the list of installed RPMs with rpm -qa showed that many of them were duplicated (old and new versions at the same time).

I used the following commands to get out of that mess, and they worked beautifully:

package-cleanup --dupes | grep -v Loaded | awk 'NR % 2 == 0' | xargs -n1 rpm -e --nodeps --justdb --noscripts
yum update
yum-complete-transaction
yum -y reinstall kernel

The first command is the meaty part: get a list of duplicate RPMs and, with the awk command, keep one out of every two lines. Then pipe that list to rpm so we can mark those RPMs as removed without running their removal scripts or actually removing the files from disk!
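
Since the rpm -e step is destructive, it's worth dry-running the selection first and eyeballing which half of each duplicate pair would be removed (the ordering of package-cleanup's output decides that):

# package-cleanup --dupes | grep -v Loaded | awk 'NR % 2 == 0' > /tmp/dupes-to-remove.txt
# less /tmp/dupes-to-remove.txt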

After that, running yum update will make sure we didn't remove any dependencies, and finally yum-complete-transaction will finish the last incomplete transaction.

The last command is optional, but it is apparently common to end up with a borked kernel and/or initrd, so we reinstall it for good measure.

As a sample of what yum and yum-complete-transaction were complaining about, here's a small excerpt of what was going on at the time of the issue:

--> Restarting Dependency Resolution with new changes. 
--> Running transaction check 
--> Finished Dependency Resolution 
Error: Trying to remove "systemd", which is protected 
Error: Trying to remove "yum", which is protected  
 You could try using --skip-broken to work around the problem   
** Found 43 pre-existing rpmdb problem(s), 'yum check' output follows: 
audit-libs-2.7.6-3.el7.i686 is a duplicate with audit-libs-2.6.5-3.el7.x86_64   
32:bind-license-9.9.4-50.el7.noarch is a duplicate with 32:bind-license-9.9.4-37.el7.noarch
7:device-mapper-1.02.140-8.el7.x86_64 is a duplicate with 7:device-mapper-1.02.135-1.el7.x86_64
7:device-mapper-libs-1.02.140-8.el7.x86_64 is a duplicate with 7:device-mapper-libs-1.02.135-1.el7.x86_64  
dracut-033-502.el7.x86_64 is a duplicate with dracut-033-463.el7.x86_64
e2fsprogs-libs-1.42.9-10.el7.x86_64 is a duplicate with e2fsprogs-libs-1.42.9-9.el7.x86_64  
fipscheck-lib-1.4.1-6.el7.x86_64 is a duplicate with fipscheck-lib-1.4.1-5.el7.x86_64  
glib2-2.50.3-3.el7.x86_64 is a duplicate with glib2-2.46.2-4.el7.x86_64
glibc-2.17-196.el7.i686 is a duplicate with glibc-2.17-157.el7.x86_64  
glibc-common-2.17-196.el7.x86_64 is a duplicate with glibc-common-2.17-157.el7.x86_64  
gobject-introspection-1.50.0-1.el7.x86_64 is a duplicate with gobject-introspection-1.42.0-1.el7.x86_64
kbd-misc-1.15.5-13.el7.noarch is a duplicate with kbd-misc-1.15.5-12.el7.noarch
kernel-tools-libs-3.10.0-693.el7.x86_64 is a duplicate with kernel-tools-libs-3.10.0-514.el7.x86_64
krb5-libs-1.15.1-8.el7.x86_64 is a duplicate with krb5-libs-1.14.1-26.el7.x86_64   
libblkid-2.23.2-43.el7.x86_64 is a duplicate with libblkid-2.23.2-33.el7.x86_64
libcap-2.22-9.el7.i686 is a duplicate with libcap-2.22-8.el7.x86_64
libcom_err-1.42.9-10.el7.x86_64 is a duplicate with libcom_err-1.42.9-9.el7.x86_64  
libcurl-7.29.0-42.el7.x86_64 is a duplicate with libcurl-7.29.0-35.el7.x86_64  
libdb-5.3.21-20.el7.i686 is a duplicate with libdb-5.3.21-19.el7.x86_64
libgcc-4.8.5-16.el7.i686 is a duplicate with libgcc-4.8.5-11.el7.x86_64   

Happy hacking!

Configuring proxy settings in lftp

I sometimes need to upload data to Red Hat's dropbox service, and most of the time I need to go through a proxy of some sort. Here's a quick note on how to configure lftp to use such a proxy.

# lftp
lftp :~> set ftp:proxy http://USER:password@yourproxy:8080
lftp :~> open 209.132.183.100
lftp 209.132.183.100:~> user anonymous
Password: 
lftp anonymous@209.132.183.100:~> cd /incoming
cd ok, cwd=/incoming
lftp anonymous@209.132.183.100:/incoming> put yourfile.tar.gz -o YOURCASENUMBER-yourfile.tar.gz

As you can see:

  • I used the set ftp:proxy command to configure the proxy.
  • Then I used 209.132.183.100 rather than dropbox.redhat.com. Some proxies do not work with the DNS hostname, for some reason.
  • After that, I blindly changed into /incoming; it's the only directory with write permission allowed.
  • Finally I used put -o to specify the destination filename.
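
To avoid typing the proxy setting on every session, it can also go into lftp's startup file; a minimal sketch of ~/.lftprc:

set ftp:proxy http://USER:password@yourproxy:8080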

Additional notes on configuring lftp to upload information to Red Hat can be found in KCS 2112.

Happy hacking!

Satellite 6 Maintenance tools

Almost missed this release. A few days ago, the Satellite 6 Maintenance tools were released. These are tools that will ease the migration to Satellite 6.3 once it is released, and they also deliver one of my favourite BZs for Satellite: BZ 1480346, a tool to change the Satellite hostname.

They can be installed with:

subscription-manager repos --enable rhel-7-server-satellite-maintenance-6-rpms
yum install rubygem-foreman_maintain satellite-clone

Note that the repository above is part of the RHEL 7 product, not the Satellite product.
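
Once installed, a quick way to see what the tool covers (assuming the health list subcommand is available in this early release):

# foreman-maintain health list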

Also bear in mind this is the very first iteration of these tools; I'd expect them to be further enhanced before the Satellite 6.3 GA release. At the moment they are in a BETA state :-)

You can read the official notes here:

Happy hacking!

Gerrit for dummies and Software Factory

Last Thursday I attended a great presentation by Javier Peña, an introduction to Gerrit. In no particular order, these are the things that caught my eye:

  • Gerrit is a code review system. It sits between your source code repository (Git) and your developers, so any proposed change can be better discussed and reviewed. In parallel, Gerrit can be instructed to launch integration tests to ensure the proposed change doesn't break anything. If the CI integration tests are successful, the patch can be merged into the code. Otherwise, further changes can be made by the submitter with git commit --amend until the patch is successfully merged or discarded (see the sketch after this list).
  • It is better than Github's review workflow because:
    • It's not proprietary ;-)
    • Can work in a disconnected fashion with Gertty
    • Reviewing dozens of commits for a single PR can get confusing fast, especially with large patches.
    • As mentioned earlier, Gerrit can be easily integrated with several CI systems.
  • Important open source projects such as OpenStack and oVirt are using it, although it started at Google.
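
The day-to-day loop looks roughly like this (assuming the remote is named origin, the target branch is master, and Gerrit's commit-msg hook is installed so the Change-Id is preserved):

# hack, commit, and send the change to Gerrit for review
git commit -m "Fix frobnicator timeout"
git push origin HEAD:refs/for/master

# address review comments on the same change
git commit --amend
git push origin HEAD:refs/for/master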

There is a project that integrates Git, Gerrit, Zuul, Nodepool and a bunch of other tools to make development easier. It's called Software Factory, and you can find additional info in their documentation: https://softwarefactory-project.io/docs/.


Happy hacking!