Satellite 6.2: Adding RHV4 compute resources

The Satellite server allows you to manage compute resources such as VMware and Red Hat Virtualization (RHEV/RHV). Recent RHV versions come with a couple of caveats on how they should be added into Satellite.

  • The RHV certificate file has changed location:

http://rhv4-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA

  • The API endpoint should be specified with a 'v3' suffix, as follows:

https://rhv4-fqdn/ovirt-engine/api/v3
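
Putting the two together, the compute resource can be created from the CLI. A minimal sketch using hammer (names, credentials and the certificate path are placeholders):

# Fetch the RHV CA certificate from its new location:
curl -o /etc/foreman/rhv4-ca.pem 'http://rhv4-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'

# Create the compute resource against the v3 API endpoint:
hammer compute-resource create --name "RHV4" --provider "Ovirt" \
    --url "https://rhv4-fqdn/ovirt-engine/api/v3" \
    --user "admin@internal" --password "changeme"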

Happy hacking!

Injecting proxy configuration into remote-viewer (RHEV/oVirt console)

One of the things that bothers me is that I have not been able to find an easy way to change proxy settings when using the remote-viewer tool to connect to RHEV/oVirt virtual machines.

I finally managed to put some thought into how to handle this situation, and the easiest way I found is to put a wrapper around remote-viewer so that whenever it is invoked via the browser, it can do all the required mangling of the *.vv file we just downloaded.

The virt-viewer *.vv files are just INI files, so updating them is usually quite easy with tools like crudini or Ansible's ini_file module.
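
For reference, a trimmed *.vv file looks roughly like this (field values are illustrative; the proxy line is what the wrapper below injects):

[virt-viewer]
type=spice
host=rhev-host.example.org
port=5900
proxy=http://localhost:3128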

All in all, my wrapper looks like this:

#!/bin/bash

# Wrapper around remote-viewer to inject proxy parameters.
# Needs ansible, but can be trivially amended to use crudini.

export PATH=$PATH:/usr/bin:/usr/sbin

path=$1
chmod 600 "$path"

logger "$0" "$path"


# Add your magic about detecting whether we need to use a proxy in the 'if' below ;-)
if [ "$(netstat -putan 2> /dev/null | grep -P ':3128.*LISTEN' -c)" -ne 0 ]; then
    logger "$0" enabling proxy
    ansible -m ini_file localhost -a "state=present section=virt-viewer option=proxy value=\"http://localhost:3128\" path=\"$path\""
    ansible -m ini_file localhost -a "state=present section=ovirt       option=proxy value=\"http://localhost:3128\" path=\"$path\""
    # ini_file writes 'option = value'; remote-viewer expects 'option=value'
    sed -i 's# = #=#g' "$path"
fi

#logger < $path
/usr/bin/remote-viewer "$path"

exit $?
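
As the header comment says, the Ansible calls can be trivially swapped for crudini if you prefer a lighter dependency (a sketch, assuming crudini is installed; the sed cleanup still applies):

crudini --set "$path" virt-viewer proxy http://localhost:3128
crudini --set "$path" ovirt proxy http://localhost:3128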

And then you just need to instruct your browser to launch remote-viewer.sh rather than remote-viewer to have your proxy settings automatically added.

Happy hacking!

Satellite 6: Force repository download

As part of the Satellite 6.2.9 release, it is now possible to force a re-download of a given repository.

This is a big help to fix the following scenarios:

  • Missing or broken on-disk RPM files due to human error, bit rot, or other causes.
  • Restoring a Satellite without the Pulp data.

The resynchronization can be launched with:

# hammer repository list --organization "Example"
# hammer repository synchronize --validate-contents=true --id=42

Alternatively, a full RPM spool check can be triggered with:

# foreman-rake katello:validate_yum_content --trace

There is further information in the corresponding KCS note.

Happy hacking!

Taming PackageKit's disk usage

PackageKit is a D-Bus abstraction layer that allows the session user to manage packages in a secure way using a cross-distro, cross-architecture API.

To me it is more of a space hog that caches all available RPM updates for my installed software, with no reasonable way of getting them cleaned or expunged.

There are a few bugs still open for F24 and F25 about its default behaviour, and while no consensus has been reached, I was still in need of a way to trim the usage of my /var/cache/PackageKit directory.
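
A quick sanity check of how much space the cache is actually using, before and after any cleanup:

# du -sh /var/cache/PackageKit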

The long-term solution is apparently to just disable the automatic update downloads via GNOME's gsettings:

$ gsettings get org.gnome.software download-updates
true

$ gsettings set org.gnome.software download-updates false

$ gsettings get org.gnome.software download-updates
false

This will only affect future downloads; it does not take care of actually cleaning the currently cached packages. The pkcon tool should take care of this:

# pkcon refresh force -c -1
Refreshing cache              [=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]
(the "Loading cache" / "Downloading repository information" pair repeats for each enabled repository)
Loading cache                 [=========================]
Finished                      [=========================]         

... however, it seems it only partially cleans up the old information. In the end I resorted to the drastic method and just ran sudo rm -rf /var/cache/PackageKit/* .

Happy hacking!

Identifying VirtIO disks in RHEV

When you have a lot of disks attached to a single VM, it can be cumbersome to work out which RHEV disk corresponds to which device inside the VM. For example, you might have several 100 GB disks named vde, vdf, vdg and no obvious way to tell which one is which.

Say you want to detach your vdd disk and have it removed from your RHEV environment.

You can check their VirtIO identifier with:

# find /dev/disk/by-id "(" -name "virtio*" -and -not -name "*part*" ")"  -exec ls -l "{}" +
lrwxrwxrwx. 1 root root 9 Mar  8 09:18 /dev/disk/by-id/virtio-38ebb33e-db1c-4048-b -> ../../vdb
lrwxrwxrwx. 1 root root 9 Mar  8 09:18 /dev/disk/by-id/virtio-4d18901c-1dea-414d-b -> ../../vda
lrwxrwxrwx. 1 root root 9 Mar  8 09:18 /dev/disk/by-id/virtio-69c3688d-f8b3-464c-8 -> ../../vdc
lrwxrwxrwx. 1 root root 9 Mar  8 09:18 /dev/disk/by-id/virtio-b1666304-fba1-44a2-a -> ../../vdd

We can confirm the VirtIO identifier for vdd is b1666304-fba1-44a2 (the by-id names only carry a truncated form of the disk UUID).

Now we need to map this identifier to the human-readable names available in the RHEV Web UI. This can be done by leveraging the REST API as follows:

# curl --silent -k -u "admin@internal:password" https://server:port/api/vms/UUID/disks | grep -P 'disk href|alias'

    <disk href="/api/vms/VM-UUID/disks/38ebb33e-db1c-4048-a345-e381abc7f8cc" id="c8aad46c-9373-4670-a345-e381abc7f8cc">
        <alias>MyVMName_Disk3</alias>
    <disk href="/api/vms/VM-UUID/disks/4d18901c-1dea-414d-9cca-811dac616443" id="b7840cff-8f55-4dbb-9cca-811dac616443">
        <alias>MyVMName_Disk2</alias>
    <disk href="/api/vms/VM-UUID/disks/69c3688d-f8b3-464c-87e1-000e19d85c79" id="e9938a29-b792-4f6e-87e1-000e19d85c79">
        <alias>MyVMName_Disk1</alias>
    <disk href="/api/vms/VM-UUID/disks/b1666304-fba1-44a2-9cb1-fb4cfcbc19dd" id="9c2df047-343e-484a-9cb1-fb4cfcbc19dd">
        <alias>MyVMName_Disk4</alias>
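
To single out just the disk we care about, we can filter on the identifier we obtained from /dev/disk/by-id (same hypothetical values as above):

# curl --silent -k -u "admin@internal:password" https://server:port/api/vms/UUID/disks | grep -A1 'disks/b1666304' | grep alias
        <alias>MyVMName_Disk4</alias>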

The UUID for the VM can be checked in the RHEV Web UI by opening the VMs tab and clicking on the VM name.

Happy hacking! :-)

Listing Hypervisors and VMs in Red Hat Satellite

One of my current pet peeves with Satellite is getting a list of hypervisors and their associated VMs. This is especially helpful for troubleshooting virt-who or performing general Satellite clean-up tasks.

With a snippet like the one below we can get a list of all the hypervisors configured in Satellite:
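
A minimal sketch using hammer (it leans on virt-who's host naming convention, so adjust the search to your environment):

# hammer host list --search 'name ~ virt-who-'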

Additionally, virt-who-created entities are usually named virt-who-UUID-1 or virt-who-HypervisorHostname-1. To easily identify them in the Satellite Web UI, the following filter can be used as a search pattern:

name ~ "virt-who-%-%-%"

Happy hacking!

Deploying the Cloudforms appliance template in vCloud Director

Red Hat provides the CloudForms software nicely packaged as a VMware OVA template; unfortunately this means that some manual work is required to deploy it under vCloud Director. Note that this blog post only covers getting the template into vCloud Director; vCloud Director is not on the list of cloud providers supported by CloudForms.

Once we have downloaded the CloudForms software, the first step is converting the template from OVA to OVF format, the only format supported by vCloud Director.

# ovftool cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ova cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
Opening OVA source: cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ova
Opening OVF target: cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
Writing OVF package: cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
Transfer Completed                    
Warning:
 - Wrong file size specified in OVF descriptor for 'disk.vmdk' (specified: 42949672960, actual 707240448).
 - No manifest entry found for: 'disk.vmdk'.
 - No manifest file found.
Completed successfully

Note the warning message regarding the disk.vmdk file; this is due to the template being thin-provisioned. For vCloud Director to accept the OVF file, we need to fix the declared disk size in the produced cfme-vsphere-*.ovf:

sed -i 's#42949672960#707240448#g' cfme-vsphere-*.ovf

Once we have done that, we need to amend the manifest file with the right sha1sum:

# sha1sum  cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
17df197d9ef7859414aac0f6703808a9a8b99286  cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf

# cat cfme-vsphere-5.7.1.3-1.x86_64.vsphere.mf
SHA1(cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf)= 17df197d9ef7859414aac0f6703808a9a8b99286
SHA1(cfme-vsphere-5.7.1.3-1.x86_64.vsphere-disk1.vmdk)= 696baa7f8803beca7be2ad21cde2b6cc975c6c57
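
If you prefer not to edit the .mf file by hand, both entries can be regenerated in one go (a quick sketch):

# sha1sum cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf \
          cfme-vsphere-5.7.1.3-1.x86_64.vsphere-disk1.vmdk \
    | awk '{ printf "SHA1(%s)= %s\n", $2, $1 }' > cfme-vsphere-5.7.1.3-1.x86_64.vsphere.mf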

Finally, we can import the template into vCloud Director itself, again using the ovftool software:

# ovftool --vCloudTemplate cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf  "vcloud://myuser@myvcloud.example.org:443?org=myorg&vappTemplate=CFME42-Template&catalog=MyCatalog&vdc=MyVDC"
Opening OVF source: cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
The manifest validates
Enter login information for target vcloud://myvcloud.example.org/myorg
Username: myuser
Password: *******
Opening vCloud target: vcloud://myuser@myvcloud.example.org/
Deploying to vCloud vApp template: vcloud://myuser@myvcloud.example.org/
Transfer Completed
Completed successfully

Once that step completes, the template is available for deploying new CloudForms VMs.

Happy hacking!

Red Hat Satellite useful links

This is a list of assorted links related to Red Hat Satellite and its vibrant ecosystem of community-produced tools that I have found useful. I'll keep this page updated whenever I find anything interesting, so keep visiting it. :)

Red Hat content

General content

Red Hat Knowledge base articles

Github projects

API usage and samples

Happy reading ;-)

Red Hat Virtualization to get Hyperconvergence (HCI) support

It's been a while since I last touched Red Hat Virtualization (now without the 'E'!). One thing that caught my eye in this review is the fact that hyperconvergence is now a thing and will be supported very soon.

The idea behind this is to provide ROBOs (remote offices / branch offices) with a low-complexity solution to support their computing requirements without investing in difficult-to-manage technology. But enough with the buzzwords; the technical details are:

  • Virtualization technology provided by Red Hat Virtualization (RHV) 4.1.
  • Software-defined storage technology provided by GlusterFS.
  • Automated installation via the Cockpit software.

[Image: RHV pod components]

The plan is to support configurations of up to three pods (3, 6, and 9 hosts); remember this is a ROBO product. As with all beta software, this is subject to change in the final product, so keep an eye on the Release Notes.

Here are a few links with further info if this sounds interesting to you:

Happy hacking!

Introduction to hammer csv

Hammer CSV is an extension of the Satellite hammer administration tool, built by Thomas McKay. The main idea behind it is to generate machine-parsable exports of different Satellite configurations that can easily be applied across organizations of the same or different Satellite servers.

What has been developed so far:

  • activation-keys
  • architectures
  • compute-profiles
  • compute-resources
  • containers
  • content-hosts
  • content-view-filters
  • content-views
  • domains
  • host-collections
  • host-groups
  • hosts
  • installation-media
  • job-templates
  • lifecycle-environments
  • locations
  • operating-systems
  • organizations
  • partition-tables
  • products
  • provisioning-templates
  • puppet-environments
  • puppet-facts
  • puppet-reports
  • reports
  • roles
  • settings
  • smart-proxies
  • subnets
  • subscriptions
  • sync-plans
  • users

And these are a few ideas of what can be done:

  • Export Content Views and Composite Content Views between Satellites, provided that you are using date-based CV filters.
  • Back up your settings to a git repo so you can track when changes were made.
  • Export your hosts/content-hosts to a list.

I put together a very simple export script that can be used to poke around at what is actually being exported:
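
A minimal sketch of such a script (assuming the hammer csv plugin is installed; adjust the resource list and paths to taste):

#!/bin/bash
# Export a few resource types to CSV files for inspection.
for resource in activation-keys content-views host-collections settings users; do
    hammer csv "$resource" --export --file "/tmp/${resource}.csv"
done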

Happy hacking!