Openvswitch – Cannot Update CentOS 7


When you run CentOS 7 with openvswitch enabled, you may find that you cannot update the system: yum update times out even though the server can reach the internet and the mirror hostnames resolve. So what is the problem? This usually happens on CentOS 7 machines running in the cloud or inside a virtualization environment that uses openvswitch.

You will probably get the error below when you run yum update or wget something.
[root@katello-vm ~]# yum update
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base:
* extras:
* updates:
base | 3.6 kB 00:00:00 [Errno 12] Timeout on (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
extras/7/x86_64/primary_db FAILED ] 0.0 B/s | 11 kB --:--:-- ETA

and the list will continue..

This happens because the running CentOS kernel is not supported by openvswitch. In my case, I was running kernel 3.10.0-327.28.3.el7.x86_64, which is not supported by openvswitch. So I needed to install a newer kernel (3.10.0-957.21.3.el7.x86_64) and the matching firmware. Download the kernel and firmware here.

Now install the kernel and firmware without dependency checks. Follow the sequence: first the kernel, then the firmware.
rpm -ivh kernel-3.10.0-957.21.3.el7.x86_64.rpm --nodeps
rpm -Uvh linux-firmware-20180911-69.git85c5d90.el7.noarch.rpm --nodeps

Remove grubenv
rm -rf /boot/grub2/grubenv

Reboot your system.

After your server rebooted, you can update your system.
yum update -y
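After the reboot, a quick check confirms the system actually booted into the newly installed kernel:

```shell
# Print the running kernel release; it should now show the newly
# installed version (3.10.0-957.21.3.el7.x86_64 in this post).
uname -r
```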

have a nice day!


Posted in Install, Linux, RedHat

Create ID In RHV/oVirt


Below is a step-by-step guide to creating a user in RHV/oVirt and granting that user privileges to access the Manager.

Create ID
ovirt-aaa-jdbc-tool user add hanief --attribute=firstName=Hanief --attribute=lastName=Harun

Show ID
ovirt-aaa-jdbc-tool user show hanief

Set User Password
ovirt-aaa-jdbc-tool user password-reset hanief --password-valid-to="2025-08-01 12:00:00-0800"

Allow User to Access RHV/oVirt Manager

  1. Using the super-admin user, log in to the RHV/oVirt Manager > Administration > Users
  2. Select the user > Permissions > Add System Permissions > Role to Assign > add SuperUser


Posted in Linux, oVirt, RHEV, RHV

Error “Cannot import VM. Selected display type is not supported by the operating system” When Importing VM from RHEV to oVirt


When importing VMs from a Storage Domain (usually an export domain) you may get a “Cannot import VM. Selected display type is not supported by the operating system” error.

This is my workaround. The issue happened when I tried to import a VM into oVirt 4.3; the VM originally came from RHEV 3.4.

  1. Detach the Storage Domain (mine is an export domain).
  2. Find the OVF files on the Storage Domain. (Search for the VM’s ID. In oVirt 4.2, the OVF file will be in /export_domain_path/export_domain_ID/master/vms/VM_ID)
  3. Edit these OVF files and change their <DefaultDisplayType> to 1 (or a supported value).
  4. Reattach Storage Domain so OVF_STORE files are re-read.
  5. Import the VM.
  6. After you have successfully imported the VM, you may need to modify the console value on the VM. (VM > Edit > Show Advanced Options > Console)
  7. Boot up the VM to test.
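Step 3 can also be scripted with sed. This is just a sketch using the placeholder path from step 2; -i.bak keeps a backup of each file before it is changed:

```shell
# Rewrite <DefaultDisplayType>...</DefaultDisplayType> to 1 in every OVF.
# The path below uses the placeholder names from step 2, not real values.
sed -i.bak 's#<DefaultDisplayType>[0-9]*</DefaultDisplayType>#<DefaultDisplayType>1</DefaultDisplayType>#' \
    /export_domain_path/export_domain_ID/master/vms/VM_ID/*.ovf
```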

That’s it, we’re done!


Posted in RHEV, RHV, Virtualization

How to Check Top Running Processes by Highest Memory and CPU Usage in Linux


The following command shows the list of top processes ordered by RAM and CPU usage in descending order (remove the pipe to head if you want to see the full list):

ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head

A brief explanation of the options used in the command above.

The -o (or --format) option of ps allows you to specify the output format. A favourite of mine is to show the processes’ PIDs (pid), PPIDs (ppid), the name of the executable file associated with the process (cmd), and the RAM and CPU utilization (%mem and %cpu, respectively).

Additionally, I use --sort to sort by either %mem or %cpu. By default, the output is sorted in ascending order, but personally I prefer to reverse that order by adding a minus sign in front of the sort criterion.
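For example, ranking by CPU instead of memory only means negating %cpu in the sort key:

```shell
# Top 10 processes by CPU usage, highest first.
ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%cpu | head
```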

To add other fields to the output, or change the sort criteria, refer to the OUTPUT FORMAT CONTROL section in the man page of ps command.


Posted in Linux, UNIX

How to Backup oVirt Engine


Backing up is straightforward. See engine-backup --help for details.

Below are the steps for backing up the oVirt engine.

engine-backup --mode=backup --file=backup1 --log=backup1.log

Full backup
engine-backup --mode=backup --scope=all --file="/var/lib/ovirt-engine/backups/fullbackup-$(date +%Y%m%d%H%M%S).tar.bz2" --log=/var/log/ovirt-engine/fullbackup-$(date +%Y%m%d%H%M%S).log

Files and DB backup
engine-backup --mode=backup --scope=files --scope=db --file="/var/lib/ovirt-engine/backups/DBbackup-$(date +%Y%m%d%H%M%S).tar.bz2" --log=/var/log/ovirt-engine/DBbackup-$(date +%Y%m%d%H%M%S).log
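One caveat with the one-liners above: each of the two $(date ...) substitutions runs separately, so the backup file and its log can end up with timestamps that differ by a second. A small sketch (same paths as above) that captures the timestamp once:

```shell
#!/bin/sh
# Capture a single timestamp so the backup and its log share the same name.
STAMP=$(date +%Y%m%d%H%M%S)
BACKUP="/var/lib/ovirt-engine/backups/fullbackup-$STAMP.tar.bz2"
LOG="/var/log/ovirt-engine/fullbackup-$STAMP.log"
engine-backup --mode=backup --scope=all --file="$BACKUP" --log="$LOG"
```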



Posted in Linux, oVirt, RedHat, RHEV, RHV

Iperf3: How to Specify the Amount of Data to be Transmitted

I needed to test my server-to-server connection with a 1 GB data transfer. Below are the commands.

Server IP:
Client IP:

On Server, run
iperf3 -s

On client, run
iperf3 -c -n 1G
iperf3 -c -n 1024M
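As far as I know, iperf3 treats the K/M/G suffixes on -n as binary (1024-based) multipliers, so the two commands above request the same amount of data:

```shell
# 1G = 1024M = 1073741824 bytes with binary suffixes.
echo $((1024 * 1024 * 1024))
```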

Example on client:

With the default buffer size:

iperf3 -c -n 1024M
Client connecting to, TCP port 5201
TCP window size: 85.0 KByte (default)
[  3] local port 56565 connected with port 5201
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.5 sec  1.00 GBytes  8.20 Gbits/sec 

With the same value for the -n option, but with -l 32K:

iperf3 -c -n 1024M -l 32K
Client connecting to, TCP port 5201
TCP window size: 85.0 KByte (default)
[  3] local port 56568 connected with port 5201
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.5 sec  1.00 GBytes  8.17 Gbits/sec


Posted in Linux

Configure Multipathing in Linux (RHEL7/CentOS7)


After you have assigned a LUN (usually configured on the storage side) to a specific client, below are the steps to configure multipath on the client machine (RHEL7/CentOS7).

Install device-mapper-multipath

[root@www ~]# yum -y install device-mapper-multipath

Install iSCSI Initiator.

[root@www ~]# yum -y install iscsi-initiator-utils

Configure multipath.conf: comment out the line “find_multipaths yes”.

[root@www ~]# vim /etc/multipath.conf
# find_multipaths yes

Configure iSCSI Initiator

[root@www ~]# vi /etc/iscsi/initiatorname.iscsi
# change to the same IQN you set on the iSCSI target server

Discover the target (use the iSCSI target server’s IP with -p):

[root@www ~]# iscsiadm -m discovery -t st -p

Confirm status after discovery

[root@www ~]# iscsiadm -m node -o show

Login to the target

[root@www ~]# iscsiadm -m node --login
node.tpgt = -1
node.startup = automatic
node.leading_login = No
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No

Confirm the established session

[root@www ~]# iscsiadm -m session -o show
tcp: [1],1 (non-flash)
tcp: [2],1 (non-flash)

Restart multipathd and check iscsid status

[root@www ~]# systemctl restart multipathd

[root@www ~]# systemctl status iscsid

Check attached disk

[root@www ~]# multipath -ll

mpathb (36589cfc0000007b06bdc809ab3ad6dc2) dm-2 FreeNAS ,iSCSI Disk      
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 3:0:0:0 sdb 8:16 active ready running

Edit /etc/multipath.conf. Add the section below at the bottom (the wwid is the one reported by multipath -ll above):

multipaths {
        multipath {
                wwid    36589cfc0000007b06bdc809ab3ad6dc2
                alias   DATA01
        }
}
Restart multipathd

[root@www ~]# systemctl restart multipathd

Create the mount point

[root@www ~]# mkdir -p /data01

Print the disk information

[root@www ~]# parted /dev/mapper/DATA01 
Error: /dev/mapper/DATA01: unrecognised disk label
Model: Linux device-mapper (multipath) (dm)                               
Disk /dev/mapper/DATA01: 5369MB
Sector size (logical/physical): 512B/16384B
Partition Table: unknown
Disk Flags:

Label disk

[root@www ~]# parted /dev/mapper/DATA01 mklabel gpt

Print again to check label

[root@www ~]# parted /dev/mapper/DATA01 print
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/DATA01: 5369MB
Sector size (logical/physical): 512B/16384B
Partition Table: gpt
Disk Flags: 

Number  Start  End  Size  File system  Name  Flags

Configure disk to use all space

[root@www ~]# parted /dev/mapper/DATA01 mkpart DATA01 xfs 1 5369MB

Format disk (I’m using xfs as my file system)

[root@www ~]# mkfs.xfs -f /dev/mapper/DATA01p1

Check the block id (and copy the UUID as we will use it in the further step)

[root@www ~]# blkid /dev/mapper/DATA01 
/dev/mapper/DATA01: UUID="909a097e-6e01-490a-9d50-f4727c3efc04" TYPE="xfs"

Mount disk at boot using fstab (note: use _netdev)

[root@www ~]# vim /etc/fstab

UUID=909a097e-6e01-490a-9d50-f4727c3efc04       /data01	xfs	_netdev	0 0

Mount new disk

[root@www ~]# mount -a

To test your newly mounted disk, try to create a file (using touch or vi) in the /data01 partition. If that succeeds, your disk is ready to work.
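findmnt (from util-linux) offers a quick way to confirm the mount; it exits non-zero when the path is not a mount point:

```shell
# Shows source, fstype and options if /data01 is mounted; fails otherwise.
findmnt /data01
```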

Now reboot your server and check whether /data01 is mounted automatically. If everything is fine, you are golden.


Posted in Linux