Resolving poor NFS performance on a virtualized Exadata

While researching an NFS performance issue on a client’s virtualized Exadata clusters, I came across a MOS article (Doc ID 1930365.1) that, although written for a virtualized Exalytics system, resolved the issue nicely.

The network parameter txqueuelen (the transmit queue length of an interface) requires a larger value than the default when a network port will be used for work such as copying multi-gigabyte files or running a Data Pump import of a large database from an NFS mount. This is especially true on 10 Gigabit Ethernet.

Oracle VM Server, which serves as the virtualization base on Exadata, was setting txqueuelen to only 32 on each of the virtual interfaces (vif).
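You can verify this from dom0 before making any changes; the interface names below (eth1 for the physical 10GbE port, vif7.2 for the test node’s virtual interface) are just the ones from our environment, so substitute your own.

    # On dom0: show the current transmit queue length for an interface.
    # The value appears as "qlen" in ip output and "txqueuelen" in ifconfig output.
    ip link show eth1
    ip link show vif7.2
    ifconfig vif7.2 | grep -i txqueuelen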

For the client, we did the following:

1.  Ensured that the NFS mount options in /etc/fstab matched Oracle recommendations (an illustrative mount entry is sketched after this list).

2.  Utilized the ‘ifconfig’ program (‘ip link’ works as well) on dom0 to set the txqueuelen parameter to 10000 for the physical interface (eth1) and the virtual interfaces (such as vif7.2 on our test VM node), as shown after this list.

3.  After this, we also set the same parameter (txqueuelen 10000) on the virtual cluster guest nodes.

4.  We made these changes permanent by setting the parameter in /etc/rc.local on dom0 and on each of the virtual guest domains, per the MOS document referenced.
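As a rough sketch of steps 1 through 4, assuming the same interface names as above and a hypothetical NFS server and mount point; the actual NFS mount options should come from the MOS notes for your platform, not from this example:

    # Step 1 (illustrative only): an NFS entry in /etc/fstab on the guest.
    # Server name, paths and options are placeholders.
    # nfsserver:/export/dumps  /mnt/dumps  nfs  rw,bg,hard,tcp,vers=3,rsize=131072,wsize=131072,timeo=600  0 0

    # Steps 2 and 3: raise txqueuelen on dom0 (physical port plus each vif)
    # and on the interfaces of the guest domains.
    ifconfig eth1 txqueuelen 10000
    ifconfig vif7.2 txqueuelen 10000
    # iproute2 equivalent:
    #   ip link set dev eth1 txqueuelen 10000

    # Step 4: make the change persistent across reboots by appending the same
    # commands to /etc/rc.local on dom0 and on each guest domain.
    echo 'ifconfig eth1 txqueuelen 10000'   >> /etc/rc.local
    echo 'ifconfig vif7.2 txqueuelen 10000' >> /etc/rc.local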

Results: a Data Pump import over NFS of a 50+ gigabyte table dump file, which had previously taken 6 hours, finished in about half an hour. Only about 10 minutes of that later run was spent actually moving data across the network.

During initial testing, we were lucky to see a 9 MB/sec transfer rate across a 10 Gigabit Ethernet port. After implementing the changes, that jumped to 200 MB/sec.

Keep this in mind if NFS performance on a virtualized Exadata is slow.

Oracle VM for SPARC: Move PCIe Fibre Cards into Guest Domains

I recently worked on a project where the customer wanted to move a Fibre Channel HBA card into a Solaris 11 Guest Domain, making it an I/O domain. Here are the steps necessary to perform this task.

Note that you will want to be extremely careful about which slots the Fibre Channel cards are placed in. On the T5 system I was working on, for example, one of the HBA cards was initially in slot 6. In this example, slot 6 belongs to pci_0, which is also the main PCI bus that contains the Solaris boot drives for the Control Domain!

[Screenshot: prtdiag output showing the PCIe slot layout]
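From the Control Domain, a check along these lines shows which slot each HBA occupies and which PCIe bus (root complex) that slot belongs to; the grep pattern is only an example and depends on the HBA driver in use.

    # On the Control Domain: map HBAs to their PCIe slots and root complexes.
    prtdiag -v | grep -i 'slot\|emlx\|qlc'
    # ldm list-io shows every PCIe bus and slot together with the domain that owns it.
    ldm list-io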

In this example, I have two PCIe Fibre Channel HBAs.  I’m moving the one in slot 4 and the one in slot 1 to two separate Guest Domains.

[Screenshot: ldm output for the domains]
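The details are in the screenshot above; as a general sketch, the ldm sequence for handing a slot to a guest usually looks something like the following. The device names (/SYS/MB/PCIE4, /SYS/MB/PCIE1) and guest names (ldg1, ldg2) are placeholders, so take the real ones from the ldm list-io output on your system.

    # On the Control Domain. Device and guest names are placeholders.

    # Remove the two HBA slots from the primary (Control) domain. This puts the
    # primary domain into delayed reconfiguration, so reboot it afterwards.
    ldm start-reconf primary
    ldm remove-io /SYS/MB/PCIE4 primary
    ldm remove-io /SYS/MB/PCIE1 primary
    init 6

    # After the reboot, assign each HBA to its Guest Domain. The guest must be
    # stopped while the direct I/O device is added.
    ldm stop-domain ldg1
    ldm add-io /SYS/MB/PCIE4 ldg1
    ldm start-domain ldg1

    ldm stop-domain ldg2
    ldm add-io /SYS/MB/PCIE1 ldg2
    ldm start-domain ldg2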

[Screenshot: confirming the PCIe assignments from the Control Domain]
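From the Control Domain, the assignment can be double-checked with ldm list-io; the DOMAIN column should now show the guest that owns each HBA slot rather than primary.

    # On the Control Domain: confirm which domain owns each PCIe slot.
    ldm list-io | grep -i pcie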

The output that follows is from the two separate Guest Domains.

[Screenshot: confirming the PCIe devices from within the two Guest Domains]
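Inside each Guest Domain (now an I/O domain), the HBA shows up as a locally attached device. A check along these lines can confirm it; the grep pattern is again just an example.

    # Inside the Guest Domain: the FC HBA ports should be visible locally.
    fcinfo hba-port
    # The PCIe device also appears in the device tree.
    prtconf -v | grep -i 'fibre\|emlx\|qlc'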