
CORE STUDY: Ubuntu NFS / VMWARE standup


LAST UPDATE: 02/02/2022 – THIS IS MEANT TO BE A PREVIEW – but I am publishing it to make review easier. While I am editing, the step numbers are not going to match, and not all of the following needs to be done in lockstep order.

Let’s review some basics as a practical exercise. There are a number of follow-up exercises, or more advanced cases, that can be performed after this: running NFS in a high-availability (HA) pair, bonding adapters to increase throughput, reviewing packet (MTU) size, and keeping an eye on the default ARP TTL.

Let’s review a theoretical environment to establish an understanding. Again, the goal here is standing up an Ubuntu NFS server which can act as a DATASTORE for a VMware ESXi server.

Let’s say we take a Dell R410 as the Ubuntu physical host. For ease of conversation, let’s say we are booting off a 64 GB flash drive and using a 2 TB hard drive for storage. I have a number of storage devices that are either SFTP or NFS as a matter of convenience.

STEP 1: Install Ubuntu. In this case we are configuring the host to use 172.16.104.251. The installation of Ubuntu is a core skill and we won’t review the details here. Suffice it to say that we use Ubuntu Server, configure a network IP address on the machine, and test network connections by pinging another well-known host; I would ping the gateway and the ESXi host as well. You might also verify that you are connecting at the speed you intended (say 1G or 10G). Nothing is as puzzling during troubleshooting as finding out that Ubuntu and your hardware negotiated at only 10M.
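The connectivity and link-speed checks might look like the following sketch. The interface name eno1 is an assumption (list your interfaces first); the address is the ESXi host from this walkthrough, and the hardware-bound commands are commented out since they only make sense on the real host.

```shell
# List interfaces and their state; substitute your interface name below.
ip -br link
# Check the negotiated link speed (run on the real host, as root if needed):
# ethtool eno1 | grep -i speed    # expect 1000Mb/s or 10000Mb/s, not 10Mb/s
# Confirm reachability of the ESXi host used in this walkthrough:
# ping -c 3 172.16.104.250
```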

STEP 2: With Ubuntu installed, install the NFS service.

apt install nfs-kernel-server

A good point to remember is to set at least some permissions on the exported directory (and any children). Something like the following might work nicely for you.

From /mnt I usually create a series of directories like lvnfs, lviso, lvdict, and lvkvm to store items. I will mount a logical volume as a matter of convenience to each of these directories.
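That layout can be sketched as follows. The scratch prefix /tmp/nfs-layout-demo is only so the sketch runs without root; on the real host the prefix is /mnt, and each directory then gets its own logical volume mounted over it.

```shell
# Create the mount-point directories described above.
# The /tmp prefix is for illustration only; the real prefix is /mnt.
base=/tmp/nfs-layout-demo
mkdir -p "$base"/lvnfs "$base"/lviso "$base"/lvdict "$base"/lvkvm
ls "$base"
# On the real host, each directory then receives a logical volume (root required), e.g.:
# mount /dev/vg00/lvnfs /mnt/lvnfs
```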

chown nobody:nogroup lvnfs

The nobody:nogroup user and group works nicely. (GNU chown also accepts the older nobody.nogroup dot syntax.)

Now you can edit the /etc/exports file, which defines which resources are exported via NFS.

/mnt/lviso  172.16.104.250(rw,sync,no_subtree_check)
/mnt/lvnfs  172.16.104.250(rw,sync,no_subtree_check)

You can always issue the “exportfs” command to reload or trigger the NFS service to re-read its exports if you make a change. This is far easier than restarting the machine – or the NFS service using systemctl.

exportfs -ra
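If you want to confirm what the server is actually exporting after the reload, something like this works; both commands need the NFS server installed and running, and showmount ships with the nfs-common package on Ubuntu.

```shell
# Show the current export table with its effective options.
exportfs -v
# Ask the server what it advertises to clients.
showmount -e localhost
```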

STEP 4: Configuring the NFS share as a DATASTORE on ESXi

If you are recycling a machine or a configuration, you may want to remove the old “datastore” first. You can do this from the command line like so:

esxcli storage nfs remove -v lvnfs

Now you can add it back in again. The quickest way to add a new datastore is from the command line of the ESXi host: turn on the SSH service at the GUI console, then SSH into the machine.

esxcfg-nas -a lvnfs -o 172.16.104.251 -s /mnt/lvnfs
esxcfg-nas -a lviso -o 172.16.104.251 -s /mnt/lviso
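You can confirm the result from the same SSH session; this has to run on the ESXi host itself.

```shell
# List the NAS datastores the ESXi host currently knows about.
esxcfg-nas -l
```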

Once the data store is in place it’s ready for use. You can define new hosts there, or (if you are recycling) you can import a host that you had used previously. This is where I typically have to remind myself to set the permissions on the folders so that the ESXi host has permission to write and update.
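That permissions reminder looks like the following, sketched against a scratch directory so it can run without root; on the server the targets are /mnt/lvnfs and /mnt/lviso.

```shell
# Permissions reminder; /tmp path is for illustration, the real target is /mnt/lvnfs.
dir=/tmp/nfs-perms-demo
mkdir -p "$dir"
chmod 755 "$dir"
# chown -R nobody:nogroup "$dir"   # root required on the real host
stat -c '%a' "$dir"                # prints the octal mode, here 755
```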

STEP 6: Modify /etc/fstab

I am considering this a clean-up step; if you wanted, it could have been done earlier – say, just after you established your NFS exports. One of the things we want to do next is configure the “fstab” of the host with the logical volume used for storage; automating getting things back into place after a reboot is better than doing it manually. We can use the blkid command to discover the UUID of the logical volume.

root@nfs2:/etc# blkid /dev/vg00/lvnfs
/dev/vg00/lvnfs: UUID="576360d8-3633-4cac-9838-4d59848c3e68" BLOCK_SIZE="4096" TYPE="ext3"

Now that we know the UUID of the volume we want to mount, we can edit /etc/fstab and add an entry for it.

UUID=576360d8-3633-4cac-9838-4d59848c3e68   /mnt/lvnfs  ext3 defaults 0 2
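Before trusting the entry to a reboot, you can check it in place; both commands need root and the real volume present.

```shell
# Mount everything in fstab that is not already mounted...
mount -a
# ...then confirm the volume actually landed where expected.
findmnt /mnt/lvnfs
```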

Thank you!
