The main reason I document the detailed processes for performing system administration tasks on this site is for me. I know that others enjoy reading these processes and find them helpful, but my primary target audience is myself. I tend to forget the steps I performed fairly quickly after completing a task. Over the years I have found it most helpful to document the steps on the web, since I can easily find them and repeat the process whenever needed, wherever I happen to be.
Recently, I had the opportunity to refer back to an article I had written a few months before on how to install the python-pip package for Python 2.6 running under CentOS 6. I was planning on using the Boto API to create and modify infrastructure in AWS. Unfortunately, I had repurposed the original server that had Boto installed, so I needed to use pip to add Boto to a new CentOS 6 server, and of course this required that I install pip first. Since I had written up the process so recently, I was very surprised to run into errors when following it just a few months later. Continue reading
In this article I will describe the step-by-step process I used to install the Boto API under Python 2.6 on my CentOS 6.3 server. Boto can be used to create and manage AWS infrastructure from a Python program or script. Once the API is installed and access is configured, I will demonstrate a “hello world” style script for starting existing server instances and for shutting them down again. Continue reading
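The start/stop “hello world” can be sketched roughly as follows, using the boto 2.x EC2 API (the version current for Python 2.6). The region and instance ID below are placeholders, and valid AWS credentials (for example in ~/.boto or environment variables) are assumed; this is a sketch, not the article's exact script.

```python
# Minimal sketch using the boto 2.x EC2 API.  Region and instance ID
# are placeholders -- substitute your own.  Credentials are expected
# in ~/.boto or the AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY variables.
import boto.ec2

REGION = 'us-east-1'          # assumed region
INSTANCE_ID = 'i-0123abcd'    # placeholder instance ID

conn = boto.ec2.connect_to_region(REGION)

# Start an existing (stopped) instance...
conn.start_instances(instance_ids=[INSTANCE_ID])

# ...and later shut it down again.
conn.stop_instances(instance_ids=[INSTANCE_ID])
```

Both calls return lists of the affected instance objects, so you can poll their `state` attribute if you want to wait for `running` or `stopped`.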
This is a very short article on how to install pip under Python 2.6. I am doing this so that I can install Boto, the Amazon Web Services Python API. However, since this looks like a procedure I may need to repeat in the future, I decided to make it a separate article so that I can find and refer to it more easily when I need it. Continue reading
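For reference, one common way to get pip onto a CentOS 6 box with Python 2.6 was via the EPEL repository. The exact EPEL release RPM URL and version change over time, so treat the one below as an assumption to verify before running:

```shell
# Enable EPEL (verify the current release RPM for CentOS 6 first),
# then install pip from it.
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install -y python-pip

# EPEL 6 installed the binary as pip-python; a symlink lets you
# call it as plain "pip".
ln -s /usr/bin/pip-python /usr/bin/pip

# Now Boto can be installed.
pip install boto
```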
In this article I will describe the process that I used to connect 3 ESXi hosts to shared storage using VMware Virtual SAN (VSAN). Virtual SAN is included with ESXi 5.5 and allows unallocated disk space on each host to become part of a shared storage array. The storage cluster requires at least 3 hosts and can grow to as many as 32 nodes. Each host must have both SSD and non-SSD storage available.
I am going to create 3 ESXi hosts, each with 4GB of SSD and 20GB of non-SSD storage. Each host will have a primary NIC used for the management network and a second NIC used for the VSAN network. The second NIC of each host will be connected to a VMware distributed switch. These 3 nodes will then be clustered in vCenter with VSAN enabled. Continue reading
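Most of the cluster setup happens in the vCenter web client, but as a rough sketch the per-host pieces can also be done with esxcli on ESXi 5.5. The VMkernel interface name and device identifiers below are placeholders for whatever your hosts actually have:

```shell
# Tag the VMkernel interface on the VSAN network (here assumed vmk1,
# the one attached to the distributed switch) for VSAN traffic.
esxcli vsan network ipv4 add -i vmk1

# Claim one SSD and one non-SSD disk into a disk group.
# The naa.* identifiers are placeholders -- list yours with
# "esxcli storage core device list".
esxcli vsan storage add -s naa.ssd_device_id -d naa.hdd_device_id

# After VSAN is enabled on the cluster in vCenter, verify membership.
esxcli vsan cluster get
```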
There are currently 2 clients available for managing vSphere: the legacy vSphere Desktop Client and the newer vSphere Web Client. The web client was released as part of ESXi 5.1 and is the strategic direction for VMware. As a result, the newer features of ESXi 5.5 are available in the web client but not in the desktop client. With 5.5, the web client offers a powerful management solution. This article will describe the step-by-step process for accessing the client from your browser and configuring it for use.
You must have vCenter installed in order to use the web client. To log in, first point your browser at the IP address of the vCenter server using HTTPS with no port specified. This will bring up a getting started screen with a link for logging into the web client. Continue reading
In this article I am going to present the step-by-step process for installing the vCenter Server Appliance (VCSA) version 5.5. With ESXi 5.5, VCSA has become a production-ready platform, and for many use cases it is preferable to installing vCenter Server under Windows. VCSA is deployed as an OVF template and requires an ESXi host for installation. In this article, I will be using an ESXi 5.1 host for the installation.
After the OVF template has been downloaded, use the vSphere client on the existing ESXi host to deploy it. Continue reading
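As an aside, the same OVF deployment can be scripted with VMware's ovftool instead of the vSphere client GUI. The file name, appliance name, datastore, and host address below are placeholders:

```shell
# Deploy the VCSA OVA to an existing ESXi host with ovftool.
# Substitute your own file name, datastore, and host; ovftool will
# prompt for the root password.
ovftool --acceptAllEulas \
  --datastore=datastore1 \
  --name=vcsa55 \
  VMware-vCenter-Server-Appliance-5.5.0.ova \
  'vi://root@esxi-host/'
```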
This article captures the step-by-step process I followed to install the 64-bit version of CentOS Linux 6.3 as a VMware guest under ESXi 5.5. In order to follow this process, it will be necessary to first install the VMware ESXi 5.5 host software and then install and configure the vSphere 5.5 client software. Please refer to my previous articles describing the step-by-step processes for both of those tasks. Continue reading
After ESXi 5.5 has been successfully installed, we would like to create guest VMs. Before we can do that, we must first install the vSphere client, which is used to manage ESXi. This article contains the step-by-step procedure for installing and configuring the client so that it can be used to create guest VMs under ESXi 5.5. Continue reading
Here is the step-by-step process that I used to install VMware ESXi 5.5 in my home lab. For this article, I will be using nested virtualization and will install the product under VMware Player. I have preconfigured the Player environment with 2 CPUs, 4GB of RAM, and a 40GB local hard disk. I also mounted the ESXi 5.5 ISO on the DVD drive so that it would boot during startup. The process starts with powering on the server. Continue reading
While working with some VMs in the lab, I realized that I needed to increase the size of a file system. This is actually not that big a deal, since I typically create template servers and clone them. It would be very easy to just create a server with the correct file system size and clone it.
However, as effective as that solution would be, it did not sound like much fun. So I did a bit of research using Google and found several procedures for extending file systems under VirtualBox using the GParted utility. I have used this utility in the past and know that it is effective. But again, this just didn’t sound like much fun.
As I was about to try the GParted process, I happened to find a blog post by Frank Munz at www.munzandmore.com. I don’t know anything about Frank or his blog, but the post interested me because instead of using GParted, he used the Linux Logical Volume Manager (LVM). I used LVM many years ago as a system administrator but had not really touched it in the past few years. In fact, 2 articles I wrote years ago about using LVM with AIX and with HP-UX are still referred to by some of my readers.
Seeing his blog post made me think that using LVM instead of GParted would be a lot more fun. So this article describes the step-by-step process I used, following his lead, to extend the root file system of an Oracle Linux 5.8 server running under VirtualBox from 16GB to 34GB. Continue reading
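For the impatient, the LVM-based resize boils down to roughly the following sequence, run inside the guest after the virtual disk itself has been grown in VirtualBox. The partition and volume names (/dev/sda3, VolGroup00, LogVol00) are common CentOS/Oracle Linux 5.x defaults and may differ on your system; check with pvs, vgs, and lvs first:

```shell
# Create a new partition (here /dev/sda3) of type 8e (Linux LVM)
# in the newly added disk space, then re-read the partition table.
fdisk /dev/sda
partprobe /dev/sda

# Turn the new partition into a physical volume and add it to the
# existing volume group.
pvcreate /dev/sda3
vgextend VolGroup00 /dev/sda3

# Grow the root logical volume into all of the free space, then
# grow the ext3 file system to match (online resize).
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00
```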