18 April, 2016

Configuring Microsoft Cluster on Windows Server 2003

Even though official support for Windows Server 2003 has ended, I have created this post on configuring Microsoft Cluster on Windows Server 2003. This is after knowing the fact that there are still quite a good number of domains in which Windows Server 2003 is being used.

In this lab, we have used the following machines:-

- Windows Server 2003 (Node1)
- Windows Server 2003 (Node2)
- Windows Server 2008 R2 (Domain Controller)
- Windows Server 2008 R2 (To Access OpenFiler SAN)
- OpenFiler SAN

Read the Following Post on "How to Configure SAN on OpenFiler" 

The Domain Controller 


Node1


Node2


OpenFiler SAN UI


OpenFiler SAN Connected From Another Machine


Create a Service Account to run the cluster service

Name of the Account: Clus_Srv



On both the nodes, we need to have 2 NICs:-

- One NIC is for the communication of the machine with the rest of the machines and the storage
- The second NIC is dedicated to communication with the other node (Node2) for the heartbeat, i.e. to check whether the other node is up and running. If no response is received, the cluster assumes the other node is down and fails over
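The heartbeat logic described above can be sketched as a simple loop. This is a conceptual Python simulation only, not how the cluster service actually implements its heartbeat protocol over the private NIC:

```python
import time

def monitor_heartbeat(check_peer, max_missed=3, interval=0.0):
    """Poll the peer node; declare failover after max_missed misses.

    check_peer is a callable returning True if the other node responded.
    Returning "failover" stands in for the cluster service moving
    resources to this node once the peer is presumed down.
    """
    missed = 0
    while True:
        if check_peer():
            missed = 0          # peer answered on the private NIC, reset
        else:
            missed += 1         # no response received
            if missed >= max_missed:
                return "failover"
        time.sleep(interval)

# Simulate a peer that answers twice and then goes silent.
responses = iter([True, True, False, False, False])
print(monitor_heartbeat(lambda: next(responses)))   # failover
```

The `max_missed` threshold is an invented parameter; the real cluster service has its own timeout tuning.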

For convenience, I have renamed the two NICs as Public and Private (for heartbeat)


Public Network Card

- IP to communicate with the domain controller
- "Register This Connection's Addresses in DNS" should be checked


Private Network Card

- IP to communicate with the other node (Node2)
- IP address should be in a different subnet range
- "Register This Connection's Addresses in DNS" should be unchecked
- "Disable NetBIOS over TCP/IP" should be selected
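The different-subnet requirement can be checked mechanically. A minimal Python sketch, assuming illustrative addresses (192.168.1.21 for the public NIC and 10.10.10.1 for the private one; these specific values are made up for this example):

```python
import ipaddress

# Hypothetical addresses: the public NIC sits on the production subnet,
# the private (heartbeat) NIC on a separate, dedicated subnet.
public_ip  = ipaddress.ip_interface("192.168.1.21/24")
private_ip = ipaddress.ip_interface("10.10.10.1/24")

# The two NICs must be on different subnets, otherwise the cluster's
# network detection cannot tell the public and private networks apart.
assert public_ip.network != private_ip.network
print(public_ip.network, private_ip.network)
```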



In the binding order, the Public network card should be on top


Same configuration on Node2


On Windows Server 2003 / 2008 we can download the iSCSI Initiator; on Windows Server 2012 the iSCSI Initiator is built in

Microsoft iSCSI Software Initiator Version 2.08
https://www.microsoft.com/en-in/download/details.aspx?id=18986

Let's install the iSCSI Initiator on Node1 and Node2 (Windows Server 2003)






Once installed, double click on the icon to launch the iSCSI Initiator

General Tab

Besides other options, the General tab lists the name of the initiator node. This is one of the most important pieces of information: on the iSCSI target, when you have multiple nodes connecting to it, you will need to know the names of all the nodes which have connected to that target. You also have an option to change this name

- Initiator Node Name
- Rename the Initiator Node
- Specify CHAP Secret
- Configure IPSec


Discovery Tab

Connecting a server to the SAN, or "presenting a LUN" from the SAN, consists of two parts:-

1. From SAN : Create a LUN and Map that LUN to a Disk/Drive using iSCSI Target
2. From Server/Client: Connect to the iSCSI Target using iSCSI Initiator

Part 2 will require you to first "Discover" an iSCSI target. For that you need to provide the IP address of the target along with a port number



IP of SAN: 192.168.1.50
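The discovery step boils down to handing the initiator a target portal, i.e. an IP address plus a port. A small Python helper sketch, assuming the standard iSCSI default of TCP port 3260 (the `parse_target_portal` name is invented for illustration):

```python
def parse_target_portal(portal, default_port=3260):
    """Split an 'ip[:port]' target-portal string.

    iSCSI targets listen on TCP 3260 by default, which is why the
    initiator only asks for an IP and pre-fills the port.
    """
    host, sep, port = portal.partition(":")
    return host, int(port) if sep else default_port

# The SAN address used in this lab, with the default iSCSI port applied.
print(parse_target_portal("192.168.1.50"))   # ('192.168.1.50', 3260)
```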



Targets Tab

Once the discovery is completed, in the Targets tab you will find the iSCSI target IQN which was generated in the previous post

"How to Configure SAN on OpenFiler" 


To confirm, here is a screenshot of the SAN with that particular IQN listed in the Targets tab of the iSCSI Initiator

Both start with the same number "1027"


We now need to authenticate by connecting to this particular LUN / iSCSI target

Click on "LogOn" and you will get a dialog box with the following options:-

- Automatically restore this connection when the system boots
- Enable multiple-path

You can keep the default settings (both options unchecked) and click OK.
However, if you want the connection to be made automatically the next time the system boots, select the first option.
Also, select the second option if you want to configure multi-path.

Multi-path means setting up different paths to connect to the storage, so that if one path fails, the connection is made using another path
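The multi-path idea can be sketched as "try each path until one answers". This is a conceptual Python simulation, not the actual Microsoft MPIO implementation; the path names and addresses are invented:

```python
def connect_multipath(paths):
    """Try each path to the storage in order; return the first that works.

    Each entry pairs a path name with a callable that returns True when
    a connection over that path succeeds. Simplified sketch of
    MPIO-style failover.
    """
    for name, try_connect in paths:
        if try_connect():
            return name
    raise ConnectionError("all paths to the storage are down")

# The primary path is down, so the connection fails over to the second.
path_taken = connect_multipath([
    ("nic1 -> 192.168.1.50", lambda: False),   # primary path unreachable
    ("nic2 -> 192.168.2.50", lambda: True),    # secondary path answers
])
print(path_taken)   # nic2 -> 192.168.2.50
```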



Advanced Settings


And once you click on OK, the status will be "Connected" instead of "Inactive"


Checking Disk Management, you will find the 10 GB disk connected to the system. However, it is just connected, not initialized.


Let's initialize the disk



Create a New Partition





Since this is a quorum disk, we will assign the letter "Q" to it

Quorum Disk:- All the configuration of a cluster is stored on the quorum disk. It is very important for all the nodes of a cluster to have access to the quorum disk. That's why this disk should be on shared storage (SAN)
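Conceptually, the quorum works like a shared resource that exactly one node owns at a time. A toy Python model of that idea (the real arbitration uses SCSI reservations on the shared disk, not a class like this):

```python
class QuorumDisk:
    """Simplified model of the shared quorum disk (Q:).

    Only one node owns the disk at a time; the owner reads and writes
    the cluster configuration stored on it.
    """
    def __init__(self):
        self.owner = None
        self.config = {}            # cluster configuration lives here

    def reserve(self, node):
        if self.owner is None:
            self.owner = node       # first node to reserve the disk wins
        return self.owner == node   # everyone else must wait

q = QuorumDisk()
assert q.reserve("Node1")           # Node1 grabs the quorum disk
assert not q.reserve("Node2")       # Node2 cannot form its own cluster
```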




The quorum disk is ready now


In the background, I have created a storage disk as well (100 GB). This disk will store my application-related data


Mapping the LUN to the Disk


The new LUN showed up on iSCSI Target


And it is connected now


The disk appeared in disk management


"S" Drive (The Storage Drive - 100 GB)



Node2

We will perform the same steps on Node2







And now the fun begins...

Cluster Administrator


Once you launch the cluster administrator, you will get the following options:-

- Create New Cluster
- Add Nodes to Cluster
- Open Connection to Cluster



Select the option "Create New Cluster"



Specify the Domain Name and Cluster Name


Domain Name: ADShotGyan.com
Cluster Name: Win2k3Cluster


Specify the Computer Name


"Advanced"

- Typical Configuration (Storage will be added automatically)
- Advanced Configuration (Storage needs to be specified manually)


We need to get into the details of what happens in the background when the cluster wizard runs

Select the "Advanced Configuration"




The setup is completed


In the next few screenshots, we will see what exactly happens when the cluster wizard runs











Enter an IP Address

This IP address will be unique and will represent this cluster instance. So every cluster instance will have its own name and IP address


IP Address: 192.168.1.100


User Name
Password
Domain Name

The user name will be the cluster service account name (Clus_Srv) which we have already created in Active Directory






Completed


Let's again go through the screenshots to check what happened in the background









Finally.... The Cluster Administrator


Let's browse through the snap-in

Cluster Group

A default Cluster Group will be created named "Cluster Group"

A cluster group will consist of the following:-

- Cluster Name
- Cluster IP
- Quorum Disk

As discussed earlier, a cluster will have a name and an IP address which are unique to that cluster
Also, the quorum is a disk attached to the cluster which stores all the configuration information about that cluster, and the quorum should be accessible from all the nodes of the cluster


Resources

You can add the following resources:-

IP Address
Network Name
Local Quorum
DHCP Service
Distributed Transaction Coordinator
File Server
Generic Application
Generic Service
Message Queuing
Physical Disk
Print Spooler
WINS Service
Volume Shadow Copy Service Task
Majority Node Set
Generic Script


Resource Types

Lists the resource types and their associated DLLs


Networks

Lists the network cards associated with the nodes



Network Interfaces


Node1

Active Groups

Lists the groups which are active on this node


Active Resources

Lists all the resources which are active on this node


Network Interfaces

Lists all the network interfaces on this node along with their status (Up/Down)


Let's now add Node2 to the existing cluster

Click on the top left corner (Highlighted in Red Box)


You will get the same box which you got while creating the cluster

This time, select the option "Add Nodes to Cluster"


Specify the Cluster Name or the Server Name



Specify the node name (Node2) to be added




Error !!!!


Lets view the log

The basic rule of reading any log is to start from the end. Also, note the date and exact time when the issue happened, and then read the log entries for that date and time

In the log below, the issue seems to be related to "Local Quorum": the cluster is unable to add Node2 to the local quorum. Isn't this obvious? As discussed earlier, the quorum disk stores the configuration data of a cluster, and this disk should be accessible from all the nodes. However, this cluster is using a local quorum (which is the default setting). But haven't we added a shared disk (Q:) to be used as the quorum? So it means we need to change the quorum disk from the local one to the shared one
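The read-from-the-end approach can be sketched in a few lines of Python. The `ERR` marker and the sample log lines are invented for illustration; the 2003 cluster log has its own format:

```python
def last_errors(log_lines, keyword="ERR"):
    """Scan a log from the end and return the most recent run of error lines.

    Reading backwards from the failure time is usually the fastest way
    to find the root cause; `keyword` is a placeholder for whatever
    marker your log format uses.
    """
    hits = []
    for line in reversed(log_lines):
        if keyword in line:
            hits.append(line)
        elif hits:
            break                   # stop once we leave the error block
    return list(reversed(hits))

# Invented sample entries loosely modelled on the failure described above.
log = [
    "00000001.INFO joining node Node2",
    "00000002.ERR unable to add node to local quorum",
    "00000003.ERR setup aborted",
]
print(last_errors(log))
```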


Right click on the cluster and select properties


Click on the Quorum Tab

But wait, we don't see any option to change the quorum disk to the Q: drive


So it means that we first need to add the quorum disk as a resource


Right click on Resources and select the option "New" -> Resource


Specify the following :-

Name
Description
Resource Type (Physical Disk)
Group (Cluster Group)



Select the Possible Owners

Here it displays only one node, as only one node is currently added to this cluster


Let's not select any dependencies


Select the Disk


Done


The Resource (Physical Disk) is still not online


Right click and select the option "Bring Online"



Right click on the cluster name and select properties

Select the Quorum Tab


From the drop down menu, select the Quorum Disk (Q:)



Now let's try to add a node again









Success !!!












Once the node has been added, there are a few changes that should ideally be made


In the cluster properties, select the "Network Priority" Tab

Move the Private network to the top, as this network will be used periodically by the nodes to check the availability of the other nodes





Currently the active node is Node1

Let's check if the cluster is functioning correctly





Stop the cluster service on Node1


The Cluster Group is now being moved from Node1 to Node2



All the resources have now moved to Node2




Currently the active node is Node2

On Node1, the disks (Quorum and Storage) will appear like this (unallocated)


And on the Active Node (Node2), they will appear online/healthy