Adding Nodes to a Cluster

Describes how to add nodes to a cluster.

About this task

You can add nodes to a cluster by using the web-based Installer (version 1.6 or later), Installer Stanzas, or manual steps. To add nodes by using the Installer or Installer Stanzas, see Extending a Cluster by Adding Nodes. To add nodes manually, complete the following steps:

Procedure

  1. Prepare all nodes.
    If you do not use the Domain Name System (DNS), ping the new node from an existing node and the existing node from the new node, using host names rather than IP addresses. If you do not get a response, and you have ruled out a network problem, a possible fix is to edit the /etc/hosts file on every node in the cluster; every node must be listed in every node's /etc/hosts file.
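    For example, assuming a hypothetical new node named node5.example.com, verify name resolution from an existing node (and repeat in the other direction from the new node):
      ping -c 3 node5.example.com
    If resolution fails, an entry of the following form (the IP address and host names are placeholders) must be present in the /etc/hosts file on every node in the cluster:
      10.10.1.15   node5.example.com   node5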
  2. Plan which packages to install based on services you want to run on the new nodes.
  3. Install HPE Ezmeral Data Fabric Software.
    • On all new nodes, add the HPE Ezmeral Data Fabric Repository.
    • On each new node, install the planned packages.
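    The exact commands depend on the operating system and the release you are installing. The following sketch is for a RHEL-compatible node; the repository URL, version, and package list are placeholders that you must replace with the values for your release and the services planned for the node:
      # contents of /etc/yum.repos.d/mapr_core.repo (placeholder URL and version):
      [MapR_Core]
      name = MapR Core Components
      baseurl = https://package.ezmeral.hpe.com/releases/v7.x.x/redhat/
      gpgcheck = 1

      # then install the packages planned for this node, for example:
      yum install -y mapr-core mapr-fileserver mapr-nfs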
  4. Configure all new nodes by running configure.sh.
    If you added a ZooKeeper role to a node, run configure.sh on all nodes, specifying the updated ZooKeeper list and the -no-autostart option. See configure.sh for more information.
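    For example, the following invocation (the cluster name and host names are placeholders) specifies the CLDB and ZooKeeper nodes without automatically starting services:
      /opt/mapr/server/configure.sh -N my.cluster.com -C cldb1,cldb2,cldb3 -Z zk1,zk2,zk3 -no-autostart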
  5. On all new nodes, format disks for use by HPE Ezmeral Data Fabric if you plan to reuse a node from another cluster.
    Formatting the disks of a reused node removes the data from the old cluster.
    NOTE All the disks (for use by HPE Ezmeral Data Fabric) on a node must be of the same type. That is, all the disks on a node must be either rotational disks or SSDs; a node with disks of both types is not supported.
    See Formatting Disks on a Node From the Command-line for more information.
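    For example, the following sketch (device names and the disk list location are placeholders) lists the disks to format and runs disksetup:
      echo /dev/sdb > /tmp/disks.txt
      echo /dev/sdc >> /tmp/disks.txt
      /opt/mapr/server/disksetup -F /tmp/disks.txt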
  6. If you manually modified configuration files on the existing nodes and those changes apply to the new nodes, copy only those changes to the respective files on the new nodes.
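    For example, if you maintain local settings in /opt/mapr/conf/env_override.sh on existing nodes (the file and the host name node5 are illustrative), copy that file to the new node:
      scp /opt/mapr/conf/env_override.sh node5:/opt/mapr/conf/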
  7. Perform the following steps if you added the node(s) to any secure cluster that is configured for cross-cluster operations.
    1. Copy the /opt/mapr/conf/mapr-clusters.conf file and /opt/mapr/conf/ssl_truststore file from another node to the new node(s).
    2. Copy the /opt/mapr/conf/maprserverticket file from:
      • A CLDB node if the new node is a CLDB node.
      • A non-CLDB node if the new node is not a CLDB node.
      The /opt/mapr/conf/maprserverticket file contains an additional entry for cross-cluster tickets. See Configuring Secure Clusters for Cross-Cluster NFS Access for more information.
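    For example, run the following commands from a suitable existing node (choose the source of the maprserverticket file according to the rules above; the host name node5 is a placeholder):
      scp /opt/mapr/conf/mapr-clusters.conf /opt/mapr/conf/ssl_truststore node5:/opt/mapr/conf/
      scp /opt/mapr/conf/maprserverticket node5:/opt/mapr/conf/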
  8. Start ZooKeeper on all new nodes that have ZooKeeper installed:
    service mapr-zookeeper start
  9. Start Warden on all new nodes:
    service mapr-warden start
  10. Restart services that you reconfigured.
    Running configure.sh alone does not reconfigure services such as ZooKeeper; reconfigured services also require a restart. For example, after running configure.sh, restart ZooKeeper on each node, one node at a time, restarting the ZooKeeper leader last. Restarting ZooKeeper adds the new nodes to the existing ZooKeeper quorum. Services that connect to the CLDB do not always discover a newly added CLDB node until Warden is restarted.
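    For example, a rolling ZooKeeper restart might look like the following; run it on one ZooKeeper node at a time, leaving the node that reports itself as the leader for last:
      service mapr-zookeeper qstatus
      service mapr-zookeeper restart
    If services on existing nodes do not discover a newly added CLDB node, restart Warden on those nodes:
      service mapr-warden restart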
  11. Set up node topology for the new nodes.
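    For example, to find the server ID of a new node and move it into a rack topology (the topology path /data/rack1 is a placeholder):
      maprcli node list -columns id,hostname
      maprcli node move -serverids <server-id> -topology /data/rack1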
  12. On any new nodes running NFS, set up NFS for HA.
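    One common approach to NFS HA is to assign virtual IP addresses (VIPs) that can fail over between NFS nodes. The following sketch uses placeholder addresses; confirm the parameters against the maprcli virtualip add documentation for your release:
      maprcli virtualip add -virtualip 10.10.1.100 -virtualipend 10.10.1.102 -netmask 255.255.255.0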