Case 2: Secure MapR and HDFS Clusters
Describes how to configure access between a secure MapR cluster and a secure HDFS cluster.
Procedure
-
Verify that the hadoop.rpc.protection configuration parameter is set to the same value on both the MapR and HDFS clusters. This parameter takes one of three values: authentication, integrity, or privacy. The default value for a MapR cluster with security features enabled is privacy. Configure this value in $HADOOP_HOME/conf/core-site.xml. After changing this value, you must restart the Hadoop services on the node where you changed the configuration.
-
On each node in your MapR cluster that runs the JobTracker or JobClient, create the file $HADOOP_HOME/conf/mapred-site.xml with the following properties:

<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <!-- NameNode principal in the HDFS cluster -->
  <value>hdfs/_HOST@<KERBEROS_REALM></value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@<KERBEROS_REALM></value>
</property>
-
For each JobTracker node in the MapR cluster:
-
Launch any commands that need to access the HDFS cluster with the hadoop.login=hybrid Java property set, for example by setting HADOOP_OPTS="-Dhadoop.login=hybrid" in the environment.
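As a sketch of the first step, the hadoop.rpc.protection setting in $HADOOP_HOME/conf/core-site.xml might look like the fragment below. The value privacy shown here is the default for a secure MapR cluster; whichever value you choose must be identical on both clusters:

```xml
<!-- Fragment of $HADOOP_HOME/conf/core-site.xml; set the same value
     on the MapR cluster and the HDFS cluster. -->
<property>
  <name>hadoop.rpc.protection</name>
  <!-- one of: authentication | integrity | privacy -->
  <value>privacy</value>
</property>
```

On nodes where the hdfs client is available, one way to check the effective value is `hdfs getconf -confKey hadoop.rpc.protection`.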
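The last step can be sketched as the following command line. The distcp invocation, hostname, port, and paths are hypothetical, chosen only to show where the property is set; any Hadoop command that touches the remote HDFS cluster would be launched the same way:

```shell
# Hypothetical example: copy data from the secure MapR cluster to the secure
# HDFS cluster. Setting hadoop.login=hybrid enables the hybrid login mode so
# the client can authenticate to both clusters. The remote hostname, port,
# and paths below are illustrative, not values from this document.
HADOOP_OPTS="-Dhadoop.login=hybrid" hadoop distcp \
    maprfs:///user/alice/data \
    hdfs://remote-namenode.example.com:8020/user/alice/data
```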