Known Issues (Release 7.6.1)

You might encounter the following known issues after upgrading to release 7.6.1. This list is current as of the release date.

IMPORTANT The "Support notices of known issues" tool is no longer available, but you can obtain the same information by logging on to the HPE Support Center. See Support Articles in the HPE Support Center.

Where available, the workaround for an issue is also documented. HPE regularly provides maintenance releases and patches to fix issues. We recommend checking the release notes for subsequent maintenance releases to see whether any of these issues have been fixed.

Clients

CORE-960
In a Java 17 environment, installing the Data Fabric client for Windows generates the following error during client configuration:
C:\>C:\opt\mapr\server\configure.bat -N mycluster -c -secure -C node1:7222 node2:7222
Don't forget to copy conf\ssl_truststore from a server on your cluster.

java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @491666ad

at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:199)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:193)
at com.mapr.fs.ShimLoader.injectNativeLoader(ShimLoader.java:281)
at com.mapr.fs.ShimLoader.load(ShimLoader.java:225)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:63)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:803)
at org.apache.hadoop.util.ShutdownHookManager$HookEntry.<init>(ShutdownHookManager.java:206)
at org.apache.hadoop.util.ShutdownHookManager.addShutdownHook(ShutdownHookManager.java:304)
at org.apache.hadoop.util.RunJar.run(RunJar.java:301)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
java.lang.RuntimeException: Failure loading MapRClient.
at com.mapr.fs.ShimLoader.injectNativeLoader(ShimLoader.java:307)
at com.mapr.fs.ShimLoader.load(ShimLoader.java:225)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:63)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:803)
at org.apache.hadoop.util.ShutdownHookManager$HookEntry.<init>(ShutdownHookManager.java:206)
at org.apache.hadoop.util.ShutdownHookManager.addShutdownHook(ShutdownHookManager.java:304)
at org.apache.hadoop.util.RunJar.run(RunJar.java:301)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)

Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @491666ad

at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:199)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:193)
at com.mapr.fs.ShimLoader.injectNativeLoader(ShimLoader.java:281)
... 11 more
2024-01-15 13:23:36,761 INFO conf.CoreDefaultProperties: Cannot execute load() method
java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:63)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:803)
at org.apache.hadoop.util.ShutdownHookManager$HookEntry.<init>(ShutdownHookManager.java:206)
at org.apache.hadoop.util.ShutdownHookManager.addShutdownHook(ShutdownHookManager.java:304)
at org.apache.hadoop.util.RunJar.run(RunJar.java:301)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: java.lang.ExceptionInInitializerError
at com.mapr.fs.ShimLoader.load(ShimLoader.java:245)
... 10 more

Caused by: java.lang.RuntimeException: Failure loading MapRClient.

at com.mapr.fs.ShimLoader.injectNativeLoader(ShimLoader.java:307)
at com.mapr.fs.ShimLoader.load(ShimLoader.java:225)
... 10 more
The error occurs because the Java 17 module system restricts the reflective access that the client uses during configuration, causing the reflection calls to fail.
Workaround: Add the HADOOP_OPTS environment variable with a value of --add-opens java.base/java.lang=ALL-UNNAMED.
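For example, on a Windows client you might set the variable in the same command session before rerunning client configuration. This is a minimal sketch that reuses the cluster and CLDB placeholders from the example above:
C:\> set HADOOP_OPTS=--add-opens java.base/java.lang=ALL-UNNAMED
C:\> C:\opt\mapr\server\configure.bat -N mycluster -c -secure -C node1:7222 node2:7222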
MFS-15570
Using the Data Fabric client on Mac OS X fails with the following warning:
2023-01-17 10:13:51,170 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
fs/common/ossl_dynlib.cc:185: dlopen(libssl.1.1.dylib, 6): image not found
This warning is returned when you use the client to run the hadoop fs -ls command.
Workaround: Try using the Mac client for release 7.1.0. The release 7.1.0 Mac client can access a release 7.2 or later cluster without experiencing this error.
MFS-18249
When you manually refresh the JWT access token and refresh token for a FUSE-based POSIX client, the client does not list the ticket and remains in an inactive (dead) state.
Workaround: None.
MFS-18258
When you add a new cluster to a cluster group, the FUSE-based POSIX client and the loopbacknfs POSIX client take about five minutes to load or list the newly added cluster.
Workaround: None.

Control System

For additional known issues related to the Control System, see the Control System Release Notes.

DFUI-639
A non-LDAP SSO user authenticating to Keycloak cannot create a volume, stream, or table using the Control System.
Workaround: None. Non-LDAP and SSO local users are not currently supported.

Database

MAPRDB-2583

The High File Server Memory alarm is raised during replication of JSON tables when the row size is less than 1 KB and secondary indexes have been created on the JSON table being replicated.

Workaround: See Addressing High Memory File Server Alarm for JSON Table Replication.

MFS-18188
When you create a column in a table, the column name is prepended with "v." in the Column Permissions tab. When you create a new column, an additional "v." is prepended to the names of existing columns.
Workaround: None.

Data Fabric UI

Sign-in Issues

DFUI-160
If you sign in to the Data Fabric UI as an SSO user but you do not have fabric-level login permission, a sign-in page for the "Managed Control System" (MCS) is displayed. The "Managed Control System" sign-in is not usable for the consumption-based HPE Ezmeral Data Fabric.
Workaround: Use one of the following workarounds:
  • Edit the MCS URL so that it points to the Data Fabric UI, and retry logging in. For example, change this URL:
    https://<host-name>:8443/app/mcs/#/app/overview
    To this:
    https://<host-name>:8443/app/dfui
  • Try signing in as a user who has fabric-level login permission.
  • Dismiss the MCS page, clear your browser cache, and retry signing in.
DFUI-437
If you sign in to the Data Fabric UI as a non-SSO user and then sign out and try to sign in as an SSO user, a sign-in page for the "Managed Control System" (MCS) is displayed. The "Managed Control System" sign-in is not usable for the consumption-based HPE Ezmeral Data Fabric.
Workaround: Use one of the following workarounds:
  • Edit the MCS URL so that it points to the Data Fabric UI, and retry logging in. For example, change this URL:
    https://<host-name>:8443/app/mcs/#/app/overview
    To this:
    https://<host-name>:8443/app/dfui
  • Dismiss the "Managed Control System" sign-in screen, and retry signing in as a non-SSO user.
  • Dismiss the MCS page, clear your browser cache, and retry signing in.
DFUI-811
If you launch the Data Fabric UI, sign out, wait 5-10 minutes, and then attempt to sign in, a sign-in page for the "Managed Control System" (MCS) is displayed.
Workaround: See the workaround for DFUI-437.
DFUI-826
In a cloud fabric, an empty page is displayed after a session expires and you subsequently click on a fabric name. The browser can display the following URL:
https://<hostname>:8443/oath/login
Workaround: None.
DFUI-874
Sometimes when you attempt to sign in to the Data Fabric UI, the "Managed Control System" (MCS) is displayed, or the Object Store UI is displayed.
Workaround: See the workaround for DFUI-437.
DFUI-897
A user with no assigned role cannot sign in to the Data Fabric UI.
Workaround: Using your SSO provider software, assign a role to the user, and retry the sign-in operation.
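For example, a sketch assuming Keycloak is your SSO provider and that a realm role is what needs to be assigned; the kcadm.sh path, Keycloak host, realm, user, and role names are all placeholders:
# Authenticate the Keycloak admin CLI, then assign a realm role to the user (illustrative values)
/opt/keycloak/bin/kcadm.sh config credentials --server https://<keycloak-host>:8443 --realm master --user admin
/opt/keycloak/bin/kcadm.sh add-roles -r <realm> --uusername <user-name> --rolename <role-name>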
DFUI-902
Incorrect resource data is displayed when an LDAP user signs in to the Data Fabric UI without any SSO roles.
Workaround: See the workaround for DFUI-897.
DFUI-1123
Attempting to sign in to the Data Fabric UI as a group results in a login error message in the browser. For example:
https://<hostname>:8443/login?error
Workaround: None.
DFUI-1135
The Data Fabric UI does not allow an SSO user to log in after an unsuccessful login attempt.
Workaround: None.

Mirroring Issues

DFUI-1227
If you create a mirror volume with a security policy, an error is generated when you try to remove the security policy.
Workaround: None.
DFUI-1229
Data ACEs (access control expressions) on a mirror volume cannot be edited.
Workaround: None.

Display Issues

DFUI-1186
After you complete the SSO setup for a new fabric, fabric resources such as volumes and mirrors are not immediately displayed in the Data Fabric UI.
Workaround: Wait at least 20 minutes for the Data Fabric UI to display the fabric details.
DFUI-1221
If a fabric includes a large number of resources, loading the resources to display in the Resources card on the home page can take a long time.
Workaround: None.
DFUI-2127
When you set the access control of a table to Public, the Column families and Replication tabs do not display the create or edit options for your tables and table replicas.
Workaround: None.
DFUI-1793
When you set the access control of a column or column family to Public, the Name field is enabled.
Workaround: None.
DFUI-2102
When you create a table replica on a primary cluster with the source table on a secondary cluster, the replication operation times out. However, the table replica is successfully created on the primary cluster. The table replica appears in the Replication tab, but does not appear in the Data Fabric UI Graph or Table view for the primary cluster.

This behavior is the same for both a source table on the primary cluster and the replica on the secondary cluster.

Workaround: None.
DFUI-2175
The Client library side drawer displays the same export environment variable for both the access token and refresh token.
The expected values are as follows:
export MAPR_JWT_TOKEN_LOCATION="/root/jwt_access" 
export MAPR_REFRESH_TOKEN_LOCATION="/root/jwt_refresh"
However, the Client library side drawer displays the following:
export MAPR_JWT_TOKEN_LOCATION="/root/jwt_access"
export MAPR_JWT_TOKEN_LOCATION = "/root/jwt_refresh"
Workaround: None.
DFUI-2099
When you delete a table replica from the Data Fabric UI Home page, the table replica remains listed in the Replication tab. When you select the table on the Replication tab, a message returns stating that the requested file does not exist.
Workaround: None.
DFUI-2128

For external S3 buckets, the size of individual buckets is displayed as zero on the Resources card, even when the individual buckets contain one or more objects.

Workaround: None.
DFUI-2136

The Overview tab for an external S3 bucket does not display any bucket-related information.

Workaround: None.
DFUI-2095
A security policy that is created on the global policy master immediately before you create a volume on a secondary fabric is not available to assign to the volume during volume creation.

Workaround: Wait for about 15 minutes and then create the volume to which you wish to assign the new security policy.

DFUI-2161

When a node has multiple IP addresses (for example, both IPv4 and IPv6 addresses), the IP addresses are displayed without a comma, space, or other separator between them in the node access point modal.

Workaround: None. This issue should not affect the fabric functionality.

External S3

DFUI-2157
Editing buckets on external S3 servers is not supported.
Workaround: None.

Installation

See also Installer Known Issues.

DFUI-565, EZINDFAAS-169
Installation or fabric creation can fail if a proxy is used for internet traffic with the HPE Ezmeral Data Fabric.
Workaround: Export the following proxy settings, and retry the operation:
# cat /etc/environment
export http_proxy=http://<proxy_server_hostname_or_IP>:<proxy_port>
export https_proxy=http://<proxy_server_hostname_or_IP>:<proxy_port>
export HTTP_PROXY=http://<proxy_server_hostname_or_IP>:<proxy_port>
export HTTPS_PROXY=http://<proxy_server_hostname_or_IP>:<proxy_port>
DFUI-1224
Under Administration, in the Fabrics card, the Data Fabric UI can show the progress of installation as 100% and the status as Inactive before certain post-installation activities are complete.
Workaround: None. This issue should not affect the fabric functionality.

NFS

MFS-16516
When more than one external NFS server in a cluster group is mounted on the /mapr path, and at least one external NFS server is inactive, the ls /mapr command hangs at the command prompt.
Workaround: Remove the inactive external NFS server referral using the maprcli clustergroup remove command, and re-run the ls /mapr command.
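For example, a minimal sketch; the cluster-entry name is a placeholder and the -clusterentry parameter name is an assumption based on the clustergroup command family, so verify the exact syntax with maprcli clustergroup -help before running it:
maprcli clustergroup remove -clusterentry <inactive-remote-cluster-name>   # parameter name is assumed
ls /mapr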

NFSv4

MFS-18264
Attempts to mount the NFSv4 server fail and return the following error:
Mount.nfs4: Stale file handle
Workaround:
  1. Update the EXPORT section of the /opt/mapr/conf/nfs4server.conf file as follows:
    EXPORT
    {
      # Export Id (mandatory, each EXPORT must have a unique Export_Id)
      Export_Id = 30;
    
      # Exported path (mandatory)
      Path = /mapr/clustername;   # Use /mapr/<cluster-name> here instead of /mapr
    
      # Pseudo Path (required for NFS v4)
      Pseudo = /mapr;
    
      Squash = No_Root_Squash;
    
      # Required for access (default is None)
      # Could use CLIENT blocks instead
      Access_Type = RW;
    
      # Security type (krb5,krb5i,krb5p)
      SecType = sys;
    
      # Exporting FSAL
      FSAL {
        Name = MAPR;
      }
    
      #SuperUser_Uid = 0;
    }
    For more information about the /opt/mapr/conf/nfs4server.conf file, see Configuring the NFSv4 Server.
  2. Restart the NFSv4 server:
    maprcli node services -nodes <node names> -nfs4 restart
    For more information about starting or restarting NFSv4, see Starting, Stopping, and Restarting HPE Ezmeral Data Fabric NFSv4.

Object Store

MFS-17233
On cloud (AWS, Azure, or GCP) fabrics, if an instance is rebooted, the public IP addresses can change. If this happens, the MOSS certificates must be regenerated to include the new IP addresses, and the changes must be propagated to all fabric nodes.
Workaround: To regenerate the MOSS certificates:
  1. Identify the new external IP address for each cloud instance.
  2. On each cloud instance:
    1. Log on as a sudo user.
    2. Update the certificate using the following manageSSLKeys.sh command:
      /opt/mapr/server/manageSSLKeys.sh createusercert -u moss -ug mapr:mapr -k <ssl_keystore_password> -p <ssl_truststore_password> -ips "<new external ip of the instance>" -a moss -w
    3. Restart the MOSS service:
      maprcli node services -nodes $(hostname -f) -name s3server -action restart -json
    NOTE You can obtain the ssl_keystore_password and ssl_truststore_password from the node where the configure.sh -secure -genkeys command was issued. In the /opt/mapr/conf/store-passwords.txt file, the passwords are listed under keys as ssl.server.keystore.keypassword and ssl.server.truststore.password.
    Use the following commands to ensure correct file ownership:
    chown mapr:mapr /opt/mapr/conf/ssl_usertruststore.p12
    chmod 0444 /opt/mapr/conf/ssl_usertruststore.p12
    chown mapr:mapr /opt/mapr/conf/ssl_userkeystore.p12
    chmod 0400 /opt/mapr/conf/ssl_userkeystore.p12
DFUI-519
An SSO user is unable to create buckets on the Data Fabric UI and the Object Store. This applies to an SSO user with any role, such as infrastructure administrator, fabric manager, or developer.
Workaround: Create an IAM policy with all permissions in the user account. This must be done by using the MinIO client or the Object Store UI. Assign the IAM policy to the SSO user. Then log in to the Data Fabric UI to create or view buckets.
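For example, a minimal sketch using the MinIO client (mc). The alias, user name, and policy file are placeholders, and the exact subcommands depend on your mc version (older releases use policy add and policy set instead of policy create and policy attach):
# fullaccess.json - sample policy that allows all S3 actions (illustration only)
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["s3:*"], "Resource": ["arn:aws:s3:::*"] }
  ]
}
# Create the policy and attach it to the SSO user
mc admin policy create <alias> fullaccess fullaccess.json
mc admin policy attach <alias> fullaccess --user <sso-user-name>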
DFUI-577
Downloading a large file (1 GB or larger) can fail with the following error:
Unable to download file "<filename>": Request failed with status code 500
Workaround: None.
DFUI-623
A non-LDAP SSO user authenticating to Keycloak is unable to log on to the Object Store UI.
Workaround: Log on using an SSO user.
MFS-16362
When one or more tags have been assigned to an object, it is not possible to remove all tags while editing the object via the Object Store UI.
Workaround: Use the mc tag remove command of the MinIO client (mc) to remove the tags from the command line.
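For example, assuming a configured mc alias named myfabric and a sample object path (both placeholders):
mc tag remove myfabric/mybucket/myobject.txt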

Online Help

DFUI-459
If a proxy is used for internet traffic with the HPE Ezmeral Data Fabric, online help screens can time out or fail to fetch help content.
Workaround: Add the following proxy servers to the /opt/mapr/apiserver/conf/properties.cfg file:
  • http.proxy=<proxyServer>:<proxyPort>
  • https.proxy=<proxyServer>:<proxyPort>

Security Policies

DFUI-2106
A newly created policy cannot be assigned immediately to a fabric resource such as a volume on a secondary fabric in the global namespace.
Workaround: Wait 5-10 minutes before assigning the policy to a fabric resource on a secondary fabric.
MFS-18154
A security policy created on a cloud-based primary fabric (such as AWS) is not replicated on to a secondary fabric created on another cloud provider (such as GCP).
Workaround: None.

Topics

DFUI-637
A non-LDAP SSO user authenticating to Keycloak cannot create a topic using the Data Fabric UI.
Workaround: None.
DFUI-639
A non-LDAP SSO user authenticating to Keycloak cannot create a volume or stream using the Data Fabric UI.
Workaround: None. Non-LDAP and SSO local users are not currently supported.

Upgrade

EZINDFAAS-581
Upgrades of cloud fabrics from release 7.4.0 to 7.5.0 can fail because the keypair.pem file has the wrong permissions. This issue affects upgrades on AWS, Azure, and GCP, but does not affect upgrades of on-premises fabrics.
Workaround: See the steps in Upgrading a Data Fabric.
MFS-18222
Suppose you use the seed node deployment steps to create a fabric (f1), and then you add a fabric (f2). If you then upgrade f1 successfully as fabric manager, and then you sign in to f1 and initiate an upgrade of f2, the upgrade of f2 might fail because of a state mismatch.
Workaround: In the /deployments folder of f2, find the upgrade.json file, and copy the file to the /deployments folder of f1. Then update the state on f1 using the following command:
maprcli installer upgradeinfo add -data <f2-state-filename> -clustername <f2-clustername> -json
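For example, a sketch assuming fabrics named f1 and f2 and hypothetical host and file names; adjust the paths for your environment:
# On an f2 node, copy the upgrade state file to an f1 node (host and file names are illustrative)
scp /deployments/upgrade.json <f1-node>:/deployments/f2-upgrade.json
# On f1, register the copied state
maprcli installer upgradeinfo add -data /deployments/f2-upgrade.json -clustername f2 -json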
OTSDB-147
After upgrading OpenTSDB from version 2.4.0 to version 2.4.1, the Crontab on each OpenTSDB node is not updated and continues to point to the previous OpenTSDB version.
Workaround: To fix the Crontab, run the following commands on each OpenTSDB node, replacing $MAPR_USER with the name of the cluster admin (typically mapr):
  • RHEL
    export CRONTAB="/var/spool/cron/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
  • SLES
    export CRONTAB="/var/spool/cron/tabs/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
  • Ubuntu
    export CRONTAB="/var/spool/cron/crontabs/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
COMSECURE-615
Upgrading directly from release 6.1.x to release 7.x.x can fail because the upgrade process reads password information from the default Hadoop ssl-server.xml and ssl-client.xml files rather than the original .xml files. Note that upgrades from release 6.2.0 to 7.x.x are not affected by this issue.
The issue does not occur, and the upgrade succeeds, if either of the following conditions is true:
  • The existing password is mapr123 (the default value) when the EEP upgrade is initiated.
  • You upgrade the cluster first to release 6.2.0 and then subsequently to release 7.x.x.
Understanding the Upgrade Process and Workaround: The workaround in this section modifies the release 6.1.x-to-7.x.x upgrade so that it works like the 6.2.0-to-7.x.x upgrade.
Upgrading to core 7.x.x requires installing the mapr-hadoop-util package. Before the upgrade, Hadoop files are stored in a subdirectory such as hadoop-2.7.0. Installation of the mapr-hadoop-util package:
  • Creates a subdirectory to preserve the original .xml files. This subdirectory has the same name as the original Hadoop directory and a timestamp suffix (for example, hadoop-2.7.0.20210324131839.GA).
  • Creates a subdirectory for the new Hadoop version (hadoop-2.7.6).
  • Deletes the original hadoop-2.7.0 directory.
During the upgrade, a special file called /opt/mapr/hadoop/prior_hadoop_dir needs to be created to store the location of the prior Hadoop directory. The configure.sh script uses this location to copy the ssl-server.xml and ssl-client.xml files to the new hadoop-2.7.6 subdirectory.
In a release 6.1.x-to-7.x.x upgrade, the prior_hadoop_dir file does not get created, and configure.sh uses the default ssl-server.xml and ssl-client.xml files provided with Hadoop 2.7.6. In this scenario, any customization in the original .xml files is not applied.
The following workaround restores the missing prior_hadoop_dir file. With the file restored, configure.sh -R consumes the prior_hadoop_dir file and copies the original ssl-server.xml and ssl-client.xml files into the hadoop-2.7.6 directory, replacing the files that contain the default mapr123 password.
Workaround: After upgrading the ecosystem packages, but before running configure.sh -R:
  1. Create a file named prior_hadoop_dir that contains the Hadoop directory path. For example:
    # cat /opt/mapr/hadoop/prior_hadoop_dir
    /opt/mapr/hadoop/hadoop-2.7.0.20210324131839.GA
    If multiple directories are present, specify the directory with the most recent timestamp. (A one-line example of creating this file appears after this procedure.)
  2. Run the configure.sh -R command as instructed to complete the EEP upgrade.
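For step 1, a minimal way to create the file, assuming the timestamped directory name shown in the example above:
echo "/opt/mapr/hadoop/hadoop-2.7.0.20210324131839.GA" > /opt/mapr/hadoop/prior_hadoop_dir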

Volumes

DFUI-638
A non-LDAP SSO user authenticating to Keycloak cannot create a volume using the Data Fabric UI.

Workaround: Create a volume via the Data Fabric minIO client.