When I try to install Hive 2.0 I get multiple SLF4J bindings and "Hive metastore database is not initialized"

English is not my native language; please excuse typing errors. I tried to install Hive with Hadoop in a Linux environment following this tutorial. Hadoop is installed correctly, but when I try to run Hive I get the following output in my shell:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/phd2014/hive/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/phd2014/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/phd2014/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/home/phd2014/hive/lib/hive-common-2.0.0.jar!/hive-log4j2.properties
Java HotSpot(TM) Client VM warning: You have loaded library /home/phd2014/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Exception in thread "main" java.lang.RuntimeException: Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql)

In my ~/.bashrc file I put the following:

export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_PREFIX=/home/phd2014/hadoop
export HADOOP_HOME=/home/phd2014/hadoop
export HADOOP_MAPRED_HOME=/home/phd2014/hadoop
export HADOOP_COMMON_HOME=/home/phd2014/hadoop
export HADOOP_HDFS_HOME=/home/phd2014/hadoop
export YARN_HOME=/home/phd2014/hadoop
export HADOOP_CONF_DIR=/home/phd2014/hadoop/etc/hadoop
export HIVE_HOME=/home/phd2014/hive

I also export the variables HADOOP_HOME and HIVE_HOME inside the .profile file.

This question here didn't work for me. I also ran the command to create the schema, but it failed: schematool -dbType derby -initSchema
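For context, schematool's -initSchema often fails because a half-initialized Derby metastore is left over from an earlier Hive run. A hedged sketch of the usual recovery steps, assuming an embedded Derby metastore (the metastore_db location is an assumption; Derby creates it in whatever directory you launched hive or schematool from):

```shell
# Remove the stale Derby metastore directory created by a failed first run
# (assumed location: the directory where hive/schematool was launched)
rm -rf ~/metastore_db

# Re-initialize the schema for the embedded Derby database
$HIVE_HOME/bin/schematool -dbType derby -initSchema

# Verify that the schema was created
$HIVE_HOME/bin/schematool -dbType derby -info
```

If schematool reports the schema version afterwards, the metastore is initialized and hive should start.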

One more thing that I think could help is to modify the pom.xml file to avoid the multiple SLF4J bindings, but I can't find it. I tried this, but I didn't find it either.

Thanks in advance


SLF4J is a logging API. It dynamically binds to an implementation, but it expects only one to be present. In your case it appears that you have three jars providing an SLF4J implementation: hive-jdbc-2.0.0-standalone.jar, log4j-slf4j-impl-2.4.1.jar, and slf4j-log4j12-1.7.10.jar.

hive-jdbc-2.0.0-standalone.jar appears to be a "shaded" jar: it includes the classes from multiple third-party jars, including the contents of log4j-slf4j-impl. I am guessing this is the binding SLF4J actually selected, since it was the first one found on the classpath.

The issue is that you are somehow including jars that the standalone jar has already incorporated. Normally, with a standalone jar, everything you need is already inside that jar.
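One way to act on this, assuming you can modify the Hive lib directory, is to first confirm which jars carry a binding and then sideline the separate copy that the standalone jar already shades (the jar names below are taken from the log output above; the `disabled` directory is just a convention, not a Hive feature):

```shell
# List every jar on the relevant classpaths that carries an SLF4J binding class
for jar in $HIVE_HOME/lib/*.jar $HADOOP_HOME/share/hadoop/common/lib/*.jar; do
  if unzip -l "$jar" 2>/dev/null | grep -q 'org/slf4j/impl/StaticLoggerBinder.class'; then
    echo "binding found in: $jar"
  fi
done

# The standalone jar already shades log4j-slf4j-impl, so the separate
# copy in Hive's lib directory is redundant; move it out of the way
# rather than deleting it, in case you need to restore it
mkdir -p $HIVE_HOME/lib/disabled
mv $HIVE_HOME/lib/log4j-slf4j-impl-2.4.1.jar $HIVE_HOME/lib/disabled/
```

The slf4j-log4j12 jar in Hadoop's lib is harder to remove safely, since other Hadoop tools may rely on it, but with only one binding left on Hive's side the warning should be reduced.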

When I try to install Hive 2.0.0 I get the error I posted, but if I install version 1.2.1 instead it works fine, just by setting the environment variables and creating the /user/hive/warehouse directory in HDFS. It must be a bug in the new version. My recommendation is to install version 1.2.1 instead of 2.0.0.
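For completeness, the warehouse directory mentioned above is normally created with the standard HDFS shell commands (a sketch following the usual Hive getting-started steps; adjust paths and permissions to your setup):

```shell
# Create the Hive warehouse and scratch directories in HDFS
$HADOOP_HOME/bin/hdfs dfs -mkdir -p /user/hive/warehouse
$HADOOP_HOME/bin/hdfs dfs -mkdir -p /tmp

# Hive expects group write permission on both directories
$HADOOP_HOME/bin/hdfs dfs -chmod g+w /user/hive/warehouse
$HADOOP_HOME/bin/hdfs dfs -chmod g+w /tmp
```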
