Hive ODBC Connection ETIMEDOUT

I was trying to connect Tableau to Cloudera Hadoop on Windows, and every time I got an ETIMEDOUT error, with no other error message to be found.

To solve this, make sure HiveServer2 is selected, and for Authentication Mechanism choose Username, then type 'hive' in the Username field.
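For reference, the relevant DSN settings look roughly like this. This is a sketch in odbc.ini form; the key names (HiveServerType, AuthMech, UID) are assumed from Cloudera's Hive ODBC driver, so double-check them against your driver's documentation:

```ini
; Hypothetical DSN for the Cloudera Hive ODBC driver.
; HiveServerType=2 selects HiveServer2; AuthMech=2 selects
; "User Name" authentication in this driver.
[Cloudera Hive]
Host=your-hiveserver2-host
Port=10000
HiveServerType=2
AuthMech=2
UID=hive
```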

Hive ODBC Connection ETIMEDOUT by @sskaje: https://sskaje.me/2015/03/hive-odbc-etimedout/


YARN NodeManager Failed to Start

After I upgraded my CDH, one of my NodeManagers could not be brought up.
NullPointerExceptions were found in the error log:

I tried deleting all ZooKeeper-related configs (which you can find in Manually Upgrade CDH 5.2 in CM 5, specifically the YARN part); that didn't work.
I deleted the NodeManager instance and reinstalled it; same result.

Many 'Recovering application' and 'Recovering localized resource' messages were found in that log file:

Deleted those, still failed.

And, in the start-up messages,

'/tmp/hadoop-yarn/' was read on every start.
The solution is: stop the instance, delete '/tmp/hadoop-yarn/' from the local filesystem, then start the instance again.
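The fix as shell steps; a sketch — the stop/start happen in Cloudera Manager, and /tmp/hadoop-yarn/ is where this cluster kept NodeManager recovery state, so confirm the path on your nodes:

```shell
# 1. Stop the NodeManager instance in Cloudera Manager.

# 2. On the affected node, remove the stale recovery state.
rm -rf /tmp/hadoop-yarn/

# 3. Start the NodeManager instance again from Cloudera Manager.
```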

YARN NodeManager Failed to Start by @sskaje: https://sskaje.me/2014/11/yarn-nodemanager-failed-start/


Missing Hive Databases in Cloudera Hue

I created some new databases using the Hive CLI, but they were not listed in Hue. The browser I'm using is Google Chrome.

I logged out and logged back in: failed.

I deleted all cached browser files: failed.

I checked cookies; nothing found there.

No API queries were found in the Network tab.

Then I noticed 'Local Storage' in Chrome's Developer Tools: many configurations are cached there.
Deleting those entries and re-logging in fixed it.

Missing Hive Databases in Cloudera Hue by @sskaje: https://sskaje.me/2014/11/missing-hive-databases-cloudera-hue/

Manually Upgrade CDH 5.2 in CM 5

I was interrupted again when upgrading CDH.

HDFS

This time, the NameNode was not started; I had to bring it up and resume the upgrade.
I didn't save any log of the NN's error; I stopped all HDFS components, ran 'Upgrade HDFS Metadata', then started HDFS.

YARN

Next, YARN.
I started YARN, and then all other services. Hive went down, then YARN.
I checked CM’s monitor:

I found both instances of ResourceManager were 'Standby'.

Here is what I found in /var/log/hadoop-yarn/hadoop-cmf-yarn-RESOURCEMANAGER-hadoop4.xxx.com.log.out:

Google helps a lot: http://community.cloudera.com/t5/Cloudera-Manager-Installation/CDH-5-YARN-Resource-Manager-HA-deadlock-in-Kerberos-cluster/td-p/14396

Using /opt/cloudera/parcels/CDH/lib/zookeeper/bin/zkCli.sh, run the deletes

one by one, because zkCli.sh does not have wildcard support.
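Since zkCli.sh takes one path per delete, generating the commands first helps. A sketch, assuming the ResourceManager state sits under the default /rmstore znode; the application IDs here are placeholders for the ones actually in your state store:

```shell
# Generate one delete command per application znode; zkCli.sh has no
# wildcard support, so each znode must be removed individually.
# The znode root and application IDs below are examples.
for app in application_1414000000000_0001 application_1414000000000_0002; do
  echo "rmr /rmstore/ZKRMStateRoot/RMAppRoot/$app"
done > zk_deletes.txt

cat zk_deletes.txt
# Paste the generated lines into
# /opt/cloudera/parcels/CDH/lib/zookeeper/bin/zkCli.sh one by one.
```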

Hive

I had guessed that Hive didn't work because of YARN, but I was wrong.
I checked all the Hive-related commands executed by CM:

So I stopped Hive and ran Update Hive Metastore NameNodes and Upgrade Hive Metastore Database Schema; neither worked, both failing with the error message above.
I got more detail from the logs:

The schemaTool output reminded me that I had manually upgraded the Hive metastore in February: Hive MetaStore Schema Upgrade Failed When Upgrading CDH5.
But this time, dbType should be postgres instead of derby. (Derby is not supported by Impala, which is why I had switched to the PostgreSQL embedded in Cloudera Manager.)

I can't find the terminal output, but when I ran:

I found output similar (in the first few lines) to the blog post above, saying schemaTool was trying to connect to Derby.

I re-deployed Hive's client configuration and checked /etc/hive/conf/hive-site.xml, comparing it with /var/run/cloudera-scm-agent/process/4525-hive-HIVEMETASTORE/hive-site.xml:
the XML under /etc points at the Hive metastore's Thrift server, while the one under CM's process folder specifies the exact database connection. schemaTool uses the /etc one.
So I replaced the /etc copy with CM's, and then re-ran upgradeSchema:
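For reference, the upgradeSchema invocation looks like this. A sketch, assuming a parcel-based install; the guard just makes the snippet fail gracefully on machines without Hive:

```shell
# schematool ships with Hive; -dbType must match the real metastore
# backend (postgres here, since the metastore moved off Derby).
SCHEMATOOL=/opt/cloudera/parcels/CDH/lib/hive/bin/schematool

if [ -x "$SCHEMATOOL" ]; then
  "$SCHEMATOOL" -dbType postgres -upgradeSchema
else
  echo "schematool not found at $SCHEMATOOL"
fi
```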

Same error as I saw in CM's log: plpgsql does not exist. Fix this by:

You can find the password in the XML I mentioned above, or in a file like
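The plpgsql fix referenced above is usually just creating the language in the metastore database. A sketch, assuming CM's embedded PostgreSQL on its default port and a database named metastore; verify both against your hive-site.xml:

```shell
# "language plpgsql does not exist" means the metastore DB lacks the
# procedural language the upgrade scripts use; create it once.
cat > create_plpgsql.sql <<'SQL'
CREATE LANGUAGE plpgsql;
SQL

# Run it as the hive user against the metastore database, e.g.:
# psql -h localhost -p 7432 -U hive -d metastore -f create_plpgsql.sql
```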

If you get an error saying OWNER_NAME or OWNER_TYPE already exists in table DBS, open /opt/cloudera/parcels/CDH/lib/hive/scripts/metastore/upgrade/postgres/016-HIVE-6386.postgres.sql and comment out or delete the two ALTER TABLE lines.

Manually Upgrade CDH 5.2 in CM 5 by @sskaje: https://sskaje.me/2014/10/manually-upgrade-cdh-5-2-cm-5/


Delete Unexpected 127.0.0.1 from Cloudera Manager

I had Cloudera Manager 5.0.0 installed on my small cluster. I tried to delete some nodes and then found Cloudera Manager not working: exceptions thrown on the landing page, a 500 on the hosts page, and null pointer exceptions almost everywhere.
The next time I restarted and logged into CM, the agent-upgrade wizard started again, and 127.0.0.1 appeared in the host list, not delete-able.
So I tried to delete the data from PostgreSQL.

1 read database config

The password can be found in the post Cloudera Manager Drop Database/User on Embedded Postgresql
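The database settings can also be read straight from CM's own config file. A sketch, assuming a standard package install; the com.cloudera.cmf.db.* property names are assumptions to verify against your file:

```shell
# Cloudera Manager stores its database settings here on a package install.
DB_PROPS=/etc/cloudera-scm-server/db.properties

if [ -f "$DB_PROPS" ]; then
  # Show the connection settings, including the password.
  grep 'com.cloudera.cmf.db' "$DB_PROPS"
else
  echo "db.properties not found at $DB_PROPS"
fi
```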

2 check table

3 find data

The row host_id=13 has an empty cluster id and neither an IP address nor a name filled in.

4 delete data

Work out which rows reference the host through foreign keys and delete those first.

Then delete like
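A sketch of the final delete, assuming the embedded PostgreSQL on its default port 7432 and the host_id found above; table and column names should be verified against your schema, and the foreign-key deletes must run first:

```shell
# Write the delete for the orphaned host row (host_id=13 from step 3).
# Referencing rows in other tables must be removed first, or the FK
# constraints will reject this delete.
cat > delete_host.sql <<'SQL'
DELETE FROM hosts WHERE host_id = 13;
SQL

# Then run it against CM's database, e.g.:
# psql -h localhost -p 7432 -U scm -d scm -f delete_host.sql
```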

5 restart

Delete Unexpected 127.0.0.1 from Cloudera Manager by @sskaje: https://sskaje.me/2014/04/delete-unexpected-127-0-0-1-cloudera-manager/