FAQ
I'm getting the following while trying to build Hue:

(6211) *** Controller starting at Thu Aug 8 11:29:50 2013
Should start 1 new children
Controller.spawn_children(number=1)
$HADOOP_HOME=
$HADOOP_BIN=/usr/local/hadoop/bin/hadoop
$HIVE_CONF_DIR=~/hive-0.10.0/conf
$HIVE_HOME=~/hive-0.10.0
find: `~/hive-0.10.0/lib': No such file or directory
$HADOOP_CLASSPATH=:
$HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
$HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
$HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
CWD=/usr/local/hue/desktop/conf
Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
Exception in thread "main" java.io.IOException: Permission denied
     at java.io.UnixFileSystem.createFileExclusively(Native Method)
     at java.io.File.createTempFile(File.java:1879)
     at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

I've changed the configuration file so it doesn't use the hue user but the user
I'm logged in as, which has read and write permissions in the Hadoop DFS,
Hadoop, Hive, etc. I'm not sure why it's doing this.

  • Abraham Elmahrek at Aug 9, 2013 at 12:00 am
    It looks like Beeswaxd cannot create a temporary directory in '/tmp/'. What
    do your permissions on that directory look like? What is the output of
    "stat /tmp/"?

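Abe's stat check can be taken one step further by testing whether the current user can actually create a file in /tmp the same way Java's createTempFile() would; a minimal shell sketch (the file name prefix is arbitrary):

```shell
# Show /tmp's mode and owner, then prove the current user can create
# a temporary file there. 1777 (sticky bit set) is the expected mode.
stat -c 'mode=%a owner=%U group=%G' /tmp
f=$(mktemp /tmp/beeswax-check.XXXXXX) && echo "created $f" && rm -f "$f"
```

If mktemp fails here, the Java-side "Permission denied" is a plain filesystem problem rather than anything Hue-specific.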
  • Vinamrata Singal at Aug 9, 2013 at 12:03 am
    File: `/tmp'
       Size: 4096 Blocks: 8 IO Block: 4096 directory
    Device: fd00h/64768d Inode: 131082 Links: 11
    Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
      Birth: -



    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford Class of 2016 |
    (c) 650.215.3775 | (w) https://stanford.edu/~vsingal5
  • Abraham Elmahrek at Aug 9, 2013 at 12:11 am
    Could you provide your core-site.xml? This should be creating a file in
    'hadoop.tmp.dir'.

    -Abe

  • Vinamrata Singal at Aug 9, 2013 at 12:15 am
    <configuration>
    <property>
             <name>hadoop.tmp.dir </name>
             <value>/app/hadoop/tmp</value>
             <description> A base for other temporary directories </description>
    </property>

    <property>
             <name>fs.default.name</name>
             <value>hdfs://master:54310</value>
    </property>
    </configuration>

  • Abraham Elmahrek at Aug 9, 2013 at 12:19 am
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not entirely sure whether
    that directory is chown'd or whether the first user to create it takes
    ownership. Either way, you probably don't have access to that directory as
    the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml
    for an example 'hadoop.tmp.dir' that varies by username.

    -Abe

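For reference, the per-user default Abe is pointing at looks like this in core-default.xml; the ${user.name} substitution gives each user, including hue, its own temp directory:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>
```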
  • Vinamrata Singal at Aug 9, 2013 at 12:21 am
    I actually think I fixed the issue by changing the user in the
    supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:270)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

  • Abraham Elmahrek at Aug 9, 2013 at 12:25 am
    That error is likely caused by the earlier one: find: `~/hive-0.10.0/lib':
    No such file or directory. Are you using a custom installation of Hive,
    perhaps the Apache distribution rather than the CDH distribution?

    Note: It's probably not a good idea to change the user from 'hue' to
    something else.

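One plausible reason the lib directory isn't found is the literal '~' visible in the logged values of $HIVE_HOME and $HIVE_CONF_DIR: the shell only performs tilde expansion on an unquoted word, never inside a variable's stored value, so find receives the three-character prefix verbatim. A quick illustration (the hive path is the one from the log):

```shell
# '~' stored inside a variable's value is NOT expanded by the shell,
# so tools like find receive the literal path and fail to stat it.
HIVE_HOME='~/hive-0.10.0'          # literal tilde, as seen in the log
echo "$HIVE_HOME/lib"              # prints: ~/hive-0.10.0/lib

HIVE_HOME="$HOME/hive-0.10.0"      # an absolute path avoids the problem
echo "$HIVE_HOME/lib"
```

Setting HIVE_HOME with an absolute path (or $HOME) before starting Hue would rule this out.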
  • Vinamrata Singal at Aug 9, 2013 at 12:44 am
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.

  • Abraham Elmahrek at Aug 9, 2013 at 12:53 am
    Vinamrata,

    I think it will work, with the exception of the Job Browser application.
    It depends on the Thrift service code for the JobTracker packaged in
    hue-plugins.jar, which only runs on CDH. There have been users who've
    ported that code over to the upstream JobTracker in the past, though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    otherwise passed into Hue's environment.

    -Abe

    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal wrote:

    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.

    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek wrote:

    That error is likely because of the error: find: `~/hive-0.10.0/lib': No
    such file or directory. Are you using a custom installation of Hive?
    Perhaps, the apache distribution rather than the CDH distribution?

    Note: It's probably not a good idea to change the user from 'hue' to
    something else.

    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal wrote:

    I actually think I fixed the issue by changing the user in the
    supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    xception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek wrote:

    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not entirely sure if
    that directory is chown'd or if the first user to create it takes
    ownership. Either way, you probably don't have access to that directory
    using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xmlfor an example 'hadoop.tmp.dir' that varies by username.

    -Abe


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <vsingal5@stanford.edu
    wrote:
    <configuration>
    <property>
    <name>hadoop.tmp.dir </name>
    <value>/app/hadoop/tmp</value>
    <description> A base for other temporary directories
    </description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
    </property>
    </configuration>

    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek wrote:

    Could you provide your core-site.xml? This should be creating a file
    in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    File: `/tmp'
    Size: 4096 Blocks: 8 IO Block: 4096 directory
    Device: fd00h/64768d Inode: 131082 Links: 11
    Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/
    root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
    Birth: -



    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford Class of 2016 |
    (c) 650.215.3775 | (w) https://stanford.edu/~vsingal5

  • Vinamrata Singal at Aug 9, 2013 at 6:17 pm
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username as you
    suggested), but now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0 seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main" org.apache.thrift.transport.TTransportException:
    Could not create ServerSocket on address 0.0.0.0/0.0.0.0:8002.
      at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
      at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
      at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
      at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
      at com.cloudera.beeswax.Server.main(Server.java:214)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
      at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
      at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
      at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
      at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
      at com.cloudera.beeswax.Server$1.run(Server.java:200)
      at java.lang.Thread.run(Thread.java:724)


    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek wrote:

    Vinamrata,

    I think it will work with the exception of the job browser application. It
    depends on the thrift service code for the job tracker packaged in
    hue-plugins.jar, which only runs on CDH. There have been users who've
    ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    otherwise made available in Hue's environment.
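    As a sketch, the environment can be primed before launching Hue. The
    install paths below are illustrative assumptions, not the paths from your
    machine; adjust them to wherever your Apache Hive tarball actually lives:

    ```shell
    # Hypothetical paths -- point these at your real Apache Hive install.
    export HIVE_HOME=/usr/local/hive-0.10.0
    export HIVE_CONF_DIR=$HIVE_HOME/conf
    # beeswax_server.sh builds its classpath from HIVE_LIB, so this must
    # resolve to a real directory containing the Hive jars (fixes the
    # HiveConf ClassNotFoundException above).
    export HIVE_LIB=$HIVE_HOME/lib

    # Then start Hue under this environment, e.g.:
    # build/env/bin/supervisor
    ```

    The key point is that HIVE_LIB must exist and contain the Hive jars; the
    earlier "find: `~/hive-0.10.0/lib': No such file or directory" shows the
    tilde was not being expanded, so prefer absolute paths here.
    
    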

    -Abe

    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal wrote:

    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.

    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek wrote:

    That error is likely because of the error: find: `~/hive-0.10.0/lib':
    No such file or directory. Are you using a custom installation of Hive?
    Perhaps the Apache distribution rather than the CDH distribution?

    Note: It's probably not a good idea to change the user from 'hue' to
    something else.

    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal wrote:

    I actually think I fixed the issue by changing the user in the
    supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

  • Abraham Elmahrek at Aug 9, 2013 at 6:20 pm
    It looks like port 8888 is in use. Maybe 8002 and 8003 are in use as well.
    Could you provide the output of 'netstat -tln'?
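    A quick way to scan such a listing for Hue's ports (a sketch that runs
    against sample lines borrowed from the output in this thread, rather than
    the live `netstat -tln` of your machine):

    ```shell
    # Sample `netstat -tln` lines; in practice: listing=$(netstat -tln)
    listing='tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN
    tcp 0 0 0.0.0.0:8002 0.0.0.0:* LISTEN
    tcp 0 0 0.0.0.0:8003 0.0.0.0:* LISTEN'

    # Report each Hue/Beeswax port that already has a listener bound.
    for port in 8888 8002 8003; do
      printf '%s\n' "$listing" | grep -q ":$port " && echo "port $port is in use"
    done
    ```

    With the sample listing above, all three ports are reported as in use,
    which matches the "socket already in use" retries in the log.
    
    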

    -Abe

  • Vinamrata Singal at Aug 9, 2013 at 6:23 pm
    tcp 0 0 0.0.0.0:50060 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:44141 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:50030 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:58776 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:55035 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:55039 0.0.0.0:* LISTEN

    tcp 0 0 127.0.0.1:60353 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:8002 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:8003 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:50020 0.0.0.0:* LISTEN

    tcp 0 0 127.0.0.1:54310 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:40358 0.0.0.0:* LISTEN

    tcp 0 0 127.0.0.1:54311 0.0.0.0:* LISTEN

    tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN

    tcp6 0 0 :::49932 :::* LISTEN

    tcp6 0 0 :::111 :::* LISTEN

    tcp6 0 0 :::22 :::* LISTEN

  • Abraham Elmahrek at Aug 9, 2013 at 6:27 pm
    It definitely looks like they're in use. The command "netstat -tlnp" should
    show you their PIDs and process names. If they're Hue, you can simply
    'kill' those processes using "kill <pid1> <pid2> ..." and start Hue again.
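    The kill-and-verify sequence can be sketched with a dummy process standing
    in for the stale listener (the PID here is discovered at runtime; it is not
    the 31443 from the log, and in real use it would come from the PID column
    of `netstat -tlnp`):

    ```shell
    # Stand-in for a stale Hue/Beeswax process.
    sleep 60 &
    pid=$!                          # in real use: PID from netstat -tlnp

    kill "$pid"                     # send SIGTERM
    wait "$pid" 2>/dev/null         # reap it so the PID is truly gone

    # Confirm the process no longer exists before restarting Hue.
    if kill -0 "$pid" 2>/dev/null; then echo "still running"; else echo "stopped"; fi
    ```

    Once every stale listener reports "stopped", the ports are free and Hue can
    be started again.
    
    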

    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal wrote:

    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN
    tcp6 0 0 :::49932 :::* LISTEN

    tcp6 0 0 :::111 :::* LISTEN

    tcp6 0 0 :::22 :::* LISTEN

    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek wrote:

    It looks like port 8888 is in use. Maybe 8002 and 8003 are in use as
    well. Could you provide the output of 'netstat -tln'?

    -Abe

    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal wrote:

    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username as you
    suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek wrote:

    Vinamrata,

    I think it will work with the exception of the job browser application.
    It depends on the thrift service code for the job tracker packaged in
    hue-plugins.jar, which only runs on CDH. There have been users who've
    ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    somehow grok'd into Hue's environment.

    -Abe
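
    Abe's advice above can be sketched as a small environment setup. The paths below are placeholders, not the real install locations; note that the literal `~` in the reported $HIVE_HOME is a likely culprit, since find(1) does not expand a `~` that comes in through a config file:

```shell
# Sketch: export Hive paths with absolute values before starting Hue.
# /home/youruser/hive-0.10.0 is a placeholder; point it at your real install.
export HIVE_HOME=/home/youruser/hive-0.10.0
export HIVE_CONF_DIR="$HIVE_HOME/conf"
export HIVE_LIB="$HIVE_HOME/lib"
echo "HIVE_LIB=$HIVE_LIB"
```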


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <vsingal5@stanford.edu
    wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.

    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek wrote:

    That error is likely because of the error: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you using a
    custom installation of Hive? Perhaps, the apache distribution rather than
    the CDH distribution?

    Note: It's probably not a good idea to change the user from 'hue' to
    something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user in the
    supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek wrote:

    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not entirely sure
    if that directory is chown'd or if the first user to create it takes
    ownership. Either way, you probably don't have access to that directory
    using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml for an example 'hadoop.tmp.dir' that varies by username.

    -Abe
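
    For reference, the stock core-default.xml uses /tmp/hadoop-${user.name}; a per-user override in core-site.xml might look like this (a sketch; the /app/hadoop prefix follows the core-site.xml quoted in this thread):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp/${user.name}</value>
  <description>A base for other temporary directories, one per user</description>
</property>
```

    Hadoop substitutes ${user.name} at configuration load time, so each user (including 'hue') gets its own writable directory.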


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
    <property>
    <name>hadoop.tmp.dir </name>
    <value>/app/hadoop/tmp</value>
    <description> A base for other temporary directories
    </description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
    </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <abe@cloudera.com
    wrote:
    Could you provide your core-site.xml? This should be creating a
    file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    File: `/tmp'
    Size: 4096 Blocks: 8 IO Block: 4096 directory
    Device: fd00h/64768d Inode: 131082 Links: 11
    Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
    Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary directory in
    '/tmp/'. What do your permissions on that directory look like? What is the
    output of "stat /tmp/"?

    On Thu, Aug 8, 2013 at 4:52 PM, wrote:

    I'm getting the following while trying to build Hue:






    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it doesn't use hue but
    the user that I'm logged in as which has read and write permissions in the
    hadoop dfs, hadoop, hive, etc. Not sure why it's doing this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford Class of 2016
    (c) 650.215.3775 | (w) https://stanford.edu/~vsingal5

  • Vinamrata Singal at Aug 9, 2013 at 6:45 pm
    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address Foreign Address State
         PID/Program name
    tcp 0 0 0.0.0.0:50060 0.0.0.0:* LISTEN
          30843/java
    tcp 0 0 0.0.0.0:44141 0.0.0.0:* LISTEN
          30312/java
    tcp 0 0 0.0.0.0:50030 0.0.0.0:* LISTEN
          30697/java
    tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
          -
    tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN
          30312/java
    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
          -
    tcp 0 0 0.0.0.0:58776 0.0.0.0:* LISTEN
          30442/java
    tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN
          22739/python2.7
    tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN
          30442/java
    tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN
          30442/java
    tcp 0 0 0.0.0.0:55035 0.0.0.0:* LISTEN
          30572/java
    tcp 0 0 0.0.0.0:55039 0.0.0.0:* LISTEN
          30697/java
    tcp 0 0 127.0.0.1:60353 0.0.0.0:* LISTEN
          30843/java
    tcp 0 0 0.0.0.0:8002 0.0.0.0:* LISTEN
          -
    tcp 0 0 0.0.0.0:8003 0.0.0.0:* LISTEN
          -
    tcp 0 0 0.0.0.0:50020 0.0.0.0:* LISTEN
          30442/java
    tcp 0 0 127.0.0.1:54310 0.0.0.0:* LISTEN
          30312/java
    tcp 0 0 0.0.0.0:40358 0.0.0.0:* LISTEN
          -
    tcp 0 0 127.0.0.1:54311 0.0.0.0:* LISTEN
          30697/java
    tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN
          30572/java
    tcp6 0 0 :::49932 :::* LISTEN
          -
    tcp6 0 0 :::111 :::* LISTEN
          -
    tcp6 0 0 :::22 :::* LISTEN
          -

    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek wrote:

    It definitely looks like they're in use. The command "netstat -tlnp"
    should show you their PIDs and process names. If they're Hue, you can
    simply 'kill' those processes using "kill <pid1> <pid2> ..." and start Hue
    again.
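
    The lookup-and-kill step Abe describes can be scripted; a minimal sketch (it assumes the GNU netstat from net-tools, and must run as root or as the owning user for the PID column to be populated):

```shell
# Find the PID listening on port 8888 (the Hue web server in this thread).
PID=$(netstat -tlnp 2>/dev/null | awk '$4 ~ /:8888$/ {split($NF, a, "/"); print a[1]}')
echo "PID on 8888: ${PID:-none}"
# kill "$PID"   # uncomment only after confirming it is the stale Hue process
```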

    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal wrote:

    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN
    tcp6 0 0 :::49932 :::*
    LISTEN
    tcp6 0 0 :::111 :::*
    LISTEN
    tcp6 0 0 :::22 :::*
    LISTEN

    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek wrote:

    It looks like port 8888 is in use. Maybe 8002 and 8003 are in use as
    well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <vsingal5@stanford.edu
    wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username as you
    suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0 seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


  • Vinamrata Singal at Aug 9, 2013 at 6:45 pm
    also no PIDs for 8002 and 8003

  • Abraham Elmahrek at Aug 9, 2013 at 6:51 pm
    Port 8888 is PID 22739. You should be able to execute "ps auxwww" or a
    similar command to find Beeswax and Hue. They should both be running as the
    hue user. Then just kill them both.

    -Abe
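
    A hedged one-liner for that "find then kill" step ('runcpserver' and 'BeeswaxServer' are assumed process markers; check them against your own ps output first):

```shell
# List candidate Hue/Beeswax processes without killing anything yet.
ps auxwww | grep -E 'runcpserver|BeeswaxServer' | grep -v grep || echo "no Hue processes found"
# kill <pid>   # fill in the PIDs from the second column above
```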

    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal wrote:

    also no pid's for 8002 and 8003

    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal wrote:

    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address Foreign Address State
    PID/Program name
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN 22739/python2.7
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN 30572/java
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN -
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN 30572/java
    tcp6 0 0 :::49932 :::*
    LISTEN -
    tcp6 0 0 :::111 :::*
    LISTEN -
    tcp6 0 0 :::22 :::*
    LISTEN -

    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek wrote:

    It definitely looks like they're in use. The command "netstat -tlnp"
    should show you their PIDs and process names. If they're Hue, you can
    simply 'kill' those processes using "kill <pid1> <pid2> ..." and start Hue
    again.
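    For example, the owning PID for port 8888 can be pulled out of the
    `netstat -tlnp` output like this (parsed against a sample line here; on
    the server, feed it `netstat -tlnp 2>/dev/null` instead of the echo):

    ```shell
    # Local address is field 4, PID/Program is field 7 in `netstat -tlnp` output;
    # split "22739/python2.7" on "/" to get just the PID.
    line='tcp  0  0 0.0.0.0:8888  0.0.0.0:*  LISTEN  22739/python2.7'
    pid=$(echo "$line" | awk '$4 ~ /:8888$/ {split($7, a, "/"); print a[1]}')
    echo "$pid"          # prints 22739 for the sample line
    ```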


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <vsingal5@stanford.edu>
    wrote:
    tcp        0      0 0.0.0.0:50060           0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:44141           0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:50030           0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:111             0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:50070           0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:22              0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:58776           0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:8888            0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:50010           0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:50075           0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:55035           0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:55039           0.0.0.0:*        LISTEN
    tcp        0      0 127.0.0.1:60353         0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:8002            0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:8003            0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:50020           0.0.0.0:*        LISTEN
    tcp        0      0 127.0.0.1:54310         0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:40358           0.0.0.0:*        LISTEN
    tcp        0      0 127.0.0.1:54311         0.0.0.0:*        LISTEN
    tcp        0      0 0.0.0.0:50090           0.0.0.0:*        LISTEN
    tcp6       0      0 :::49932                :::*             LISTEN
    tcp6       0      0 :::111                  :::*             LISTEN
    tcp6       0      0 :::22                   :::*             LISTEN

    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek wrote:

    It looks like port 8888 is in use. Maybe 8002 and 8003 are in use as
    well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username as you
    suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0 seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main" org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:8002.
        at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
        at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
        at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
        at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
        at com.cloudera.beeswax.Server.main(Server.java:214)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:8003.
        at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
        at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
        at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
        at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
        at com.cloudera.beeswax.Server$1.run(Server.java:200)
        at java.lang.Thread.run(Thread.java:724)


    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek wrote:

    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    somehow picked up in Hue's environment.
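    A minimal sketch of what that environment could look like, assuming a
    hypothetical install under /opt/hive-0.10.0. Note the absolute paths: a
    literal "~" stored in a variable is not expanded by find, which is exactly
    what produced the earlier "No such file or directory" error.

    ```shell
    # Hive environment Beeswax's wrapper script relies on (paths are examples).
    export HIVE_HOME=/opt/hive-0.10.0        # absolute, never "~/hive-0.10.0"
    export HIVE_CONF_DIR="$HIVE_HOME/conf"
    export HIVE_LIB="$HIVE_HOME/lib"
    echo "$HIVE_LIB"                         # /opt/hive-0.10.0/lib
    ```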

    -Abe


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.

    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek wrote:

    That error is likely caused by the earlier failure: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you using a
    custom installation of Hive? Perhaps the Apache distribution rather than
    the CDH distribution?

    Note: It's probably not a good idea to change the user from 'hue'
    to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user in the
    supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:270)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not entirely
    sure if that directory is chown'd or if the first user to create it takes
    ownership. Either way, you probably don't have access to that directory
    using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml for an example 'hadoop.tmp.dir' that varies by username.
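    For reference, the stock default in core-default.xml varies the path per
    user, so each user gets its own scratch directory:

    ```xml
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/tmp/hadoop-${user.name}</value>
      <description>A base for other temporary directories.</description>
    </property>
    ```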

    -Abe


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
    <property>
    <name>hadoop.tmp.dir </name>
    <value>/app/hadoop/tmp</value>
    <description> A base for other temporary directories
    </description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
    </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be creating
    a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
      File: `/tmp'
      Size: 4096            Blocks: 8          IO Block: 4096   directory
    Device: fd00h/64768d    Inode: 131082      Links: 11
    Access: (1777/drwxrwxrwt)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
     Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary directory
    in '/tmp/'. What do your permissions on that directory look like? What is
    the output of "stat /tmp/"?
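    For comparison, a healthy /tmp is mode 1777 — world-writable with the
    sticky bit set so users can only delete their own files. Demonstrated on a
    scratch directory rather than the real /tmp:

    ```shell
    # Recreate the expected /tmp permission bits on a throwaway directory.
    d=$(mktemp -d)
    chmod 1777 "$d"        # rwxrwxrwt: world-writable + sticky bit
    stat -c '%a' "$d"      # prints 1777 (GNU stat)
    rmdir "$d"
    ```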

    On Thu, Aug 8, 2013 at 4:52 PM, wrote:

    I'm getting the following while trying to build Hue:







    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it doesn't use hue
    but the user that I'm logged in as which has read and write permissions in
    the hadoop dfs, hadoop, hive, etc. Not sure why it's doing this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford Class of
    2016 | (c) 650.215.3775 | (w) https://stanford.edu/~vsingal5

  • Vinamrata Singal at Aug 9, 2013 at 6:56 pm
    After killing the appropriate processes, I still get the following error:

    java.sql.SQLException: Failed to create database 'metastore_db', see the
    next exception for details.
      at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
      at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
      at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
      at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
      at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
      at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
      at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
      at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
      at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
      at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
      at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
      at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
      at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
      at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
      at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
      at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
      at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
      at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
      at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
      at
    org.apache.hadoop.hive.metastore.RetryingRawStore.(RetryingRawStore.java:71)
      at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
      at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
      at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.(Server.java:349)
      at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database 'metastore_db',
    see the next exception for details.
      at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
      ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db
    cannot be created.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
      at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
      at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
      at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
      ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be
    created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
      at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
      at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
      at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
      ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
      at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
      at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
      at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
      at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
      at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
      at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
      at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
      at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
      at
    org.apache.hadoop.hive.metastore.RetryingRawStore.(RetryingRawStore.java:71)
      at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
      at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
      at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.(Server.java:349)
      at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db', see the
    next exception for details.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
      at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
      at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
      at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
      at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
      at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
      at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
      at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
      at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
      at
    org.datanucleus.store.rdbms.RDBMSStoreManager.(Native Method)
      at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
      at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
      at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
      at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
      at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
      at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
      at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
      at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
      at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
      at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
      at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
      at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
      at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database 'metastore_db',
    see the next exception for details.
      at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
      ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db
    cannot be created.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
      at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
      at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
      at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
      ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be
    created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
      at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
      at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
      at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
      ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0 seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0 seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0 seconds...
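    The XBM0H error means embedded Derby cannot create metastore_db under the
    process's working directory (/usr/share/hue here). A sketch of the
    check, using a scratch path for illustration — on the real box the fix is
    to make that directory writable by the user running Beeswax:

    ```shell
    # Confirm the working directory is writable before Derby creates metastore_db.
    d=$(mktemp -d)                 # stand-in for /usr/share/hue
    mkdir -p "$d/metastore_db"
    [ -w "$d/metastore_db" ] && echo writable
    # On the server (as root), roughly:
    #   chown -R hue:hue /usr/share/hue
    ```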

    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek wrote:

    Port 8888 is PID 22739. You should be able to execute "ps auxwww" or a
    similar command to find Beeswax and Hue. They should both be running as use
    Hue. Then just kill them both.

    -Abe

    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal wrote:

    also no pid's for 8002 and 8003

    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal wrote:

    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address Foreign Address
    State PID/Program name
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN 22739/python2.7
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN 30572/java
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN -
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN 30572/java
    tcp6 0 0 :::49932 :::*
    LISTEN -
    tcp6 0 0 :::111 :::*
    LISTEN -
    tcp6 0 0 :::22 :::*
    LISTEN -

    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek wrote:

    It definitely looks like they're in use. The command "netstat -tlnp"
    should show you their PIDs and process names. If they're Hue, you can
    simply 'kill' those processes using "kill <pid1> <pid2> ..." and start Hue
    again.


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN
    tcp6 0 0 :::49932 :::*
    LISTEN
    tcp6 0 0 :::111 :::*
    LISTEN
    tcp6 0 0 :::22 :::*
    LISTEN

    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek wrote:

    It looks like port 8888 is in use. Maybe 8002 and 8003 are in use as
    well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username as you
    suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at port
    8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek wrote:

    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    somehow grok'd into Hue's environment.
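A minimal sketch of those exports, with illustrative paths (substitute your real Hive directory). Note the absolute path: a literal `~` in an environment variable is not tilde-expanded by the scripts that read it, which is what produced the earlier find: `~/hive-0.10.0/lib' error.

```shell
# Illustrative paths -- point these at your actual Hive installation.
# Use absolute paths: a literal "~" in an environment variable is not
# tilde-expanded by the scripts that consume it.
export HIVE_HOME=/usr/local/hive-0.10.0
export HIVE_CONF_DIR="$HIVE_HOME/conf"
export HIVE_LIB="$HIVE_HOME/lib"
```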

    -Abe


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <abe@cloudera.com
    wrote:
    That error is likely caused by the earlier one: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you using a
    custom installation of Hive? Perhaps the Apache distribution rather than
    the CDH distribution?

    Note: It's probably not a good idea to change the user from 'hue'
    to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user in the
    supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not entirely
    sure if that directory is chown'd or if the first user to create it takes
    ownership. Either way, you probably don't have access to that directory
    using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml
    for an example 'hadoop.tmp.dir' that varies by username.
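For comparison, the stock core-default.xml keeps this path per-user with the ${user.name} substitution; a core-site.xml property along those lines (the value shown is Hadoop's shipped default) would be:

```xml
<!-- The trailing ${user.name} gives each user (hue included) a private,
     writable temp directory instead of one shared fixed path. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>
```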

    -Abe


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop/tmp</value>
        <description>A base for other temporary directories</description>
      </property>

      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:54310</value>
      </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be creating
    a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    File: `/tmp'
    Size: 4096         Blocks: 8          IO Block: 4096   directory
    Device: fd00h/64768d    Inode: 131082    Links: 11
    Access: (1777/drwxrwxrwt)  Uid: (0/root)  Gid: (0/root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
    Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary directory
    in '/tmp/'. What do your permissions on that directory look like? What is
    the output of "stat /tmp/"?

    On Thu, Aug 8, 2013 at 4:52 PM, wrote:

    I'm getting the following while trying to build Hue:

    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it doesn't use hue
    but the user that I'm logged in as which has read and write permissions in
    the hadoop dfs, hadoop, hive, etc. Not sure why it's doing this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford Class of
    2016 | (c) 650.215.3775 | (w) https://stanford.edu/~vsingal5

  • Romain Rigaux at Aug 9, 2013 at 6:59 pm
    Beeswax is trying to create the Hive metastore DB in /usr/share/hue, and
    your custom user does not have permission to write there:

    chown your_user:your_user /usr/share/hue
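A quick way to sanity-check the fix, sketched on a scratch directory (on the real machine the path would be /usr/share/hue, and the question is whether the user running Hue can create metastore_db there):

```shell
# Stand-in for /usr/share/hue; Derby needs to mkdir metastore_db inside it.
hue_dir=$(mktemp -d)
mkdir "$hue_dir/metastore_db"    # fails with "Permission denied" if unowned
ls -ld "$hue_dir/metastore_db"
# On the real box, the fix itself is:
#   sudo chown your_user:your_user /usr/share/hue
```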

    Romain

    On Fri, Aug 9, 2013 at 11:56 AM, Vinamrata Singal wrote:

    After killing the appropriate processes, I still get the following error:

    java.sql.SQLException: Failed to create database 'metastore_db', see the
    next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db
    cannot be created.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be
    created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db', see the
    next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db
    cannot be created.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be
    created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0 seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0 seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0 seconds...

    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek wrote:

    Port 8888 is PID 22739. You should be able to execute "ps auxwww" or a
    similar command to find Beeswax and Hue. They should both be running as user
    hue. Then just kill them both.

    -Abe

    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal wrote:

    also no pid's for 8002 and 8003


    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal <vsingal5@stanford.edu
    wrote:
    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:50060           0.0.0.0:*               LISTEN      30843/java
    tcp        0      0 0.0.0.0:44141           0.0.0.0:*               LISTEN      30312/java
    tcp        0      0 0.0.0.0:50030           0.0.0.0:*               LISTEN      30697/java
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      -
    tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      30312/java
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
    tcp        0      0 0.0.0.0:58776           0.0.0.0:*               LISTEN      30442/java
    tcp        0      0 0.0.0.0:8888            0.0.0.0:*               LISTEN      22739/python2.7
    tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      30442/java
    tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      30442/java
    tcp        0      0 0.0.0.0:55035           0.0.0.0:*               LISTEN      30572/java
    tcp        0      0 0.0.0.0:55039           0.0.0.0:*               LISTEN      30697/java
    tcp        0      0 127.0.0.1:60353         0.0.0.0:*               LISTEN      30843/java
    tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      -
    tcp        0      0 0.0.0.0:8003            0.0.0.0:*               LISTEN      -
    tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN      30442/java
    tcp        0      0 127.0.0.1:54310         0.0.0.0:*               LISTEN      30312/java
    tcp        0      0 0.0.0.0:40358           0.0.0.0:*               LISTEN      -
    tcp        0      0 127.0.0.1:54311         0.0.0.0:*               LISTEN      30697/java
    tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      30572/java
    tcp6       0      0 :::49932                :::*                    LISTEN      -
    tcp6       0      0 :::111                  :::*                    LISTEN      -
    tcp6       0      0 :::22                   :::*                    LISTEN      -

    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek wrote:

    It definitely looks like they're in use. The command "netstat -tlnp"
    should show you their PIDs and process names. If they're Hue, you can
    simply 'kill' those processes using "kill <pid1> <pid2> ..." and start Hue
    again.
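The PID column can be picked out of `netstat -tlnp` output mechanically; a small sketch (the sample lines below are illustrative -- on a live system, pipe the real netstat output into the function instead):

```shell
# Print the unique PIDs bound to Hue's ports (8888, 8002, 8003),
# skipping sockets whose owning process is unknown ("-").
extract_pids() {
  awk '$4 ~ /:(8888|8002|8003)$/ && $7 != "-" { split($7, a, "/"); print a[1] }' | sort -u
}

extract_pids <<'EOF'
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 22739/python2.7
tcp 0 0 0.0.0.0:8002 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
EOF
# Prints: 22739
```

The printed PIDs can then be handed to `kill` before starting Hue again.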


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp        0      0 0.0.0.0:50060           0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:44141           0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:50030           0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:58776           0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:8888            0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:55035           0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:55039           0.0.0.0:*               LISTEN
    tcp        0      0 127.0.0.1:60353         0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:8003            0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN
    tcp        0      0 127.0.0.1:54310         0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:40358           0.0.0.0:*               LISTEN
    tcp        0      0 127.0.0.1:54311         0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN
    tcp6       0      0 :::49932                :::*                    LISTEN
    tcp6       0      0 :::111                  :::*                    LISTEN
    tcp6       0      0 :::22                   :::*                    LISTEN

    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek wrote:

    It looks like port 8888 is in use. Maybe 8002 and 8003 are in use as
    well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username as you
    suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at port
    8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at port
    8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek wrote:

    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    some how grok'd into Hue's environment.

    -Abe


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    That error is likely because of the error: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you using a
    custom installation of Hive? Perhaps, the apache distribution rather than
    the CDH distribution?

    Note: It's probably not a good idea to change the user from
    'hue' to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user in the
    supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    xception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not entirely
    sure if that directory is chown'd or if the first user to create it takes
    ownership. Either way, you probably don't have access to that directory
    using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml for an example 'hadoop.tmp.dir' that varies by username.
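    The per-user default from core-default.xml looks like the following; a sketch
    of a core-site.xml override using the standard ${user.name} substitution:

    ```xml
    <property>
      <name>hadoop.tmp.dir</name>
      <!-- ${user.name} expands to the current user, so each user gets a
           private scratch directory instead of sharing one fixed path -->
      <value>/tmp/hadoop-${user.name}</value>
    </property>
    ```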

    -Abe


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
    <property>
    <name>hadoop.tmp.dir </name>
    <value>/app/hadoop/tmp</value>
    <description> A base for other temporary directories
    </description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
    </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be
    creating a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    File: `/tmp'
    Size: 4096 Blocks: 8 IO Block: 4096
    directory
    Device: fd00h/64768d Inode: 131082 Links: 11
    Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: (
    0/ root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
    Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary directory
    in '/tmp/'. What do your permissions on that directory look like? What is
    the output of "stat /tmp/"?

    On Thu, Aug 8, 2013 at 4:52 PM, wrote:

    I'm getting the following while trying to build Hue:

    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it doesn't use hue
    but the user that I'm logged in as which has read and write permissions in
    the hadoop dfs, hadoop, hive, etc. Not sure why it's doing this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford Class of 2016 | (c) 650.215.3775 | (w) https://stanford.edu/~vsingal5

  • Vinamrata Singal at Aug 9, 2013 at 7:08 pm
    Done, but I still get a bind-to-socket error; when I run netstat, there
    is no PID associated with the port:

    13/08/09 12:04:09 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 12:04:09 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main" org.apache.thrift.transport.TTransportException:
    Could not create ServerSocket on address 0.0.0.0/0.0.0.0:8002.
      at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
      at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
      at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
      at java.lang.Thread.run(Thread.java:724)
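    A possible explanation for the missing PID: non-root netstat prints '-' in
    the PID column for sockets owned by other users, so the port can be bound
    even though no PID is shown. A hedged sketch (the sample line mirrors the
    netstat output in this thread; re-running with privileges assumes sudo and
    the net-tools netstat):

    ```shell
    # Without root, netstat -tlnp cannot resolve PIDs of other users' sockets
    # and shows '-' instead. Sample listener line as seen without root:
    line='tcp 0 0 0.0.0.0:8002 0.0.0.0:* LISTEN -'
    printf '%s\n' "$line" | grep -cE ':(800[23]) '   # prints 1: the port IS bound
    # To see the owning PID, re-run with privileges, e.g.:
    #   sudo netstat -tlnp | grep -E ':(8002|8003) '
    ```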


    On Fri, Aug 9, 2013 at 11:59 AM, Romain Rigaux wrote:

    Beeswax is trying to create the Hive metastore DB in /usr/share/hue and
    your custom user does not have permission there:

    chown your_user:your_user /usr/share/hue

    Romain

    On Fri, Aug 9, 2013 at 11:56 AM, Vinamrata Singal wrote:

    After killing the appropriate processes, I still get the following error:

    java.sql.SQLException: Failed to create database 'metastore_db', see the
    next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db
    cannot be created.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be
    created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db', see the
    next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db
    cannot be created.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be
    created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0 seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0 seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0 seconds...

    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek wrote:

    Port 8888 is PID 22739. You should be able to execute "ps auxwww" or a
    similar command to find Beeswax and Hue. They should both be running as
    user 'hue'. Then just kill them both.
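    The netstat output above can also be turned into a PID programmatically;
    a minimal sketch (the sample line is copied from the netstat -tlnp output
    in this thread, and the awk field positions assume its default column layout):

    ```shell
    # Extract the PID bound to port 8888 from a netstat -tlnp style line.
    # Field 4 is the local address, field 7 is "PID/program".
    sample='tcp        0      0 0.0.0.0:8888   0.0.0.0:*   LISTEN   22739/python2.7'
    pid=$(printf '%s\n' "$sample" | awk '$4 ~ /:8888$/ {split($7, a, "/"); print a[1]}')
    echo "$pid"   # prints 22739; pass it to: kill "$pid"
    ```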

    -Abe


    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal <vsingal5@stanford.edu> wrote:
    also no pid's for 8002 and 8003


    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address Foreign Address
    State PID/Program name
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN 22739/python2.7
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN 30572/java
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN -
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN 30572/java
    tcp6 0 0 :::49932 :::*
    LISTEN -
    tcp6 0 0 :::111 :::*
    LISTEN -
    tcp6 0 0 :::22 :::*
    LISTEN -

    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek wrote:

    It definitely looks like they're in use. The command "netstat -tlnp"
    should show you their PIDs and process names. If they're Hue, you can
    simply 'kill' those processes using "kill <pid1> <pid2> ..." and start Hue
    again.


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN
    tcp6 0 0 :::49932 :::*
    LISTEN
    tcp6 0 0 :::111 :::*
    LISTEN
    tcp6 0 0 :::22 :::*
    LISTEN

    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek wrote:

    It looks like port 8888 is in use. Maybe 8002 and 8003 are in use
    as well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username as
    you suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at port
    8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at port
    8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)



    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek <abe@cloudera.com
    wrote:
    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    some how grok'd into Hue's environment.

    -Abe


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    That error is likely because of the error: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you using
    a custom installation of Hive? Perhaps, the apache distribution rather than
    the CDH distribution?

    Note: It's probably not a good idea to change the user from
    'hue' to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user in the
    supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    xception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not entirely
    sure if that directory is chown'd or if the first user to create it takes
    ownership. Either way, you probably don't have access to that directory
    using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xmlfor an example 'hadoop.tmp.dir' that varies by username.

    -Abe


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
      <property>
        <name>hadoop.tmp.dir </name>
        <value>/app/hadoop/tmp</value>
        <description> A base for other temporary directories </description>
      </property>

      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:54310</value>
      </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be
    creating a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    File: `/tmp'
    Size: 4096            Blocks: 8          IO Block: 4096   directory
    Device: fd00h/64768d  Inode: 131082      Links: 11
    Access: (1777/drwxrwxrwt)  Uid: (0/root)  Gid: (0/root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
    Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary
    directory in '/tmp/'. What do your permissions on that directory look like?
    What is the output of "stat /tmp/"?
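    The same condition can also be checked programmatically; a small
    sketch (editorial: RunJar uses Java's File.createTempFile, this just
    mirrors the create-and-delete round trip in Python):

```python
import os
import tempfile

def tmp_writable(path="/tmp"):
    """Return True if a file can be created (and removed) in `path`,
    roughly the operation RunJar's createTempFile needs to succeed."""
    try:
        fd, name = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.remove(name)
        return True
    except OSError:
        return False
```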

    On Thu, Aug 8, 2013 at 4:52 PM, wrote:

    I'm getting the following while trying to build Hue:

    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it runs not as 'hue' but as
    the user I'm logged in as, which has read and write permissions in
    the Hadoop DFS, Hadoop, Hive, etc. Not sure why it's doing this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford Class of
    2016 | (c) 650.215.3775 | (w)
    https://stanford.edu/~vsingal5

  • Abraham Elmahrek at Aug 9, 2013 at 7:14 pm
    The logs you've just posted mean that beeswaxd is starting :).

    On Fri, Aug 9, 2013 at 12:07 PM, Vinamrata Singal wrote:

    Done, but I still get a bind-to-socket error; when I run netstat
    there is no PID associated with the port:

    13/08/09 12:04:09 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 12:04:09 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 11:59 AM, Romain Rigaux wrote:

    Beeswax is trying to create the Hive metastore DB on /usr/share/hue and
    your custom user does not have the permissions:

    chown your_user:your_user /usr/share/hue

    Romain
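    The Derby error in this thread ("Directory /usr/share/hue/metastore_db
    cannot be created") is exactly this: Derby must mkdir metastore_db
    inside Hue's working directory. A quick sketch of the pre-flight
    check (the path is taken from the log; the helper name is made up):

```python
import os

def can_create_metastore(parent="/usr/share/hue"):
    # Derby needs to create `metastore_db` inside `parent`, which
    # requires the directory to exist and be writable and searchable.
    return os.path.isdir(parent) and os.access(parent, os.W_OK | os.X_OK)
```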

    On Fri, Aug 9, 2013 at 11:56 AM, Vinamrata Singal wrote:

    After killing the appropriate processes, I still get the following
    error:

    java.sql.SQLException: Failed to create database 'metastore_db', see the
    next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db
    cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be
    created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db', see the
    next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db
    cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be
    created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0
    seconds...

    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek wrote:

    Port 8888 is PID 22739. You should be able to execute "ps auxwww" or
    a similar command to find Beeswax and Hue. They should both be
    running as user hue. Then just kill them both.

    -Abe


    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Also, no PIDs for 8002 and 8003.


    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
    tcp   0      0      0.0.0.0:50060      0.0.0.0:*        LISTEN  30843/java
    tcp   0      0      0.0.0.0:44141      0.0.0.0:*        LISTEN  30312/java
    tcp   0      0      0.0.0.0:50030      0.0.0.0:*        LISTEN  30697/java
    tcp   0      0      0.0.0.0:111        0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:50070      0.0.0.0:*        LISTEN  30312/java
    tcp   0      0      0.0.0.0:22         0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:58776      0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      0.0.0.0:8888       0.0.0.0:*        LISTEN  22739/python2.7
    tcp   0      0      0.0.0.0:50010      0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      0.0.0.0:50075      0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      0.0.0.0:55035      0.0.0.0:*        LISTEN  30572/java
    tcp   0      0      0.0.0.0:55039      0.0.0.0:*        LISTEN  30697/java
    tcp   0      0      127.0.0.1:60353    0.0.0.0:*        LISTEN  30843/java
    tcp   0      0      0.0.0.0:8002       0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:8003       0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:50020      0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      127.0.0.1:54310    0.0.0.0:*        LISTEN  30312/java
    tcp   0      0      0.0.0.0:40358      0.0.0.0:*        LISTEN  -
    tcp   0      0      127.0.0.1:54311    0.0.0.0:*        LISTEN  30697/java
    tcp   0      0      0.0.0.0:50090      0.0.0.0:*        LISTEN  30572/java
    tcp6  0      0      :::49932           :::*             LISTEN  -
    tcp6  0      0      :::111             :::*             LISTEN  -
    tcp6  0      0      :::22              :::*             LISTEN  -

    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek wrote:

    It definitely looks like they're in use. The command "netstat -tlnp"
    should show you their PIDs and process names. If they're Hue, you can
    simply 'kill' those processes using "kill <pid1> <pid2> ..." and start Hue
    again.


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp   0  0  0.0.0.0:50060      0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:44141      0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50030      0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:111        0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50070      0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:22         0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:58776      0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:8888       0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50010      0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50075      0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:55035      0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:55039      0.0.0.0:*  LISTEN
    tcp   0  0  127.0.0.1:60353    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:8002       0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:8003       0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50020      0.0.0.0:*  LISTEN
    tcp   0  0  127.0.0.1:54310    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:40358      0.0.0.0:*  LISTEN
    tcp   0  0  127.0.0.1:54311    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50090      0.0.0.0:*  LISTEN
    tcp6  0  0  :::49932           :::*       LISTEN
    tcp6  0  0  :::111             :::*       LISTEN
    tcp6  0  0  :::22              :::*       LISTEN


    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek <abe@cloudera.com> wrote:
    It looks like port 8888 is in use. Maybe 8002 and 8003 are in use
    as well. Could you provide the output of 'netstat -tln'?

    -Abe
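    Both the "already in use" retry loop and the TTransportException
    boil down to a failed bind(). A minimal sketch of the same probe
    (plain stdlib, nothing Hue-specific; the function name is made up):

```python
import socket

def port_free(port, host="0.0.0.0"):
    """Try to bind and release host:port; False means some process
    already holds the address (EADDRINUSE)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()
```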


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username as
    you suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at port
    8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at port
    8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)



    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be
    set or somehow grok'd into Hue's environment.

    -Abe


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    That error is likely because of the error: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you using
    a custom installation of Hive? Perhaps, the apache distribution rather than
    the CDH distribution?

    Note: It's probably not a good idea to change the user from
    'hue' to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user in
    the supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    xception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native
    Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not
    entirely sure if that directory is chown'd or if the first user to create
    it takes ownership. Either way, you probably don't have access to that
    directory using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xmlfor an example 'hadoop.tmp.dir' that varies by username.

    -Abe


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
    <property>
    <name>hadoop.tmp.dir </name>
    <value>/app/hadoop/tmp</value>
    <description> A base for other temporary
    directories </description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
    </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be
    creating a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    File: `/tmp'
    Size: 4096 Blocks: 8 IO Block: 4096
    directory
    Device: fd00h/64768d Inode: 131082 Links: 11
    Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: (
    0/ root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
    Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary
    directory in '/tmp/'. What do your permissions on that directory look like?
    What is the output of "stat /tmp/"?

    On Thu, Aug 8, 2013 at 4:52 PM, wrote:

    I'm getting the following while trying to build Hue:

    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it doesn't use
    hue but the user that I'm logged in as which has read and write permissions
    in the hadoop dfs, hadoop, hive, etc. Not sure why it's doing this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford Class
    of 2016 | (c) 650.215.3775 | (w)
    https://stanford.edu/~vsingal5

  • Vinamrata Singal at Aug 9, 2013 at 7:16 pm
    It didn't send this part:

    Exception in thread "main" org.apache.thrift.transport.TTransportException:
    Could not create ServerSocket on address 0.0.0.0/0.0.0.0:8002.
      at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
      at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
      at com.cloudera.beeswax.Server.main(Server.java:214)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
      at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
      at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
      at com.cloudera.beeswax.Server$1.run(Server.java:200)
      at java.lang.Thread.run(Thread.java:724)

    On Fri, Aug 9, 2013 at 12:14 PM, Abraham Elmahrek wrote:

    The logs you've just posted mean that beeswaxd is starting :).

    On Fri, Aug 9, 2013 at 12:07 PM, Vinamrata Singal wrote:

    Done, but I still get a bind-to-socket error, and when I run netstat
    there is no PID associated with the port:

    13/08/09 12:04:09 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 12:04:09 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 11:59 AM, Romain Rigaux wrote:

    Beeswax is trying to create the Hive metastore DB on /usr/share/hue and
    your custom user does not have the permissions:

    chown your_user:your_user /usr/share/hue

    Romain
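
    The snippet below sanity-checks Romain's suggestion without touching the
    real /usr/share/hue (it runs against a scratch directory, so no sudo is
    needed; `your_user` is the placeholder from Romain's message):

```shell
# On the real system you would run, as root:
#   sudo chown -R your_user:your_user /usr/share/hue
# Here we demonstrate the same check/fix on a scratch directory we own.
dir=$(mktemp -d)
chown "$(id -un)" "$dir"          # same syntax; a no-op on a dir we created
stat -c '%U' "$dir"               # prints the owning user
touch "$dir/metastore_db.probe"   # Derby needs to be able to create files here
rm -r "$dir"
```

    If the `touch` equivalent fails under /usr/share/hue for the user running
    Beeswaxd, Derby will keep failing with "Directory
    /usr/share/hue/metastore_db cannot be created."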


    On Fri, Aug 9, 2013 at 11:56 AM, Vinamrata Singal <vsingal5@stanford.edu
    wrote:
    After killing the appropriate processes, I still get the following
    error:

    java.sql.SQLException: Failed to create database 'metastore_db', see
    the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db
    cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be
    created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db', see
    the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db
    cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be
    created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0
    seconds...

    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek wrote:

    Port 8888 is PID 22739. You should be able to execute "ps auxwww" or
    a similar command to find Beeswax and Hue. They should both be running as
    user hue. Then just kill them both.

    -Abe


    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    also no PIDs for 8002 and 8003


    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address Foreign Address
    State PID/Program name
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN 22739/python2.7
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN 30572/java
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN -
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN 30572/java
    tcp6 0 0 :::49932 :::*
    LISTEN -
    tcp6 0 0 :::111 :::*
    LISTEN -
    tcp6 0 0 :::22 :::*
    LISTEN -

    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek wrote:

    It definitely looks like they're in use. The command "netstat
    -tlnp" should show you their PIDs and process names. If they're Hue, you
    can simply 'kill' those processes using "kill <pid1> <pid2> ..." and start
    Hue again.
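
    One way to script the lookup Abe describes (a sketch; port 8888 and the
    sample line come from the netstat output in this thread, and the awk field
    positions assume that `netstat -tlnp` layout):

```shell
# Extract the PID of whatever is listening on port 8888 from `netstat -tlnp`
# output. Field 4 is the local address, field 7 is PID/program name.
netstat_line="tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 22739/python2.7"
pid=$(echo "$netstat_line" | awk '$4 ~ /:8888$/ {split($7, a, "/"); print a[1]}')
echo "$pid"   # 22739
# Against the live system you would pipe netstat itself:
#   pid=$(netstat -tlnp 2>/dev/null | awk '$4 ~ /:8888$/ {split($7,a,"/"); print a[1]}')
#   [ -n "$pid" ] && kill "$pid"
```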


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN
    tcp6 0 0 :::49932 :::*
    LISTEN
    tcp6 0 0 :::111 :::*
    LISTEN
    tcp6 0 0 :::22 :::*
    LISTEN


    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like port 8888 is in use. Maybe 8002 and 8003 are in use
    as well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username as
    you suggested), but now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at
    port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at port
    8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not
    create ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)



    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    somehow grok'd into Hue's environment.

    -Abe
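
    A sketch of such exports, run before starting Hue (the paths mirror the
    ones in this thread and are assumptions for your install; note that a
    literal `~` read from a config file is not tilde-expanded, which is what
    the `find: `~/hive-0.10.0/lib'` error in the log suggests):

```shell
# Hypothetical paths -- adjust to where Hive actually lives.
# Use $HOME or an absolute path; a literal '~' inside a value is NOT
# expanded by the shell, so `find '~/hive-0.10.0/lib'` fails.
export HIVE_HOME="$HOME/hive-0.10.0"
export HIVE_CONF_DIR="$HIVE_HOME/conf"
export HIVE_LIB="$HIVE_HOME/lib"
echo "$HIVE_LIB"
```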


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    That error is likely caused by the earlier error: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you
    using a custom installation of Hive? Perhaps the Apache distribution
    rather than the CDH distribution?

    Note: It's probably not a good idea to change the user from
    'hue' to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user in
    the supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native
    Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not
    entirely sure if that directory is chown'd or if the first user to create
    it takes ownership. Either way, you probably don't have access to that
    directory using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml for an example 'hadoop.tmp.dir' that varies by username.

    -Abe
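
    [Editor's note] The per-user substitution that core-default.xml uses for
    this is ${user.name}. Applied to the core-site.xml quoted below, the
    property would look like the following (the base path here mirrors the one
    in the thread and is only an assumption):

    ```xml
    <property>
      <name>hadoop.tmp.dir</name>
      <!-- ${user.name} is substituted per user, so 'hue' and other users
           get separate, independently-owned temp directories -->
      <value>/app/hadoop/tmp/${user.name}</value>
    </property>
    ```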


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
    <property>
    <name>hadoop.tmp.dir </name>
    <value>/app/hadoop/tmp</value>
    <description> A base for other temporary
    directories </description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
    </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be
    creating a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    File: `/tmp'
    Size: 4096          Blocks: 8          IO Block: 4096   directory
    Device: fd00h/64768d    Inode: 131082    Links: 11
    Access: (1777/drwxrwxrwt)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
    Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary
    directory in '/tmp/'. What do your permissions on that directory look like?
    What is the output of "stat /tmp/"?

    On Thu, Aug 8, 2013 at 4:52 PM, wrote:

    I'm getting the following while trying to build Hue:

    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it doesn't use
    hue but the user that I'm logged in as which has read and write permissions
    in the hadoop dfs, hadoop, hive, etc. Not sure why it's doing this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford Class
    of 2016 | (c) 650.215.3775 | (w)
    https://stanford.edu/~vsingal5

  • Romain Rigaux at Aug 9, 2013 at 7:20 pm
    Make sure that "ps -ef | grep hue" and "ps -ef | grep beeswax" both return nothing.

    Also curious, what is the use case of not running Hue as 'hue'?

    Romain
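
    [Editor's note] The "Could not create ServerSocket" traces in this thread
    mean something is already bound to those ports (often a stale Beeswax
    process). A quick stdlib-only sketch for checking whether the ports are
    free before restarting — the port list is taken from the thread, and this
    is only a convenience check, not part of Hue itself:

    ```python
    import socket

    def port_free(port, host="0.0.0.0"):
        """Return True if nothing is currently bound to the given TCP port."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
        finally:
            s.close()

    # Ports from this thread: 8888 (Hue web UI), 8002 (beeswaxd), 8003 (metastore)
    for p in (8888, 8002, 8003):
        print(p, "free" if port_free(p) else "in use")
    ```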

    On Fri, Aug 9, 2013 at 12:16 PM, Vinamrata Singal wrote:

    It didn't send this part:

    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)

    On Fri, Aug 9, 2013 at 12:14 PM, Abraham Elmahrek wrote:

    The logs you've just posted mean that beeswaxd is starting :).

    On Fri, Aug 9, 2013 at 12:07 PM, Vinamrata Singal wrote:

    Done, but I still get a bind to socket error, but when I run netstat
    there is no pid associated with the port:

    13/08/09 12:04:09 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 12:04:09 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 11:59 AM, Romain Rigaux wrote:

    Beeswax is trying to create the Hive metastore DB on /usr/share/hue and
    your custom user does not have the permissions:

    chown your_user:your_user /usr/share/hue

    Romain


    On Fri, Aug 9, 2013 at 11:56 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    After killing the appropriate processes, I still get the following
    error:

    java.sql.SQLException: Failed to create database 'metastore_db', see
    the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown
    Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot
    be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db', see
    the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot
    be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0
    seconds...

    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek wrote:

    Port 8888 is PID 22739. You should be able to execute "ps auxwww" or
    a similar command to find Beeswax and Hue. They should both be running as
    user 'hue'. Then just kill them both.

    -Abe


    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    also no pid's for 8002 and 8003


    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address Foreign Address
    State PID/Program name
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN 22739/python2.7
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN 30572/java
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN -
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN 30572/java
    tcp6 0 0 :::49932 :::*
    LISTEN -
    tcp6 0 0 :::111 :::*
    LISTEN -
    tcp6 0 0 :::22 :::*
    LISTEN -


    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek <abe@cloudera.com>
    wrote:
    It definitely looks like they're in use. The command "netstat
    -tlnp" should show you their PIDs and process names. If they're Hue, you
    can simply 'kill' those processes using "kill <pid1> <pid2> ..." and start
    Hue again.


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN
    tcp6 0 0 :::49932 :::*
    LISTEN
    tcp6 0 0 :::111 :::*
    LISTEN
    tcp6 0 0 :::22 :::*
    LISTEN


    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like port 8888 is in use. Maybe 8002 and 8003 are in
    use as well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username as
    you suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0
    seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at
    port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at
    port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not
    create ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)



    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    somehow grok'd into Hue's environment.

    -Abe


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
That error is likely caused by the earlier error: find:
`~/hive-0.10.0/lib': No such file or directory. Are you
using a custom installation of Hive? Perhaps the Apache distribution
rather than the CDH distribution?

    Note: It's probably not a good idea to change the user from
    'hue' to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user in
    the supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native
    Method)
    at
    java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not
    entirely sure if that directory is chown'd or if the first user to create
    it takes ownership. Either way, you probably don't have access to that
    directory using the user starting Beeswaxd (hue). Take a look at
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml for an example 'hadoop.tmp.dir' that varies by username.

    -Abe
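
For reference, a per-user setting would look like the following in core-site.xml (a sketch based on the default value documented in core-default.xml):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <!-- ${user.name} gives each user a private scratch directory -->
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>
```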


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
    <property>
    <name>hadoop.tmp.dir </name>
    <value>/app/hadoop/tmp</value>
    <description> A base for other temporary
    directories </description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
    </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be
    creating a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
  File: `/tmp'
  Size: 4096            Blocks: 8          IO Block: 4096   directory
Device: fd00h/64768d    Inode: 131082      Links: 11
Access: (1777/drwxrwxrwt)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-08-08 22:47:49.435308999 +0000
Modify: 2013-08-09 00:01:17.251309001 +0000
Change: 2013-08-09 00:01:17.251309001 +0000
 Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary
    directory in '/tmp/'. What do your permissions on that directory look like?
    What is the output of "stat /tmp/"?


On Thu, Aug 8, 2013 at 4:52 PM, <vsingal5@stanford.edu> wrote:
I'm getting the following while trying to build Hue:

    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

I've changed the configuration file so it doesn't use
hue but the user I'm logged in as, which has read and write permissions
on the Hadoop DFS, Hadoop, Hive, etc. Not sure why it's doing this...

    --
    Best,
Vinamrata Singal | BS Computer Science | Stanford Class of 2016 | (c) 650.215.3775 | (w) https://stanford.edu/~vsingal5

  • Vinamrata Singal at Aug 9, 2013 at 7:24 pm
Even though I granted permissions on derby.log and on the directory where the
metastore gets created, why am I still getting this error:

    2013-08-09 19:23:15.059 GMT Thread[MetaServerThread,5,main]
    java.io.FileNotFoundException: derby.log (Permission denied)
    2013-08-09 19:23:15.193 GMT Thread[MetaServerThread,5,main] Cleanup action
    starting
    ERROR XBM0H: Directory /home/vsingal/metastore_db cannot be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
      at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
      at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
      at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
      at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
      at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
      at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
      at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
      at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
      at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
      at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
      at
org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
      at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
      at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
      at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
      at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
      at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
      at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
      at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
      at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
      at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
      at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
      at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
      at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 12:20 PM, Romain Rigaux wrote:

Make sure that both "ps -ef | grep hue" and "ps -ef | grep beeswax" return nothing.

    Also curious, what is the use case of not running Hue as 'hue'?

    Romain
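
If the processes are gone but the bind error persists, the port may still be held (for example by a process netstat cannot attribute, or a socket in TIME_WAIT). A small check independent of netstat (hypothetical helper, not part of Hue; the port numbers come from the log above):

```python
import socket

def port_in_use(port, host="0.0.0.0"):
    """Return True if a TCP bind to host:port fails, i.e. something holds it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))
    except OSError:
        return True
    finally:
        sock.close()  # safe even if bind() failed
    return False

# Ports Beeswaxd and Hue try to bind, taken from the log above.
for port in (8002, 8003, 8888):
    print(port, "in use" if port_in_use(port) else "free")
```

All three ports must report "free" before restarting Hue; otherwise find and kill whatever holds them.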

    On Fri, Aug 9, 2013 at 12:16 PM, Vinamrata Singal wrote:

    It didn't send this part:

    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)

    On Fri, Aug 9, 2013 at 12:14 PM, Abraham Elmahrek wrote:

    The logs you've just posted mean that beeswaxd is starting :).


    On Fri, Aug 9, 2013 at 12:07 PM, Vinamrata Singal <vsingal5@stanford.edu
    wrote:
Done, but I still get a socket bind error, even though when I run netstat
there is no PID associated with the port:

    13/08/09 12:04:09 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 12:04:09 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 11:59 AM, Romain Rigaux wrote:

Beeswax is trying to create the Hive metastore DB in /usr/share/hue,
and your custom user does not have permission:

    chown your_user:your_user /usr/share/hue

    Romain


    On Fri, Aug 9, 2013 at 11:56 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    After killing the appropriate processes, I still get the following
    error:

    java.sql.SQLException: Failed to create database 'metastore_db', see
    the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown
    Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot
    be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db', see
    the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
    Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot
    be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0
    seconds...

    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek wrote:

    Port 8888 is PID 22739. You should be able to execute "ps auxwww"
    or a similar command to find Beeswax and Hue. They should both be running
    as user 'hue'. Then just kill them both.

    -Abe


    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    also no pid's for 8002 and 8003


    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have the following output but can't find port 8888

    Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
    tcp        0      0 0.0.0.0:50060      0.0.0.0:*          LISTEN   30843/java
    tcp        0      0 0.0.0.0:44141      0.0.0.0:*          LISTEN   30312/java
    tcp        0      0 0.0.0.0:50030      0.0.0.0:*          LISTEN   30697/java
    tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN   -
    tcp        0      0 0.0.0.0:50070      0.0.0.0:*          LISTEN   30312/java
    tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   -
    tcp        0      0 0.0.0.0:58776      0.0.0.0:*          LISTEN   30442/java
    tcp        0      0 0.0.0.0:8888       0.0.0.0:*          LISTEN   22739/python2.7
    tcp        0      0 0.0.0.0:50010      0.0.0.0:*          LISTEN   30442/java
    tcp        0      0 0.0.0.0:50075      0.0.0.0:*          LISTEN   30442/java
    tcp        0      0 0.0.0.0:55035      0.0.0.0:*          LISTEN   30572/java
    tcp        0      0 0.0.0.0:55039      0.0.0.0:*          LISTEN   30697/java
    tcp        0      0 127.0.0.1:60353    0.0.0.0:*          LISTEN   30843/java
    tcp        0      0 0.0.0.0:8002       0.0.0.0:*          LISTEN   -
    tcp        0      0 0.0.0.0:8003       0.0.0.0:*          LISTEN   -
    tcp        0      0 0.0.0.0:50020      0.0.0.0:*          LISTEN   30442/java
    tcp        0      0 127.0.0.1:54310    0.0.0.0:*          LISTEN   30312/java
    tcp        0      0 0.0.0.0:40358      0.0.0.0:*          LISTEN   -
    tcp        0      0 127.0.0.1:54311    0.0.0.0:*          LISTEN   30697/java
    tcp        0      0 0.0.0.0:50090      0.0.0.0:*          LISTEN   30572/java
    tcp6       0      0 :::49932           :::*               LISTEN   -
    tcp6       0      0 :::111             :::*               LISTEN   -
    tcp6       0      0 :::22              :::*               LISTEN   -


    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It definitely looks like they're in use. The command "netstat
    -tlnp" should show you their PIDs and process names. If they're Hue, you
    can simply 'kill' those processes using "kill <pid1> <pid2> ..." and start
    Hue again.
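
    The netstat/kill workflow above can be sketched end-to-end. The awk
    filter and port number are illustrative, and the canned sample stands in
    for live "netstat -tlnp" output so the snippet runs anywhere:

    ```shell
    # Extract the PID that owns a given listening port from `netstat -tlnp`-style
    # output. The sample below is canned; in practice pipe the real command:
    #   netstat -tlnp | awk -v port=8888 '...'   and then:   kill <pid>
    sample='tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 22739/python2.7
    tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 30312/java'

    # Match the port at the end of the Local Address column, then split the
    # "PID/Program" column on "/" to keep only the PID.
    echo "$sample" | awk -v port=8888 '$4 ~ (":" port "$") { split($7, p, "/"); print p[1] }'
    # prints 22739 -- the PID to pass to kill
    ```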


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp        0      0 0.0.0.0:50060      0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:44141      0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:50030      0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:50070      0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:58776      0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:8888       0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:50010      0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:50075      0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:55035      0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:55039      0.0.0.0:*          LISTEN
    tcp        0      0 127.0.0.1:60353    0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:8002       0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:8003       0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:50020      0.0.0.0:*          LISTEN
    tcp        0      0 127.0.0.1:54310    0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:40358      0.0.0.0:*          LISTEN
    tcp        0      0 127.0.0.1:54311    0.0.0.0:*          LISTEN
    tcp        0      0 0.0.0.0:50090      0.0.0.0:*          LISTEN
    tcp6       0      0 :::49932           :::*               LISTEN
    tcp6       0      0 :::111             :::*               LISTEN
    tcp6       0      0 :::22              :::*               LISTEN


    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like port 8888 is in use. Maybe 8002 and 8003 are in
    use as well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username
    as you suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    0.5 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    1.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    2.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    4.0 seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at
    port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at
    port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not
    create ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)



    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    somehow grok'd into Hue's environment.

    -Abe
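
    The environment setup described above can be sketched like this. The
    install prefix is hypothetical; the one firm point is to use absolute
    paths rather than "~", since the "find: `~/hive-0.10.0/lib'" error in
    the log comes from a literal, unexpanded tilde:

    ```shell
    # Hypothetical Hive environment before starting Hue; adjust HIVE_HOME to
    # your actual install prefix. A literal "~" in a config value is not
    # expanded by the process that later runs find/hadoop, hence the error.
    export HIVE_HOME=/home/vsingal/hive-0.10.0   # assumption: tarball unpacked here
    export HIVE_CONF_DIR="$HIVE_HOME/conf"
    export HIVE_LIB="$HIVE_HOME/lib"
    echo "$HIVE_LIB"
    # prints /home/vsingal/hive-0.10.0/lib
    ```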


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    That error is likely caused by: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you
    using a custom installation of Hive? Perhaps the Apache distribution
    rather than the CDH distribution?

    Note: It's probably not a good idea to change the user from
    'hue' to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user in
    the supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native
    Method)
    at
    java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not
    entirely sure if that directory is chown'd or if the first user to create
    it takes ownership. Either way, you probably don't have access to that
    directory using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml for an example 'hadoop.tmp.dir' that varies by username.

    -Abe
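
    The per-user default pointed at above looks like this in
    core-default.xml; putting a ${user.name}-based value in core-site.xml
    gives each user (hue included) its own writable temp root:

    ```xml
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/tmp/hadoop-${user.name}</value>
      <description>A base for other temporary directories.</description>
    </property>
    ```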


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop/tmp</value>
        <description>A base for other temporary directories</description>
      </property>

      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:54310</value>
      </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be
    creating a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
      File: `/tmp'
      Size: 4096            Blocks: 8          IO Block: 4096   directory
    Device: fd00h/64768d    Inode: 131082      Links: 11
    Access: (1777/drwxrwxrwt)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
     Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary
    directory in '/tmp/'. What do your permissions on that directory look like?
    What is the output of "stat /tmp/"?
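
    What this check is looking for: /tmp should be mode 1777 (world-writable
    with the sticky bit). A minimal sketch, run against a scratch directory
    so it is safe to execute anywhere; point it at /tmp for the real check:

    ```shell
    # Verify a directory carries mode 1777, the expected permissions for /tmp.
    # Uses a throwaway scratch dir; substitute /tmp to check the real one.
    DIR=$(mktemp -d)
    chmod 1777 "$DIR"
    mode=$(stat -c '%a' "$DIR")        # GNU stat; octal mode incl. sticky bit
    echo "mode=$mode"
    [ "$mode" = "1777" ] && echo "sticky bit + world-write OK"
    rmdir "$DIR"
    ```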


    On Thu, Aug 8, 2013 at 4:52 PM, <
    vsingal5@stanford.edu> wrote:
    I'm getting the following while trying to build Hue:

    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it doesn't
    use 'hue' but the user I'm logged in as, which has read and write
    permissions in the hadoop dfs, hadoop, hive, etc. Not sure why it's doing
    this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford
    Class of 2016 | (c) 650.215.3775 | (w)
    https://stanford.edu/~vsingal5

  • Romain Rigaux at Aug 9, 2013 at 7:29 pm
    We recommend running Hue as 'hue', since that is the standard. Otherwise you
    need to chown or chmod all of the log and database directories/files.

    Romain

    On Fri, Aug 9, 2013 at 12:24 PM, Vinamrata Singal wrote:

    Even though I gave permissions to derby.log and where the metastore gets
    created, why am I still getting this error:

    2013-08-09 19:23:15.059 GMT Thread[MetaServerThread,5,main]
    java.io.FileNotFoundException: derby.log (Permission denied)
    2013-08-09 19:23:15.193 GMT Thread[MetaServerThread,5,main] Cleanup action
    starting
    ERROR XBM0H: Directory /home/vsingal/metastore_db cannot be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 12:20 PM, Romain Rigaux wrote:

    Make sure "ps -ef | grep hue" and "ps -ef | grep beeswax" both return nothing.

    Also curious, what is the use case of not running Hue as 'hue'?

    Romain

    On Fri, Aug 9, 2013 at 12:16 PM, Vinamrata Singal wrote:

    It didn't send this part:

    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)

    On Fri, Aug 9, 2013 at 12:14 PM, Abraham Elmahrek wrote:

    The logs you've just posted mean that beeswaxd is starting :).


    On Fri, Aug 9, 2013 at 12:07 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Done, but I still get a bind-to-socket error, and when I run netstat
    there is no PID associated with the port:

    13/08/09 12:04:09 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 12:04:09 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 11:59 AM, Romain Rigaux wrote:

    Beeswax is trying to create the Hive metastore DB in /usr/share/hue,
    and your custom user does not have permission:

    chown your_user:your_user /usr/share/hue

    Romain
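
    Before re-running, it is worth confirming the fix took. A small sketch,
    with a scratch directory standing in for /usr/share/hue (the directory
    where Derby is trying to create metastore_db):

    ```shell
    # Check that the current user can write where Derby will create metastore_db.
    # A scratch dir stands in for /usr/share/hue; substitute the real path.
    DIR=$(mktemp -d)
    if [ -w "$DIR" ]; then
      echo "writable: metastore_db can be created here"
    else
      echo "not writable: chown it first, e.g. sudo chown $(id -un) <dir>"
    fi
    rmdir "$DIR"
    ```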


    On Fri, Aug 9, 2013 at 11:56 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    After killing the appropriate processes, I still get the following
    error:

    java.sql.SQLException: Failed to create database 'metastore_db', see
    the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown
    Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot
    be created.
    at
    org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db', see
    the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown
    Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot
    be created.
    at
    org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0
    seconds...

    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek wrote:

    Port 8888 is PID 22739. You should be able to execute "ps auxwww"
    or a similar command to find Beeswax and Hue. They should both be running
    as user hue. Then just kill them both.

    -Abe


    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    also no pid's for 8002 and 8003


    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
    tcp   0      0      0.0.0.0:50060    0.0.0.0:*        LISTEN  30843/java
    tcp   0      0      0.0.0.0:44141    0.0.0.0:*        LISTEN  30312/java
    tcp   0      0      0.0.0.0:50030    0.0.0.0:*        LISTEN  30697/java
    tcp   0      0      0.0.0.0:111      0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:50070    0.0.0.0:*        LISTEN  30312/java
    tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:58776    0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      0.0.0.0:8888     0.0.0.0:*        LISTEN  22739/python2.7
    tcp   0      0      0.0.0.0:50010    0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      0.0.0.0:50075    0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      0.0.0.0:55035    0.0.0.0:*        LISTEN  30572/java
    tcp   0      0      0.0.0.0:55039    0.0.0.0:*        LISTEN  30697/java
    tcp   0      0      127.0.0.1:60353  0.0.0.0:*        LISTEN  30843/java
    tcp   0      0      0.0.0.0:8002     0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:8003     0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:50020    0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      127.0.0.1:54310  0.0.0.0:*        LISTEN  30312/java
    tcp   0      0      0.0.0.0:40358    0.0.0.0:*        LISTEN  -
    tcp   0      0      127.0.0.1:54311  0.0.0.0:*        LISTEN  30697/java
    tcp   0      0      0.0.0.0:50090    0.0.0.0:*        LISTEN  30572/java
    tcp6  0      0      :::49932         :::*             LISTEN  -
    tcp6  0      0      :::111           :::*             LISTEN  -
    tcp6  0      0      :::22            :::*             LISTEN  -


    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It definitely looks like they're in use. The command "netstat
    -tlnp" should show you their PIDs and process names. If they're Hue, you
    can simply 'kill' those processes using "kill <pid1> <pid2> ..." and start
    Hue again.
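The kill-by-port step above can be sketched as a small shell pipeline. This is a sketch under assumptions: the `netstat -tlnp` column layout matches the output pasted in this thread (the last field is "PID/Program name"), and 8888 is the port to free. The sample line is hard-coded so the parsing step is reproducible; in a live session you would feed real `netstat` output in instead.

```shell
# Parse the PID out of a "netstat -tlnp" LISTEN line (sample line copied
# from the output shown elsewhere in this thread).
line='tcp        0      0 0.0.0.0:8888    0.0.0.0:*    LISTEN    22739/python2.7'

# The last whitespace-separated field is "PID/Program name"; keep the PID part.
pid=$(printf '%s\n' "$line" | awk '{split($NF, a, "/"); print a[1]}')
echo "$pid"   # prints 22739

# In a live session (assumption: GNU netstat, run as root to see all PIDs):
#   pid=$(netstat -tlnp 2>/dev/null | awk '/:8888 .*LISTEN/ {split($NF, a, "/"); print a[1]}')
#   kill "$pid"
```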


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp   0  0  0.0.0.0:50060    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:44141    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50030    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:111      0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50070    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:22       0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:58776    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:8888     0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50010    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50075    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:55035    0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:55039    0.0.0.0:*  LISTEN
    tcp   0  0  127.0.0.1:60353  0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:8002     0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:8003     0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50020    0.0.0.0:*  LISTEN
    tcp   0  0  127.0.0.1:54310  0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:40358    0.0.0.0:*  LISTEN
    tcp   0  0  127.0.0.1:54311  0.0.0.0:*  LISTEN
    tcp   0  0  0.0.0.0:50090    0.0.0.0:*  LISTEN
    tcp6  0  0  :::49932         :::*       LISTEN
    tcp6  0  0  :::111           :::*       LISTEN
    tcp6  0  0  :::22            :::*       LISTEN


    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like port 8888 is in use. Maybe 8002 and 8003 are in
    use as well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username
    as you suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    0.5 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    1.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    2.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    4.0 seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at
    port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at
    port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not
    create ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)



    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    somehow grok'd into Hue's environment.
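The environment setup described above can be sketched as shell exports. A sketch under assumptions: the exact variable set Beeswax reads comes from apps/beeswax/beeswax_server.sh, and the install path below is hypothetical. Note that the `find: '~/hive-0.10.0/lib'` error in the pasted log suggests the tilde was never expanded, so absolute paths are used here.

```shell
# Sketch: environment Beeswax needs before Hue starts. The Hive install
# path is illustrative; use your real, absolute path (a literal "~" inside
# a variable is not expanded, which is why find failed above).
export HIVE_HOME=/home/someuser/hive-0.10.0   # hypothetical absolute path
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HIVE_LIB=$HIVE_HOME/lib
echo "$HIVE_LIB"   # prints /home/someuser/hive-0.10.0/lib
```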

    -Abe


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    That error is likely because of the error: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you
    using a custom installation of Hive? Perhaps, the apache distribution
    rather than the CDH distribution?

    Note: It's probably not a good idea to change the user
    from 'hue' to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user
    in the supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main" java.lang.NoClassDefFoundError:
    org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at
    java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native
    Method)
    at
    java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not
    entirely sure if that directory is chown'd or if the first user to create
    it takes ownership. Either way, you probably don't have access to that
    directory using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml for an example 'hadoop.tmp.dir' that varies by username.
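For reference, the core-default.xml page mentioned above defines hadoop.tmp.dir with a per-user component. A sketch of the corresponding core-site.xml override (the /tmp base mirrors the Hadoop default; adjust the path to taste):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories, one per user.</description>
</property>
```

With ${user.name} in the value, the hue user and your login user each get their own scratch directory instead of fighting over a single shared one.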

    -Abe


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
    <property>
    <name>hadoop.tmp.dir </name>
    <value>/app/hadoop/tmp</value>
    <description> A base for other temporary
    directories </description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
    </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be
    creating a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    File: `/tmp'
    Size: 4096        Blocks: 8          IO Block: 4096   directory
    Device: fd00h/64768d    Inode: 131082     Links: 11
    Access: (1777/drwxrwxrwt)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
    Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary
    directory in '/tmp/'. What do your permissions on that directory look like?
    What is the output of "stat /tmp/"?


    On Thu, Aug 8, 2013 at 4:52 PM, <
    vsingal5@stanford.edu> wrote:
    I'm getting the following while trying to build Hue:

    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it doesn't
    use hue but instead the user I'm logged in as, which has read and write
    permissions in the Hadoop DFS, Hadoop, Hive, etc. Not sure why it's doing
    this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford
    Class of 2016 | (c) 650.215.3775 | (w)
    https://stanford.edu/~vsingal5

  • Vinamrata Singal at Aug 9, 2013 at 7:32 pm
    Hue is being run as hue, as that is what is specified in supervisor.py

    I am logged in as another user, is that OK?

    And apparently it's trying to create a new metastore_db instead of using
    the one that's already created, as specified in hive-site.xml as well as
    the Hive lib variables?

    On Fri, Aug 9, 2013 at 12:29 PM, Romain Rigaux wrote:

    We recommend running Hue as 'hue', as that is the standard. Otherwise you
    need to chown or chmod all the log and DB directories/files.

    Romain

    On Fri, Aug 9, 2013 at 12:24 PM, Vinamrata Singal wrote:

    Even though I gave permissions to derby.log and where the metastore gets
    created, why am I still getting this error:

    2013-08-09 19:23:15.059 GMT Thread[MetaServerThread,5,main]
    java.io.FileNotFoundException: derby.log (Permission denied)
    2013-08-09 19:23:15.193 GMT Thread[MetaServerThread,5,main] Cleanup
    action starting
    ERROR XBM0H: Directory /home/vsingal/metastore_db cannot be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 12:20 PM, Romain Rigaux wrote:

    Make sure "ps -ef | grep hue" and "ps -ef | grep beeswax" both return nothing.

    Also curious, what is the use case of not running Hue as 'hue'?

    Romain


    On Fri, Aug 9, 2013 at 12:16 PM, Vinamrata Singal <vsingal5@stanford.edu> wrote:
    It didn't send this part:

    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)

    On Fri, Aug 9, 2013 at 12:14 PM, Abraham Elmahrek wrote:

    The logs you've just posted mean that beeswaxd is starting :).


    On Fri, Aug 9, 2013 at 12:07 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Done, but I still get a socket bind error, but when I run netstat
    there is no PID associated with the port:

    13/08/09 12:04:09 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 12:04:09 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
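One thing worth checking when netstat shows no owning PID but the bind still fails: connections lingering in TIME_WAIT on a port can block a plain bind (one made without SO_REUSEADDR) until they expire. A minimal, illustrative helper (not part of Hue) to probe whether a port is currently bindable:

```python
import socket

def port_free(port, host="127.0.0.1"):
    """Return True if a plain listening socket can be bound on (host, port)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # No SO_REUSEADDR on purpose: this mirrors the strict bind that fails
        # in the Thrift logs above.
        s.bind((host, port))
        s.listen(1)
        return True
    except socket.error:  # alias of OSError on Python 3; works on 2.7 too
        return False
    finally:
        s.close()
```

If port_free(8002) returns False while netstat shows no owner, waiting out TIME_WAIT (or having the server set SO_REUSEADDR) is usually the fix.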


    On Fri, Aug 9, 2013 at 11:59 AM, Romain Rigaux wrote:

    Beeswax is trying to create the Hive metastore DB in /usr/share/hue
    and your custom user does not have permission:

    chown your_user:your_user /usr/share/hue

    Romain


    On Fri, Aug 9, 2013 at 11:56 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    After killing the appropriate processes, I still get the following
    error:

    java.sql.SQLException: Failed to create database 'metastore_db',
    see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown
    Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db
    cannot be created.
    at
    org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db',
    see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown
    Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db
    cannot be created.
    at
    org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0
    seconds...


    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek <abe@cloudera.com>
    wrote:
    Port 8888 is PID 22739. You should be able to execute "ps auxwww"
    or a similar command to find Beeswax and Hue. They should both be running
    as user hue. Then just kill them both.

    -Abe
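
The PID lookup Abe describes can be scripted. A small sketch that parses a netstat-style line (the sample is copied from the output further down in this thread); on a live system you would feed it real `netstat -tlnp` output and then kill the reported PID:

```shell
# Sample LISTEN line copied from the netstat output in this thread.
sample='tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 22739/python2.7'

# Field 4 is the local address, field 7 is PID/program; split off the PID.
pid=$(printf '%s\n' "$sample" | awk '$4 ~ /:8888$/ {split($7, a, "/"); print a[1]}')
echo "$pid"    # → 22739

# On the real machine:
#   sudo netstat -tlnp | grep ':8888 '   # find the owner of port 8888
#   kill "$pid"                          # then restart Hue
```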


    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    also no pid's for 8002 and 8003


    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address Foreign Address
    State PID/Program name
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN 22739/python2.7
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN 30572/java
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN -
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN 30572/java
    tcp6 0 0 :::49932 :::*
    LISTEN -
    tcp6 0 0 :::111 :::*
    LISTEN -
    tcp6 0 0 :::22 :::*
    LISTEN -


    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It definitely looks like they're in use. The command "netstat
    -tlnp" should show you their PIDs and process names. If they're Hue, you
    can simply 'kill' those processes using "kill <pid1> <pid2> ..." and start
    Hue again.


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN
    tcp6 0 0 :::49932 :::*
    LISTEN
    tcp6 0 0 :::111 :::*
    LISTEN
    tcp6 0 0 :::22 :::*
    LISTEN


    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like port 8888 is in use. Maybe 8002 and 8003 are in
    use as well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username
    as you suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    0.5 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    1.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    2.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    4.0 seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at
    port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at
    port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
    Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not
    create ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)



    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    somehow grok'd into Hue's environment.

    -Abe
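
One way to set that environment, sketched under the assumption that Hive 0.10.0 was unpacked under /home/vsingal (the home directory seen later in this thread); the key point is to use absolute paths rather than a literal '~', since the '~' in $HIVE_CONF_DIR above is never expanded, which is why `find ~/hive-0.10.0/lib` failed:

```shell
# Assumed install locations -- adjust to your own layout.
export HIVE_HOME=/home/vsingal/hive-0.10.0
export HIVE_CONF_DIR="$HIVE_HOME/conf"
export HIVE_LIB="$HIVE_HOME/lib"     # the variable beeswax_server.sh expects
export HADOOP_BIN=/usr/local/hadoop/bin/hadoop

# Then start Hue from its root so the variables reach beeswax_server.sh:
#   build/env/bin/supervisor
```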


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    That error is likely caused by: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you
    using a custom installation of Hive? Perhaps the Apache distribution
    rather than the CDH distribution?

    Note: It's probably not a good idea to change the user
    from 'hue' to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user
    in the supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main"
    java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at
    java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native
    Method)
    at
    java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at
    java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not
    entirely sure if that directory is chown'd or if the first user to create
    it takes ownership. Either way, you probably don't have access to that
    directory using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml for an example 'hadoop.tmp.dir' that varies by username.

    -Abe
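
The per-user value Abe points at is the stock core-default.xml form. A sketch of the corresponding core-site.xml property (replacing the fixed /app/hadoop/tmp from the config above), so each user gets a private temp tree instead of competing for one:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories, created per
  user so the first account to start a daemon does not lock out the
  rest.</description>
</property>
```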


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
    <property>
    <name>hadoop.tmp.dir </name>
    <value>/app/hadoop/tmp</value>
    <description> A base for other temporary
    directories </description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
    </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be
    creating a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    File: `/tmp'
    Size: 4096 Blocks: 8 IO Block:
    4096 directory
    Device: fd00h/64768d Inode: 131082 Links: 11
    Access: (1777/drwxrwxrwt) Uid: ( 0/ root)
    Gid: ( 0/ root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
    Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary
    directory in '/tmp/'. What do your permissions on that directory look like?
    What is the output of "stat /tmp/"?


    On Thu, Aug 8, 2013 at 4:52 PM, <
    vsingal5@stanford.edu> wrote:
    I'm getting the following while trying to build
    Hue:















    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it doesn't
    use hue but the user I'm logged in as, which has read and write
    permissions in the Hadoop DFS, Hadoop, Hive, etc. Not sure why it's doing
    this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford
    Class of 2016 | (c) 650.215.3775 | (w)
    https://stanford.edu/~vsingal5
  • Vinamrata Singal at Aug 9, 2013 at 7:33 pm
    What I mean is: I think it's running as hue, since that's the user specified in
    supervisor.py, but I'm logged in as another user (i.e. not hue). Is that
    fine? If so, how do I fix this permission issue?

    On Fri, Aug 9, 2013 at 12:32 PM, Vinamrata Singal wrote:

    Hue is being run as hue, as that is what is specified in supervisor.py

    I am logged in as another user, is that OK?

    And apparently it's trying to create a new metastore_db instead of using
    the one that already exists, as specified in hive-site.xml as well as
    the Hive lib variables?
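
One likely reason for the wandering metastore_db: Hive's default embedded-Derby connection URL uses a relative database name (jdbc:derby:;databaseName=metastore_db;create=true), so Derby creates metastore_db in whatever directory the server happens to start from — /usr/share/hue in the traces above. A hedged sketch of pinning it to an absolute path in hive-site.xml (the /home/vsingal path is only an example drawn from this thread):

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/home/vsingal/metastore_db;create=true</value>
  <description>Absolute databaseName so the metastore DB location does
  not depend on the server's working directory.</description>
</property>
```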

    On Fri, Aug 9, 2013 at 12:29 PM, Romain Rigaux wrote:

    We recommend running Hue as 'hue', as that is the standard. If not, you need
    to chown or chmod all the log and DB directories/files.

    Romain

    On Fri, Aug 9, 2013 at 12:24 PM, Vinamrata Singal wrote:

    Even though I gave permissions to derby.log and where the metastore gets
    created, why am I still getting this error:

    2013-08-09 19:23:15.059 GMT Thread[MetaServerThread,5,main]
    java.io.FileNotFoundException: derby.log (Permission denied)
    2013-08-09 19:23:15.193 GMT Thread[MetaServerThread,5,main] Cleanup
    action starting
    ERROR XBM0H: Directory /home/vsingal/metastore_db cannot be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 12:20 PM, Romain Rigaux wrote:

    Make sure that "ps -ef | grep hue" and "ps -ef | grep beeswax" return nothing.

    Also, out of curiosity, what is the use case for not running Hue as 'hue'?

    Romain


    On Fri, Aug 9, 2013 at 12:16 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    It didn't send this part:

    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)

    On Fri, Aug 9, 2013 at 12:14 PM, Abraham Elmahrek wrote:

    The logs you've just posted mean that beeswaxd is starting :).


    On Fri, Aug 9, 2013 at 12:07 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Done, but I still get a socket bind error, even though when I run netstat
    there is no PID associated with the port:

    13/08/09 12:04:09 INFO beeswax.Server: Starting metastore at port
    8003
    13/08/09 12:04:09 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 11:59 AM, Romain Rigaux wrote:

    Beeswax is trying to create the Hive metastore DB in /usr/share/hue,
    and your custom user does not have permission to write there:

    chown your_user:your_user /usr/share/hue

    Romain


    On Fri, Aug 9, 2013 at 11:56 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    After killing the appropriate processes, I still get the following
    error:

    java.sql.SQLException: Failed to create database 'metastore_db',
    see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown
    Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db
    cannot be created.
    at
    org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db',
    see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown
    Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at
    java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db
    cannot be created.
    at
    org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0
    seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0
    seconds...


    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Port 8888 is PID 22739. You should be able to execute "ps
    auxwww" or a similar command to find Beeswax and Hue. They should both be
    running as user 'hue'. Then just kill them both.

    -Abe
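
    The find-and-kill step can be sketched generically; the version below is
    demonstrated on a throwaway 'sleep' process rather than the real Hue and
    Beeswax daemons, so it is safe to run anywhere:

```shell
# On a real box:
#   ps auxwww | grep -E 'hue|beeswax'   # find the PIDs
#   kill <pid1> <pid2>                  # stop them, then restart Hue
# Safe demonstration with a dummy process:
sleep 300 &                      # stand-in for a stale daemon holding a port
pid=$!
kill "$pid"                      # same default signal (SIGTERM) you would send
wait "$pid" 2>/dev/null || true  # reap it; wait returns non-zero after a kill
```

    Once the stale processes are gone, the ports (8002, 8003, 8888) are freed
    and Hue can bind them again on restart.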


    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Also, no PIDs for 8002 and 8003.


    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address Foreign Address
    State PID/Program name
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN 22739/python2.7
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN 30572/java
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN 30843/java
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN -
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN 30442/java
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN 30312/java
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN -
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN 30697/java
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN 30572/java
    tcp6 0 0 :::49932 :::*
    LISTEN -
    tcp6 0 0 :::111 :::*
    LISTEN -
    tcp6 0 0 :::22 :::*
    LISTEN -


    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It definitely looks like they're in use. The command "netstat
    -tlnp" should show you their PIDs and process names. If they're Hue, you
    can simply 'kill' those processes using "kill <pid1> <pid2> ..." and start
    Hue again.


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp 0 0 0.0.0.0:50060 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:44141 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50030 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:111 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50070 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:22 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:58776 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8888 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50010 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50075 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55035 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:55039 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:60353 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8002 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:8003 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50020 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54310 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:40358 0.0.0.0:*
    LISTEN
    tcp 0 0 127.0.0.1:54311 0.0.0.0:*
    LISTEN
    tcp 0 0 0.0.0.0:50090 0.0.0.0:*
    LISTEN
    tcp6 0 0 :::49932 :::*
    LISTEN
    tcp6 0 0 :::111 :::*
    LISTEN
    tcp6 0 0 :::22 :::*
    LISTEN


    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like port 8888 is in use. Maybe 8002 and 8003 are
    in use as well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the username, as you
    suggested), yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar
    /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar
    --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888
    --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    0.5 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    1.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    2.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after
    4.0 seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore
    at port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at
    port 8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
    Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not
    create ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)



    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Vinamrata,

    I think it will work with the exception of the job browser
    application. It depends on the thrift service code for the job tracker
    packaged in hue-plugins.jar, which only runs on CDH. There have been users
    who've ported that code over to the upstream job tracker in the past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    somehow pulled into Hue's environment.

    -Abe
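
    The environment setup might look like the sketch below before starting
    Hue. The install paths are assumptions matching this thread, and
    apps/beeswax/beeswax_server.sh remains the authoritative list of
    variables:

```shell
# Assumed install locations (adjust to your machine). Use absolute paths:
# the literal '~' in the thread's settings is not expanded here, which is
# likely why find could not locate ~/hive-0.10.0/lib earlier.
export HIVE_HOME=/usr/local/hive-0.10.0
export HIVE_CONF_DIR="$HIVE_HOME/conf"
export HIVE_LIB="$HIVE_HOME/lib"
export HADOOP_CONF_DIR="$HIVE_CONF_DIR:/usr/local/hadoop/conf"
```

    With HIVE_LIB resolvable, the Hive jars (including HiveConf) land on the
    classpath that BeeswaxServer.jar is started with.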


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    That error is likely caused by this earlier one: find:
    `~/hive-0.10.0/lib': No such file or directory. Are you
    using a custom installation of Hive? Perhaps the Apache distribution
    rather than the CDH distribution?

    Note: It's probably not a good idea to change the user
    from 'hue' to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the user
    in the supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main"
    java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException:
    org.apache.hadoop.hive.conf.HiveConf
    at
    java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at
    java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native
    Method)
    at
    java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at
    java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not
    entirely sure if that directory is chown'd or if the first user to create
    it takes ownership. Either way, you probably don't have access to that
    directory using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml for an example 'hadoop.tmp.dir' that varies by username.

    -Abe
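
    For reference, the per-user form from core-default.xml looks like this
    (Hadoop substitutes ${user.name} at runtime, so each user gets a private
    temp directory):

    ```
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/tmp/hadoop-${user.name}</value>
      <description>A base for other temporary directories.</description>
    </property>
    ```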


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop/tmp</value>
        <description>A base for other temporary directories</description>
      </property>

      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:54310</value>
      </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should be
    creating a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
      File: `/tmp'
      Size: 4096        Blocks: 8          IO Block: 4096   directory
    Device: fd00h/64768d    Inode: 131082     Links: 11
    Access: (1777/drwxrwxrwt)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
     Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary
    directory in '/tmp/'. What do your permissions on that directory look like?
    What is the output of "stat /tmp/"?


    On Thu, Aug 8, 2013 at 4:52 PM, <
    vsingal5@stanford.edu> wrote:
    I'm getting the following while trying to build
    Hue:

    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it doesn't
    use hue but the user that I'm logged in as, which has read and write
    permissions in the Hadoop DFS, Hadoop, Hive, etc. Not sure why it's doing
    this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford
    Class of 2016 | (c) 650.215.3775 | (w)
    https://stanford.edu/~vsingal5

  • Romain Rigaux at Aug 9, 2013 at 9:30 pm
    Yes, you can log in as anybody with a Unix account.

    Make sure that hive-server is not running already and chmod 777
    /usr/share/hue/metastore_db

    Also FYI:
    http://gethue.tumblr.com/post/56804308712/hadoop-tutorial-how-to-access-hive-in-pig-with

    sudo rm /var/lib/hive/metastore/metastore_db/*lck
    sudo chmod 777 -R /var/lib/hive/metastore/metastore_db


    Romain
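
    The cleanup above can be rehearsed safely by parameterizing the path; a
    sketch (the metastore directory varies by install, so it is passed in):

    ```shell
    # Remove stale Derby lock files (db.lck, dbex.lck) and open up permissions,
    # mirroring the rm/chmod advice above; consider a tighter mode in production.
    reset_metastore_locks() {
      dir="$1"
      rm -f "$dir"/*.lck "$dir"/*lck
      chmod -R 777 "$dir"
    }
    ```

    For example: reset_metastore_locks /var/lib/hive/metastore/metastore_db
    (the path from the commands above).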

    On Fri, Aug 9, 2013 at 12:32 PM, Vinamrata Singal wrote:

    What I mean is: I think it's running as hue, since that's the user specified
    in supervisor.py, but I'm logged in as another user (i.e. not hue). Is that
    fine? If so, how do I fix this permission issue?

    On Fri, Aug 9, 2013 at 12:32 PM, Vinamrata Singal wrote:

    Hue is being run as hue, as that is what is specified in supervisor.py

    I am logged in as another user, is that OK?

    Also, it's apparently trying to create a new metastore_db instead of using
    the one that's already there, as specified in hive-site.xml and the Hive
    lib variables?

    On Fri, Aug 9, 2013 at 12:29 PM, Romain Rigaux wrote:

    We recommend running Hue as 'hue', as that is the standard. If not, you need
    to chown or chmod all the log and DB directories/files.

    Romain


    On Fri, Aug 9, 2013 at 12:24 PM, Vinamrata Singal <vsingal5@stanford.edu> wrote:
    Even though I gave permissions to derby.log and to the directory where the
    metastore gets created, why am I still getting this error:

    2013-08-09 19:23:15.059 GMT Thread[MetaServerThread,5,main]
    java.io.FileNotFoundException: derby.log (Permission denied)
    2013-08-09 19:23:15.193 GMT Thread[MetaServerThread,5,main] Cleanup
    action starting
    ERROR XBM0H: Directory /home/vsingal/metastore_db cannot be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)


    On Fri, Aug 9, 2013 at 12:20 PM, Romain Rigaux wrote:

    Make sure that ps -ef | grep hue and ps -ef | grep beeswax both return nothing.

    Also curious, what is the use case of not running Hue as 'hue'?

    Romain
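
    A small sketch of that check, as a function that filters ps-style output
    (the sample line is hypothetical; on a live system feed it `ps -ef`):

    ```shell
    # Print the PID column of any lingering Hue/Beeswax processes.
    # The !/awk/ guard keeps the filter itself out of live ps output.
    list_stray_pids() {
      printf '%s\n' "$1" | awk '/beeswax|hue/ && !/awk/ { print $2 }'
    }

    sample='hue   4242     1  0 12:00 ?  00:00:01 java -jar BeeswaxServer.jar
    root   100     1  0 11:00 ?  00:00:00 sshd'
    list_stray_pids "$sample"   # prints: 4242
    ```

    Any PIDs it prints can then be stopped (kill, then kill -9 if necessary)
    before restarting Hue.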


    On Fri, Aug 9, 2013 at 12:16 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    It didn't send this part:

    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)

    On Fri, Aug 9, 2013 at 12:14 PM, Abraham Elmahrek wrote:

    The logs you've just posted mean that beeswaxd is starting :).


    On Fri, Aug 9, 2013 at 12:07 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Done, but I still get a bind-to-socket error, and when I run
    netstat there is no PID associated with the port:

    13/08/09 12:04:09 INFO beeswax.Server: Starting metastore at port
    8003
    13/08/09 12:04:09 INFO beeswax.Server: Starting beeswaxd at port
    8002
    Exception in thread "main"
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create
    ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at
    org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
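
    One caveat on netstat: without root, it often shows "-" instead of the
    owning PID for another user's process, so run it with sudo (sudo netstat
    -tlnp) or use lsof -i :8002. A sketch of pulling the PID out of that
    output (the sample line is hypothetical):

    ```shell
    # Print the PID owning a given local port from `netstat -tlnp`-style text.
    # $1: port number; $2: netstat output. The PID/program column is $NF.
    pid_on_port() {
      printf '%s\n' "$2" | awk -v p=":$1" '$4 ~ p"$" { split($NF, a, "/"); print a[1] }'
    }

    sample='tcp  0  0 0.0.0.0:8002  0.0.0.0:*  LISTEN  6211/java'
    pid_on_port 8002 "$sample"   # prints: 6211
    ```

    An empty result with the bind error still occurring usually means the old
    process belongs to another user (run the check with sudo) or the socket is
    lingering in TIME_WAIT.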



    On Fri, Aug 9, 2013 at 11:59 AM, Romain Rigaux <romain@cloudera.com> wrote:
    Beeswax is trying to create the Hive metastore DB on
    /usr/share/hue and your custom user does not have the permissions:

    chown your_user:your_user /usr/share/hue

    Romain


    On Fri, Aug 9, 2013 at 11:56 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    After killing the appropriate processes, I still get the
    following error:

    java.sql.SQLException: Failed to create database 'metastore_db',
    see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown
    Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory
    /usr/share/hue/metastore_db cannot be created.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db
    cannot be created.
    at
    org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
    Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at
    org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
    Source)
    at
    org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
    Source)
    at
    org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
    Source)
    ... 51 more
    Exception in thread "MetaServerThread"
    javax.jdo.JDOFatalDataStoreException: Failed to create database
    'metastore_db', see the next exception for details.
    at
    org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at
    javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at
    javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at
    org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at
    org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at
    org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at
    org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at
    org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.SQLException: Failed to create database 'metastore_db',
    see the next exception for details.
    at
    org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
    Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown
    Source)
    at
    org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown
    Source)
    at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown
    Source)
    at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
    Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown
    Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at
    org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:75)
    at
    org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
    at
    org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
    at
    org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
    at
    org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
    at
    org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at
    sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at
    java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at
    org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
    at
    org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
    at
    org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
    at
    org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
    at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:262)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:291)
    at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:224)
    at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:199)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
    at org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:279)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:349)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)
    Caused by: java.sql.SQLException: Failed to create database 'metastore_db', see the next exception for details.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
    ... 54 more
    Caused by: java.sql.SQLException: Directory /usr/share/hue/metastore_db cannot be created.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
    at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
    ... 51 more
    Caused by: ERROR XBM0H: Directory /usr/share/hue/metastore_db cannot be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown Source)
    at org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown Source)
    ... 51 more
    (31713) socket 0.0.0.0:8888 already in use, retrying after 4.0 seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 8.0 seconds...
    (31713) socket 0.0.0.0:8888 already in use, retrying after 16.0 seconds...


    On Fri, Aug 9, 2013 at 11:51 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Port 8888 is PID 22739. You should be able to execute "ps
    auxwww" or a similar command to find Beeswax and Hue. They should both be
    running as user hue. Then just kill them both.
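    For example (a sketch; the grep patterns are guesses at typical Hue/Beeswax process names and may need adjusting for your install):

```shell
# List candidate Hue/Beeswax processes; both normally run as the 'hue' user.
# The bracketed first letter keeps grep from matching its own command line.
ps auxwww | grep -E '[b]eeswax|[r]uncherrypyserver|[s]upervisor' || true
# The second column of each matching line is the PID; stop them with e.g.:
#   kill <pid1> <pid2>
```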

    -Abe


    On Fri, Aug 9, 2013 at 11:45 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    also no pid's for 8002 and 8003


    On Fri, Aug 9, 2013 at 11:44 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have the following output, can't find port 8888

    Proto Recv-Q Send-Q Local Address     Foreign Address  State   PID/Program name
    tcp   0      0      0.0.0.0:50060     0.0.0.0:*        LISTEN  30843/java
    tcp   0      0      0.0.0.0:44141     0.0.0.0:*        LISTEN  30312/java
    tcp   0      0      0.0.0.0:50030     0.0.0.0:*        LISTEN  30697/java
    tcp   0      0      0.0.0.0:111       0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:50070     0.0.0.0:*        LISTEN  30312/java
    tcp   0      0      0.0.0.0:22        0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:58776     0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      0.0.0.0:8888      0.0.0.0:*        LISTEN  22739/python2.7
    tcp   0      0      0.0.0.0:50010     0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      0.0.0.0:50075     0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      0.0.0.0:55035     0.0.0.0:*        LISTEN  30572/java
    tcp   0      0      0.0.0.0:55039     0.0.0.0:*        LISTEN  30697/java
    tcp   0      0      127.0.0.1:60353   0.0.0.0:*        LISTEN  30843/java
    tcp   0      0      0.0.0.0:8002      0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:8003      0.0.0.0:*        LISTEN  -
    tcp   0      0      0.0.0.0:50020     0.0.0.0:*        LISTEN  30442/java
    tcp   0      0      127.0.0.1:54310   0.0.0.0:*        LISTEN  30312/java
    tcp   0      0      0.0.0.0:40358     0.0.0.0:*        LISTEN  -
    tcp   0      0      127.0.0.1:54311   0.0.0.0:*        LISTEN  30697/java
    tcp   0      0      0.0.0.0:50090     0.0.0.0:*        LISTEN  30572/java
    tcp6  0      0      :::49932          :::*             LISTEN  -
    tcp6  0      0      :::111            :::*             LISTEN  -
    tcp6  0      0      :::22             :::*             LISTEN  -


    On Fri, Aug 9, 2013 at 11:27 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It definitely looks like they're in use. The command "netstat
    -tlnp" should show you their PIDs and process names. If they're Hue, you
    can simply 'kill' those processes using "kill <pid1> <pid2> ..." and start
    Hue again.
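    As a sketch, the PID can be pulled out of the `netstat -tlnp` output with awk (run as root, or as the owning user, so the PID column is populated):

```shell
# Print the PID of whatever is listening on port 8888.
# $4 is the "Local Address" column; $NF is the "PID/Program name" column.
netstat -tlnp 2>/dev/null | awk '$4 ~ /:8888$/ {split($NF, p, "/"); print p[1]}'
# If this prints a PID, stop the process with:  kill <pid>
```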


    On Fri, Aug 9, 2013 at 11:22 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    tcp   0      0      0.0.0.0:50060     0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:44141     0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:50030     0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:111       0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:50070     0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:22        0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:58776     0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:8888      0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:50010     0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:50075     0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:55035     0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:55039     0.0.0.0:*        LISTEN
    tcp   0      0      127.0.0.1:60353   0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:8002      0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:8003      0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:50020     0.0.0.0:*        LISTEN
    tcp   0      0      127.0.0.1:54310   0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:40358     0.0.0.0:*        LISTEN
    tcp   0      0      127.0.0.1:54311   0.0.0.0:*        LISTEN
    tcp   0      0      0.0.0.0:50090     0.0.0.0:*        LISTEN
    tcp6  0      0      :::49932          :::*             LISTEN
    tcp6  0      0      :::111            :::*             LISTEN
    tcp6  0      0      :::22             :::*             LISTEN


    On Fri, Aug 9, 2013 at 11:20 AM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like port 8888 is in use. Maybe 8002 and 8003 are
    in use as well. Could you provide the output of 'netstat -tln'?

    -Abe


    On Fri, Aug 9, 2013 at 11:16 AM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    Abe,

    Thanks for all your help and quick replies.

    So I went ahead and fixed everything (including the
    username as you suggested) yet now I'm getting a different error:

    Executing /usr/local/hadoop/bin/hadoop jar /usr/share/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    (31443) socket 0.0.0.0:8888 already in use, retrying after 0.5 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 1.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 2.0 seconds...
    (31443) socket 0.0.0.0:8888 already in use, retrying after 4.0 seconds...
    13/08/09 11:15:37 INFO beeswax.Server: Starting metastore at port 8003
    13/08/09 11:15:37 INFO beeswax.Server: Starting beeswaxd at port 8002
    Exception in thread "main" org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:8002.
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveBeeswax(Server.java:250)
    at com.cloudera.beeswax.Server.main(Server.java:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
    org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:8003.
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:68)
    at com.cloudera.beeswax.Server.serveMeta(Server.java:348)
    at com.cloudera.beeswax.Server$1.run(Server.java:200)
    at java.lang.Thread.run(Thread.java:724)



    On Thu, Aug 8, 2013 at 5:53 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Vinamrata,

    I think it will work with the exception of the job
    browser application. It depends on the thrift service code for the job
    tracker packaged in hue-plugins.jar, which only runs on CDH. There have
    been users who've ported that code over to the upstream job tracker in the
    past though.

    To solve the rest of your issues, take a look at
    apps/beeswax/beeswax_server.sh. There are environment variables that you
    need to set when you start Hue. In particular, "HIVE_LIB" should be set or
    somehow grok'd into Hue's environment.
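    A sketch of the variables involved, assuming an Apache Hive 0.10.0 tarball under the home directory (paths are illustrative; adjust to your layout). Note the log earlier shows a literal `~/hive-0.10.0` that was never expanded, which is why `find` could not see the lib directory; use `$HOME` or an absolute path instead:

```shell
# Set these before launching Hue so beeswax_server.sh can build its classpath.
export HIVE_HOME="$HOME/hive-0.10.0"    # absolute path, not '~/hive-0.10.0'
export HIVE_CONF_DIR="$HIVE_HOME/conf"
export HIVE_LIB="$HIVE_HOME/lib"        # directory containing the Hive jars
```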

    -Abe


    On Thu, Aug 8, 2013 at 5:43 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I have an Apache installation. Is that an issue for Hue?

    Got it, will look into the hue user installation issues.


    On Thu, Aug 8, 2013 at 5:25 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    That error is likely because of the error: find:
    `~/hive-0.10.0/lib': No such file or directory. Are
    you using a custom installation of Hive? Perhaps the Apache distribution
    rather than the CDH distribution?

    Note: It's probably not a good idea to change the user
    from 'hue' to something else.


    On Thu, Aug 8, 2013 at 5:20 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    I actually think I fixed the issue by changing the
    user in the supervisor.py script to be the user that owns that directory.

    Question: now I'm getting this error:

    Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
    Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)


    On Thu, Aug 8, 2013 at 5:19 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Your 'hadoop.tmp.dir' doesn't vary per user. I'm not
    entirely sure if that directory is chown'd or if the first user to create
    it takes ownership. Either way, you probably don't have access to that
    directory using the user starting Beeswaxd (hue). Take a look at
    http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml for an example 'hadoop.tmp.dir' that varies by username.
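    For illustration, a per-user variant could look like this in core-site.xml (a sketch, reusing this thread's /app/hadoop/tmp base path; the stock default in core-default.xml uses /tmp/hadoop-${user.name}):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <!-- ${user.name} expands per user, so 'hue' gets its own writable dir -->
  <value>/app/hadoop/tmp/${user.name}</value>
</property>
```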

    -Abe


    On Thu, Aug 8, 2013 at 5:15 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop/tmp</value>
        <description>A base for other temporary directories</description>
      </property>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:54310</value>
      </property>
    </configuration>


    On Thu, Aug 8, 2013 at 5:11 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    Could you provide your core-site.xml? This should
    be creating a file in 'hadoop.tmp.dir'.

    -Abe


    On Thu, Aug 8, 2013 at 5:03 PM, Vinamrata Singal <
    vsingal5@stanford.edu> wrote:
    File: `/tmp'
    Size: 4096          Blocks: 8        IO Block: 4096   directory
    Device: fd00h/64768d  Inode: 131082  Links: 11
    Access: (1777/drwxrwxrwt)  Uid: ( 0/ root)  Gid: ( 0/ root)
    Access: 2013-08-08 22:47:49.435308999 +0000
    Modify: 2013-08-09 00:01:17.251309001 +0000
    Change: 2013-08-09 00:01:17.251309001 +0000
    Birth: -



    On Thu, Aug 8, 2013 at 5:00 PM, Abraham Elmahrek <
    abe@cloudera.com> wrote:
    It looks like Beeswaxd cannot create a temporary
    directory in '/tmp/'. What do your permissions on that directory look like?
    What is the output of "stat /tmp/"?


    On Thu, Aug 8, 2013 at 4:52 PM, <
    vsingal5@stanford.edu> wrote:
    I'm getting the following while trying to build
    Hue:

    (6211) *** Controller starting at Thu Aug 8 11:29:50 2013
    Should start 1 new children
    Controller.spawn_children(number=1)
    $HADOOP_HOME=
    $HADOOP_BIN=/usr/local/hadoop/bin/hadoop
    $HIVE_CONF_DIR=~/hive-0.10.0/conf
    $HIVE_HOME=~/hive-0.10.0
    find: `~/hive-0.10.0/lib': No such file or directory
    $HADOOP_CLASSPATH=:
    $HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
    $HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
    $HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    CWD=/usr/local/hue/desktop/conf
    Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
    Exception in thread "main" java.io.IOException: Permission denied
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createTempFile(File.java:1879)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:119)

    I've changed the configuration file so it
    doesn't use hue but the user that I'm logged in as which has read and write
    permissions in the hadoop dfs, hadoop, hive, etc. Not sure why it's doing
    this...

    --
    Best,
    Vinamrata Singal | BS Computer Science | Stanford
    Class of 2016 | (c) 650.215.3775 | (w)
    https://stanford.edu/~vsingal5


Discussion Overview
group: hue-user
categories: hadoop
posted: Aug 8, '13 at 11:52p
active: Aug 9, '13 at 9:30p
posts: 28
users: 3
website: cloudera.com
irc: #hadoop