FAQ
Is it possible to configure CM so that it shows 50 entries by
default? This setting does not seem to "stick". I'd rather scroll a
bit further down the hosts/instances pages than page through results
or switch to 50 entries every time. (Very tedious if you have just a
few more than 25 hosts/roles.)

Thanks in advance.

  • Andrew Yao at Jun 21, 2012 at 2:18 pm
    Hi Ferdy,

    Sorry, there isn't a proper way to make this value stick in CM today.

    I will see if we can address this in the next minor release.

    https://jira.cloudera.com/browse/OPSAPS-8072

    thanks,

    Andrew

  • Alex Soto at Jun 21, 2012 at 4:53 pm
    I've installed CM 4.01 on a single server (for testing purposes), but HBase does not work.

    I have changed the HDFS replication factor to 1.
    Errors in the log files complain that files could only be replicated to 0 nodes instead of 1.

    Has anybody tested with a single node?
    Is there a setting I need to change to make this work?

    Regards,
    Alex
    --
    Amicus Plato, sed magis amica veritas.
  • bc Wong at Jun 21, 2012 at 4:57 pm

    On Thu, Jun 21, 2012 at 9:52 AM, Alex Soto wrote:

    I've installed CM 4.01 on a single server (for testing purposes), but HBase
    does not work.

    I have changed the HDFS replication factor to 1.
    Errors in the log files complain that files could only be replicated to 0
    nodes instead of 1.

    Has anybody tested with a single node?
    Is there a setting I need to change to make this work?

    Is your datanode running? Do you see errors in the DN log?
    --
    bc Wong
    Cloudera Software Engineer
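
    A quick way to answer both questions is to ask the NameNode what it sees
    (the same numbers that hadoop dfsadmin -report prints). The following is a
    minimal Java sketch, not from the thread, that assumes the cluster's client
    configuration (core-site.xml / hdfs-site.xml) is on the classpath:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.hdfs.DistributedFileSystem;
        import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

        public class ListDataNodes {
            public static void main(String[] args) throws Exception {
                // Picks up the HDFS address from the client configuration.
                Configuration conf = new Configuration();
                FileSystem fs = FileSystem.get(conf);
                if (!(fs instanceof DistributedFileSystem)) {
                    System.err.println("Not talking to HDFS: " + fs.getUri());
                    return;
                }
                // Ask the NameNode which DataNodes have registered and what they report.
                DatanodeInfo[] nodes = ((DistributedFileSystem) fs).getDataNodeStats();
                System.out.println("Registered DataNodes: " + nodes.length);
                for (DatanodeInfo dn : nodes) {
                    System.out.printf("%s capacity=%d MB remaining=%d MB%n",
                            dn.getHostName(),
                            dn.getCapacity() / (1024 * 1024),
                            dn.getRemaining() / (1024 * 1024));
                }
            }
        }

    If no DataNodes show up, or the remaining space is 0, new blocks cannot be
    placed anywhere and writes fail with the "replicated to 0 nodes" error.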
  • Mark Schnegelberger at Jun 21, 2012 at 5:13 pm
    Any chance this pseudo cluster is on a virtual machine, Alex? If so, how
    much hard drive space did you allocate to the VM?
    --
    Mark Schnegelberger
    Customer Operations Engineer
    Cloudera
  • Alex Soto at Jun 21, 2012 at 5:21 pm
    The data node is running fine; there are no errors in its log.
    Yes, this is a VM, and there is plenty of space available.

    I have changed the replication factor to 0, but the problem persists (after restarting the services).
    This is a brand-new installation, so there is practically no data. I can't even open the HBase Web UI because of this error.

    Regards,
    Alex
    --
    Amicus Plato, sed magis amica veritas.
  • Brian Burton at Jun 21, 2012 at 5:26 pm
    Now, when you say plenty of space… the default setting for reserved space may
    be larger than your VM's disk. Check “Reserved Space for Non DFS Use” in the
    DataNode settings in CM (Services > HDFS > Configuration > DataNode). Make
    sure that this is set to a much smaller value than the size of your VM disk.

    Thank you,

    Brian Burton
    Customer Operations Engineer, Cloudera
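
    To make the failure mode concrete: the DataNode advertises to the NameNode
    roughly its disk capacity minus the reserved value minus the space DFS is
    already using (the CM setting corresponds to the HDFS property
    dfs.datanode.du.reserved). If the reserved value meets or exceeds the disk
    size, the node advertises no usable space, so the NameNode has nowhere to
    put even a single replica and writes fail with "could only be replicated to
    0 nodes instead of 1". A small sketch of that arithmetic, with illustrative
    numbers:

        public class DfsAvailableSketch {
            // Rough model of the space a DataNode advertises to the NameNode:
            // disk capacity minus non-DFS reserved space minus DFS space in use.
            static long available(long diskCapacity, long reserved, long dfsUsed) {
                return Math.max(0, diskCapacity - reserved - dfsUsed);
            }

            public static void main(String[] args) {
                long GB = 1024L * 1024 * 1024;
                // A 9 GB virtual disk with ~10 GB reserved for non-DFS use leaves
                // the DataNode advertising 0 bytes, so no replica can be placed.
                System.out.println(available(9 * GB, 10 * GB, 0) / GB); // 0
                // Lowering the reserved value below the disk size frees capacity.
                System.out.println(available(9 * GB, 5 * GB, 0) / GB);  // 4
            }
        }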
  • Alex Soto at Jun 21, 2012 at 5:59 pm
    That was it!

    I assumed that 9G was enough, but the "Reserved Space for Non DFS Use" configuration was somewhere around 10G. I changed it to 5G and it started working right away. Too bad the error message is not more explicit.

    It would be nice, though, if CM would at least give me a warning, or better yet, automatically adjust this parameter to fit the available space on the machine.

    Thanks for the help!

    Regards,
    Alex
    --
    Amicus Plato, sed magis amica veritas.
  • Vinithra Varadharajan at Jun 21, 2012 at 6:33 pm
    Alex,
    On Thu, Jun 21, 2012 at 10:59 AM, Alex Soto wrote:


    That was it!

    I assumed that 9G was enough, but the "Reserved Space for Non DFS Use"
    configuration was somewhere around 10G. I changed it to 5G and it started
    working right away. Too bad the error message is not more explicit.

    It would be nice, though, if CM would at least give me a warning, or
    better yet, automatically adjust this parameter to fit the available
    space on the machine.

    This will be addressed in the next release of CM.

  • bc Wong at Jun 21, 2012 at 5:38 pm

    On Thu, Jun 21, 2012 at 10:21 AM, Alex Soto wrote:

    The data node is running fine; there are no errors in its log.
    Yes, this is a VM, and there is plenty of space available.

    I have changed the replication factor to 0, but the problem persists (after
    restarting the services).
    This is a brand-new installation, so there is practically no data. I can't
    even open the HBase Web UI because of this error.

    The "replicated to 0 nodes instead of 1" error usually indicates HDFS-level
    errors. Can you create a file in HDFS, and read it back?

    Cheers,
    bc


    --
    bc Wong
    Cloudera Software Engineer
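
    For the "create a file in HDFS, and read it back" check, putting and
    catting any small file with hadoop fs -put and hadoop fs -cat is enough;
    the same test through the Hadoop FileSystem API is sketched below (the
    path and contents are arbitrary):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataInputStream;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class HdfsSmokeTest {
            public static void main(String[] args) throws Exception {
                // Reads the HDFS address from core-site.xml / hdfs-site.xml.
                Configuration conf = new Configuration();
                FileSystem fs = FileSystem.get(conf);
                Path p = new Path("/tmp/hdfs-smoke-test.txt"); // arbitrary test path

                // Writing is where "replicated to 0 nodes" surfaces if no
                // DataNode has usable space.
                FSDataOutputStream out = fs.create(p, true);
                out.writeUTF("hello hdfs");
                out.close();

                // Read it back to confirm the write really landed on a DataNode.
                FSDataInputStream in = fs.open(p);
                System.out.println(in.readUTF());
                in.close();

                fs.delete(p, false); // clean up
            }
        }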

Discussion Overview
group: cm-users
categories: hadoop
posted: Jun 21, '12 at 9:50a
active: Jun 21, '12 at 6:33p
posts: 10
users: 7
website: cloudera.com
irc: #hadoop
