FAQ
Hi

I tried the hdfsEnableNnHa API, but it returns an Internal Server Error.

2014-04-22 19:53:48,193 INFO
[1421807247@scm-web-354:cmf.AuthenticationSuccessEventListener@32]
Authentication success for user: admin
2014-04-22 19:53:48,194 DEBUG
[1421807247@scm-web-354:api.LoggingInInterceptor@50] API request:
---------- id: 414
POST /api/v6/clusters/Test/services/hdfs1/commands/hdfsEnableNnHa
Encoding: UTF-8
Authentication: admin [AUTH_LIMITED, ROLE_ADMIN, ROLE_USER]
Headers:
Accept=[*/*]
accept-encoding=[gzip;q=1.0,deflate;q=0.6,identity;q=0.3]
Authorization=[Basic YWRtaW46YWRtaW4=]
Content-Length=[492]
content-type=[application/json]
Host=[10.211.55.100:7180]
User-Agent=[Ruby]
Body:

{"activeNnName":"hdfs1-nn1","standbyNnName":"hdfs1-nn2","standbyNnHostId":"cdh2","nameservice":"test","qjName":"journalhdfs1","activeFcName":"hdfs1-fc1","standbyFcName":"hdfs1-fc2","zkServiceName":"zk1","jns":[{"jnName":"hdfs1-jn1","jnHostId":"cdh1","jnEditsDir":"/dfs/jn"},{"jnName":"hdfs1-jn2","jnHostId":"cdh2","jnEditsDir":"/dfs/jn"},{"jnName":"hdfs1-jn3","jnHostId":"cdh3","jnEditsDir":"/dfs/jn"}],"forceInitZNode":true,"clearExistingStandbyNameDirs":true,"clearExistingJnEditsDir":true}
2014-04-22 19:53:48,644 INFO
[ProcessStalenessDetector-0:components.ProcessStalenessDetector@227]
Staleness check done. Duration: PT0.456S
2014-04-22 19:53:49,104 INFO
[1421807247@scm-web-354:service.ServiceHandlerRegistry@738] Executing
service command EnableNNHA EnableNNHACmdArgs{targetRoles=[], args=[]}.
Service: DbService{id=376, name=hdfs1}
2014-04-22 19:53:52,971 INFO [JvmPauseMonitor:debug.JvmPauseMonitor@236]
Detected pause in JVM or host machine (e.g. a stop the world GC, or JVM not
scheduled): paused approximately 3501ms: GC pool 'PS Scavenge' had
collection(s): count=1 time=3517ms
2014-04-22 19:53:52,971 INFO [JvmPauseMonitor:debug.JvmPauseMonitor@236]
Detected pause in JVM or host machine (e.g. a stop the world GC, or JVM not
scheduled): paused approximately 3503ms: GC pool 'PS Scavenge' had
collection(s): count=1 time=3517ms
2014-04-22 19:53:53,186 INFO
[SearchRepositoryManager-0:components.SearchRepositoryManager@407] Num
entities:1195
2014-04-22 19:53:53,186 INFO
[SearchRepositoryManager-0:components.SearchRepositoryManager@409]
Generating documents:2014-04-22T10:53:53.186Z
2014-04-22 19:53:53,247 INFO
[SearchRepositoryManager-0:components.SearchRepositoryManager@411] Num
docs:1153
2014-04-22 19:53:53,247 INFO
[SearchRepositoryManager-0:components.SearchRepositoryManager@352]
Constructing repo:2014-04-22T10:53:53.247Z
2014-04-22 19:53:53,413 WARN
[1421807247@scm-web-354:api.ApiExceptionMapper@150] Unexpected exception.
java.lang.NullPointerException
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:191)
at com.cloudera.cmf.command.flow.work.DeleteRoleCmdWork.of(DeleteRoleCmdWork.java:51)
at com.cloudera.cmf.service.hdfs.EnableNNHACommand.constructWork(EnableNNHACommand.java:308)
at com.cloudera.cmf.service.hdfs.EnableNNHACommand.constructWork(EnableNNHACommand.java:49)
at com.cloudera.cmf.command.CmdWorkCommand.execute(CmdWorkCommand.java:52)
at com.cloudera.cmf.service.ServiceHandlerRegistry.executeCommandHelper(ServiceHandlerRegistry.java:740)
at com.cloudera.cmf.service.ServiceHandlerRegistry.executeCommand(ServiceHandlerRegistry.java:705)
at com.cloudera.cmf.service.ServiceHandlerRegistry.executeCommand(ServiceHandlerRegistry.java:700)
at com.cloudera.server.cmf.components.OperationsManagerImpl.executeServiceCmd(OperationsManagerImpl.java:1480)
at com.cloudera.api.dao.impl.CommandManagerDaoImpl.issueHdfsEnableNnHaCommand(CommandManagerDaoImpl.java:319)
at sun.reflect.GeneratedMethodAccessor1411.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.cloudera.api.dao.impl.ManagerDaoBase.runInNewTransaction(ManagerDaoBase.java:158)
at com.cloudera.api.dao.impl.ManagerDaoBase.invoke(ManagerDaoBase.java:203)
at com.sun.proxy.$Proxy112.issueHdfsEnableNnHaCommand(Unknown Source)
at com.cloudera.api.v6.impl.ServicesResourceV6Impl.hdfsEnableNnHaCommand(ServicesResourceV6Impl.java:239)
at sun.reflect.GeneratedMethodAccessor1410.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:180)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:194)
at com.cloudera.api.ApiInvoker.invoke(ApiInvoker.java:104)
...
2014-04-22 19:53:53,424 DEBUG [1421807247@scm-web-354:api.ApiInvoker@124]
API Error 500
[/api/v6/clusters/Test/services/hdfs1/commands/hdfsEnableNnHa]:
ApiErrorMessage{null}
2014-04-22 19:53:53,472 DEBUG [1421807247@scm-web-354:api.ApiInvoker@124]
API Error 500
[/api/v6/clusters/Test/services/hdfs1/commands/hdfsEnableNnHa]:
ApiErrorMessage{null}
2014-04-22 19:53:53,473 DEBUG [1421807247@scm-web-354:api.ApiInvoker@124]
API Error 500
[/api/v6/clusters/Test/services/hdfs1/commands/hdfsEnableNnHa]:
ApiErrorMessage{null}
2014-04-22 19:53:53,473 DEBUG [1421807247@scm-web-354:api.ApiInvoker@124]
API Error 500
[/api/v6/clusters/Test/services/hdfs1/commands/hdfsEnableNnHa]:
ApiErrorMessage{null}
2014-04-22 19:53:53,474 DEBUG
[1421807247@scm-web-354:api.LoggingOutInterceptor@101] API response:
---------- id: 414
Response code: 500
Content-Type: application/json
Headers:
Date=[Tue, 22 Apr 2014 10:53:53 GMT]
Body:
{ }
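
The request above was sent from a Ruby client; a minimal equivalent sketch, shown here in Python with the requests library and using the host and admin credentials visible in the log, would be:

import json
import requests

CM_HOST = "http://10.211.55.100:7180"   # Cloudera Manager host, as in the log above
AUTH = ("admin", "admin")                # admin credentials, as in the log above

args = {
    "activeNnName": "hdfs1-nn1",
    "standbyNnName": "hdfs1-nn2",
    "standbyNnHostId": "cdh2",
    "nameservice": "test",
    "qjName": "journalhdfs1",
    "activeFcName": "hdfs1-fc1",
    "standbyFcName": "hdfs1-fc2",
    "zkServiceName": "zk1",
    "jns": [
        {"jnName": "hdfs1-jn1", "jnHostId": "cdh1", "jnEditsDir": "/dfs/jn"},
        {"jnName": "hdfs1-jn2", "jnHostId": "cdh2", "jnEditsDir": "/dfs/jn"},
        {"jnName": "hdfs1-jn3", "jnHostId": "cdh3", "jnEditsDir": "/dfs/jn"},
    ],
    "forceInitZNode": True,
    "clearExistingStandbyNameDirs": True,
    "clearExistingJnEditsDir": True,
}

# POST the Enable NameNode HA command to the hdfs1 service of cluster "Test".
resp = requests.post(
    CM_HOST + "/api/v6/clusters/Test/services/hdfs1/commands/hdfsEnableNnHa",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
    data=json.dumps(args),
)
print(resp.status_code, resp.text)   # 500 with an empty body in the failing run above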

I think this API will delete the SecondaryNameNode.

The documentation says:

The SecondaryNameNode associated with the Active NameNode will be deleted.

But I didn't create a SecondaryNameNode.

Can I build an HDFS-HA cluster using this API?
Should I use the v5 API instead?

thx


  • Vikram Bajaj at Apr 22, 2014 at 5:56 pm
    I had the same issue; I solved it by creating the SecondaryNameNode and
    formatting the primary NameNode before calling the HA enable API.

    The documentation is a bit fuzzy on this: one document states that you don't
    need to create a SecondaryNameNode, and another says you do. When I wrote my
    scripts for CDH4 this was not a requirement, but in CDH5 it is.

    The error handling is a problem too; we shouldn't be seeing an NPE :)
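
    A rough sketch of that preparation sequence against the Cloudera Manager v6
    REST API (Python with the requests library; the SNN role name, its host, and
    the hdfsFormat command name are assumptions, so check the API docs for your
    release):

    import requests

    CM = "http://10.211.55.100:7180/api/v6/clusters/Test/services/hdfs1"
    AUTH = ("admin", "admin")

    # 1. Create a SecondaryNameNode role (role name and host here are hypothetical).
    requests.post(CM + "/roles", auth=AUTH, json={
        "items": [{"name": "hdfs1-snn1", "type": "SECONDARYNAMENODE",
                   "hostRef": {"hostId": "cdh1"}}]})

    # 2. Format the primary NameNode via a role command (command name assumed to be
    #    "hdfsFormat"; verify against the API docs for your CM release).
    requests.post(CM + "/roleCommands/hdfsFormat", auth=AUTH,
                  json={"items": ["hdfs1-nn1"]})

    # 3. Only then POST .../commands/hdfsEnableNnHa with the arguments shown above.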

    Regards,
    Vikram

  • Vikram Srivastava at Apr 22, 2014 at 6:32 pm
    We've created a new hdfsEnableNnHa API endpoint which takes care of deleting
    the SNN when you enable HA. It also assumes that the SNN is present. The old
    hdfsEnableHa endpoint, which required the SNN to be deleted first, is now
    deprecated and will be removed in a future release.

    I've created an internal issue to take care of the NPE. Thanks for pointing
    this out.

    Vikram

  • Mahito OGURA at Apr 23, 2014 at 6:53 am
    Hi, Vikram Bajaj

    I created the SNN and called hdfsEnableNnHa, and I got an HDFS-HA cluster!

    Thank you for your reply!

  • Mahito OGURA at Apr 23, 2014 at 8:33 am
    Hi

    I created an HDFS-HA cluster using the hdfsEnableNnHa API.

    The command's log shows a step that should start the HDFS service along with
    its dependent services:

    Start hdfs1 and its dependent services
    Finished without any work.

    The NameNode, JournalNode, and FailoverController are running, but the
    DataNode is not.

    The DataNode can be started manually.
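
    (A minimal sketch of starting the DataNodes through the API instead, assuming
    the host and credentials from the original post and hypothetical DataNode
    role names:)

    import requests

    CM = "http://10.211.55.100:7180/api/v6/clusters/Test/services/hdfs1"
    AUTH = ("admin", "admin")

    # Start the DataNode roles by name (role names here are hypothetical).
    requests.post(CM + "/roleCommands/start", auth=AUTH,
                  json={"items": ["hdfs1-dn1", "hdfs1-dn2", "hdfs1-dn3"]})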

  • Vikram Srivastava at Apr 23, 2014 at 4:41 pm
    The command returns the cluster in its previous state, so if the services were
    stopped to begin with, they won't be started by hdfsEnableNnHa.

  • Mahito OGURA at Apr 25, 2014 at 3:13 pm
    Hi, Vikram Srivastava

    I got it.
    I had created the DataNode roles, but they weren't started.

    Thank you for your reply!

