
Hadoop - namenode is not starting up

Asked by Sebastian Wright

I am trying to run Hadoop as the root user. I executed the NameNode format command (hadoop namenode -format) while the Hadoop file system was still running.

After this, when I try to start the NameNode server, it shows an error like the one below:

13/05/23 04:11:37 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)

I searched for a solution but could not find a clear one.

Can anyone suggest a fix?

Thanks.


7 Answers

DFS needs to be formatted. Stop all daemons, issue the following command, and then restart them.

hadoop namenode -format
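
For reference, a minimal sketch of that whole sequence (assuming a Hadoop 1.x install with the bin/ scripts on the PATH; note that reformatting discards the existing NameNode metadata, so any data already in HDFS becomes inaccessible):

stop-all.sh               # stop all HDFS and MapReduce daemons first
hadoop namenode -format   # reinitialize the NameNode metadata directory
start-all.sh              # start HDFS and MapReduce again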

Cool, I have found the solution.

Stop all running servers:

1) stop-all.sh 

Edit the file /usr/local/hadoop/conf/hdfs-site.xml and add the configuration below if it is missing:

<property>
  <name>dfs.data.dir</name>
  <value>/app/hadoop/tmp/dfs/name/data</value>
  <final>true</final>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/app/hadoop/tmp/dfs/name</value>
  <final>true</final>
</property>

Start both the HDFS and MapReduce daemons (a quick way to verify they came up is sketched at the end of this answer):

2) start-dfs.sh
3) start-mapred.sh

Now run the rest of the steps for the MapReduce sample given in this link.

Note: run bin/start-all.sh instead if the commands above are not found directly on your PATH.
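
As a rough check that the daemons actually came up (assuming a typical single-node Hadoop 1.x setup), jps should list the expected Java processes:

jps
# Expected on a single node: NameNode, DataNode, SecondaryNameNode,
# JobTracker, TaskTracker (plus Jps itself)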


Format HDFS while the NameNode is stopped (just like the top answer).

I'll add some more details.

The format command checks for or creates path/dfs/name and initializes or reinitializes it. Running start-dfs.sh then launches the NameNode, the DataNodes, and the SecondaryNameNode. If the NameNode finds that path/dfs/name does not exist or is not initialized, it raises a fatal error and exits. That is why the NameNode does not start up.

For more details, check HADOOP_COMMON/logs/XXX.namenode.log.
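
A rough way to confirm both points above (the directory path follows the hdfs-site.xml example earlier in this thread, and the log location assumes a /usr/local/hadoop install; the actual log file name includes your user and host name):

ls /app/hadoop/tmp/dfs/name/current                           # a formatted name dir holds VERSION, fsimage, edits
tail -n 100 /usr/local/hadoop/logs/hadoop-*-namenode-*.log    # full stack trace of the startup failure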

Make sure the directory you've specified for your namenode is completely empty. Something like a "lost+found" folder in said directory will trigger this error.
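
A quick way to spot stray entries such as lost+found before formatting (path taken from the hdfs-site.xml example above; adjust to your own dfs.name.dir):

ls -la /app/hadoop/tmp/dfs/name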

Your value in hdfs-site.xml is wrong. You entered the wrong folder, and that is why the NameNode is not starting.

First mkdir [folder], then set it in hdfs-site.xml, then format.
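
A minimal sketch of that order of operations, using the paths from the hdfs-site.xml example earlier in the thread (purely illustrative; substitute your own directories and whichever user actually runs the daemons):

mkdir -p /app/hadoop/tmp/dfs/name /app/hadoop/tmp/dfs/name/data   # create the name and data dirs
chown -R root:root /app/hadoop/tmp                                # or whichever user runs Hadoop
hadoop namenode -format                                           # then format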

Make sure that the name directory (dfs.name.dir) and the data directory (dfs.data.dir) are correctly listed in hdfs-site.xml.
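
A quick sanity check that both properties are present (path assumes the /usr/local/hadoop/conf location used earlier in this thread):

grep -E -A 1 "dfs\.(name|data)\.dir" /usr/local/hadoop/conf/hdfs-site.xml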
