Sunday, September 28, 2014

[Study] Hadoop 2.5.1 Single Cluster Installation (CentOS 7.0 x86_64)

2014-09-28

This write-up is for learning and sharing; it may not be complete or 100% correct.

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.
It is modeled on the Google File System, is developed in Java, and provides HDFS and the MapReduce API.

Official site
http://hadoop.apache.org/

Installation references
http://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-common/SingleCluster.html

http://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-common/ClusterSetup.html

Download
http://www.apache.org/dyn/closer.cgi/hadoop/common/

Installation

# To keep things simple and avoid surprises, disable SELinux (Security-Enhanced Linux) and the firewall

# Turn off SELinux immediately
setenforce 0 

# Keep SELinux disabled after reboot
#vi  /etc/selinux/config
#find
#SELINUX=
#and set it to
#SELINUX=disabled  

sed -i -e "s@SELINUX=enforcing@#SELINUX=enforcing@"   /etc/selinux/config
sed -i -e "s@SELINUX=permissive@#SELINUX=permissive@"   /etc/selinux/config
sed -i -e "/SELINUX=/aSELINUX=disabled"   /etc/selinux/config


# Stop the firewall immediately (firewalld on CentOS 7)
#service iptables stop  
#service ip6tables stop 
systemctl   stop  firewalld 

# Keep the firewall disabled after reboot
#chkconfig iptables off  
#chkconfig ip6tables off  
systemctl   disable  firewalld 

yum -y install  java
# or
#yum -y install java-1.7.0-openjdk

yum -y install  java-1.7.0-openjdk-devel
#find / -name java
#echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el7_0.x86_64' >> /etc/profile
echo 'export JAVA_HOME=/usr/lib/jvm/java' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile

echo 'export CLASSPATH=$JAVA_HOME/lib/ext:$JAVA_HOME/lib/tools.jar' >> /etc/profile
source /etc/profile
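
# To confirm the Java settings took effect in the current shell, a quick check
# along these lines can be used (it simply echoes the variables set above):
java -version
javac -version
echo $JAVA_HOME
echo $CLASSPATH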

#---------------------------
#wget http://apache.cdpa.nsysu.edu.tw/hadoop/common/hadoop-2.5.0/hadoop-2.5.0.tar.gz
cd /usr/local
wget http://ftp.twaren.net/Unix/Web/apache/hadoop/common/hadoop-2.5.1/hadoop-2.5.1.tar.gz
tar zxvf hadoop-2.5.1.tar.gz

echo 'export HADOOP_HOME=/usr/local/hadoop-2.5.1' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin' >> /etc/profile
echo 'export HADOOP_PREFIX=$HADOOP_HOME' >> /etc/profile

echo 'export HADOOP_COMMON_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_MAPRED_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop' >> /etc/profile
echo 'export HADOOP_HDFS_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_YARN_HOME=$HADOOP_HOME' >> /etc/profile

echo 'export YARN_CONF_DIR=$HADOOP_CONF_DIR' >> /etc/profile
source /etc/profile
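
# To verify that the Hadoop environment variables are in effect, something like
# the following can be run (it assumes the /usr/local/hadoop-2.5.1 layout above):
hadoop version
echo $HADOOP_HOME
echo $HADOOP_CONF_DIR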

Environment

192.168.128.101  (CentOS 7.0 x64)
Hostname: localhost and localhost.localdomain

******************************************************************************

Standalone Operation mode test
(run these commands from the Hadoop root directory)

[root@localhost local]# cd hadoop-2.5.1/
[root@localhost hadoop-2.5.1]# mkdir input
[root@localhost hadoop-2.5.1]# cp etc/hadoop/*.xml input
[root@localhost hadoop-2.5.1]# $HADOOP_HOME/bin/hadoop jar  /usr/local/hadoop-2.5.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar   grep input output 'dfs[a-z.]+'
[root@localhost hadoop-2.5.1]#  cat output/*
1       dfsadmin
[root@localhost hadoop-2.5.1]#
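
Note that rerunning this grep example will fail if the output directory already exists, because MapReduce refuses to overwrite an existing output path; removing it first should be enough:

rm -rf output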

******************************************************************************

Pseudo-Distributed Operation mode test

Set up the runtime environment

# Set JAVA_HOME in hadoop-env.sh, httpfs-env.sh, mapred-env.sh and yarn-env.sh
# If you have already exported JAVA_HOME and added it to .bashrc, this part can be skipped; it is shown here as an alternative approach, for reference only

#1. Configure hadoop-env.sh
[root@localhost ~]# vi /usr/local/hadoop-2.5.1/etc/hadoop/hadoop-env.sh

Find
# The java implementation to use.
export JAVA_HOME=${JAVA_HOME}

and add a line below it:
export JAVA_HOME=/usr/lib/jvm/java

#2. Configure httpfs-env.sh
[root@localhost ~]# vi /usr/local/hadoop-2.5.1/etc/hadoop/httpfs-env.sh
Add a line anywhere in the file:
export JAVA_HOME=/usr/lib/jvm/java

#3. Configure mapred-env.sh
[root@localhost ~]# vi /usr/local/hadoop-2.5.1/etc/hadoop/mapred-env.sh
Find
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
and add a line below it:
export JAVA_HOME=/usr/lib/jvm/java

#4. Configure yarn-env.sh
[root@localhost ~]# vi /usr/local/hadoop-2.5.1/etc/hadoop/yarn-env.sh
Find
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
and add a line below it:
export JAVA_HOME=/usr/lib/jvm/java

PS: I am not sure why, but export JAVA_HOME=${JAVA_HOME} did not take effect, so JAVA_HOME has to be set explicitly.

From a Bash shell, the same settings can be applied like this:
sed -i -e "/JAVA_HOME=/aexport JAVA_HOME=\/usr\/lib\/jvm\/java"   $HADOOP_HOME/etc/hadoop/hadoop-env.sh
cat  $HADOOP_HOME/etc/hadoop/hadoop-env.sh  | grep "JAVA_HOME"


echo "export JAVA_HOME=/usr/lib/jvm/java" >>  $HADOOP_HOME/etc/hadoop/httpfs-env.sh
cat   $HADOOP_HOME/etc/hadoop/httpfs-env.sh  | grep "JAVA_HOME"

sed -i -e "/JAVA_HOME=/aexport JAVA_HOME=\/usr\/lib\/jvm\/java"   $HADOOP_HOME/etc/hadoop/mapred-env.sh
cat  $HADOOP_HOME/etc/hadoop/mapred-env.sh  | grep "JAVA_HOME"


sed -i -e "/JAVA_HOME=/aexport JAVA_HOME=\/usr\/lib\/jvm\/java"   $HADOOP_HOME/etc/hadoop/yarn-env.sh
cat  $HADOOP_HOME/etc/hadoop/yarn-env.sh  | grep "JAVA_HOME"


Set up passwordless SSH login


[root@localhost ~]# cd   $HADOOP_HOME
[root@localhost hadoop-2.5.1]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
35:73:25:79:b3:35:60:04:2d:d2:c5:48:82:e5:16:b7 root@localhost.localdomain
The key's randomart image is:
+--[ DSA 1024]----+
|        o+o+O*o  |
|       ...+++=o..|
|         o=Eo. +.|
|        .. +  .  |
|        S        |
|                 |
|                 |
|                 |
|                 |
+-----------------+
[root@localhost hadoop-2.5.1]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@localhost hadoop-2.5.1]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is 7c:26:66:cf:e3:59:c1:a6:e0:7f:90:3a:c4:7a:5e:e5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Last login: Sun Sep 28 07:47:44 2014 from 192.168.128.1
[root@localhost ~]# exit
logout
Connection to localhost closed.
[root@localhost hadoop-2.5.1]#
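
If ssh localhost still asks for a password after this, the permissions on ~/.ssh are the usual culprit; tightening them as below (standard OpenSSH requirements, nothing Hadoop-specific) normally fixes it:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys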


Edit the configuration files

[root@localhost ~]# cd  $HADOOP_HOME
[root@localhost hadoop-2.5.1]# vi etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

[root@localhost hadoop-2.5.1]# vi etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
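
If you prefer not to edit the files interactively, the same two minimal configurations can be written from the shell. This is only a sketch and it overwrites the files, so back them up first:

cd $HADOOP_HOME
cp etc/hadoop/core-site.xml etc/hadoop/core-site.xml.bak
cp etc/hadoop/hdfs-site.xml etc/hadoop/hdfs-site.xml.bak

cat > etc/hadoop/core-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
EOF

cat > etc/hadoop/hdfs-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
EOF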

Run the test

1. Format the filesystem
[root@localhost hadoop-2.5.1]#  bin/hdfs namenode -format

2. Start the NameNode daemon and DataNode daemon:

The Hadoop daemon logs are written to $HADOOP_LOG_DIR (default: $HADOOP_HOME/logs)

[root@localhost hadoop-2.5.1]# sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop-2.5.1/logs/hadoop-root-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /usr/local/hadoop-2.5.1/logs/hadoop-root-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is 7c:26:66:cf:e3:59:c1:a6:e0:7f:90:3a:c4:7a:5e:e5.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.5.1/logs/hadoop-root-secondarynamenode-localhost.localdomain.out
[root@localhost hadoop-2.5.1]#

3. Browse the NameNode web interface
http://localhost:50070/





4. Create the HDFS directories required to run MapReduce jobs
  $ bin/hdfs dfs -mkdir /user
  $ bin/hdfs dfs -mkdir /user/<username>

For example:
cd  $HADOOP_HOME
bin/hdfs dfs -mkdir /user

bin/hdfs dfs -mkdir /user/root

The username can be checked with the whoami command:

[root@localhost hadoop-2.5.1]# whoami

root
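
The two mkdir commands can also be collapsed into one, using -p and whoami; this one-liner should create the same /user/root directory for the root user used here:

bin/hdfs dfs -mkdir -p /user/$(whoami)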


5. Copy the input files into the distributed filesystem
bin/hdfs dfs -put etc/hadoop input

6. Run the example program
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar grep input output 'dfs[a-z.]+'

# another example: randomwriter
$HADOOP_HOME/bin/hadoop jar /usr/local/hadoop-2.5.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar randomwriter out

7. Check the output
  $ bin/hdfs dfs -get output output
  $ cat output/*

[root@localhost hadoop-2.5.1]# cat output/*
cat: output/output: Is a directory
1       dfsadmin


  $ bin/hdfs dfs -cat output/*
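
To rerun step 6, the output directory on HDFS (and the local copy fetched in step 7) has to be removed first, otherwise the job aborts because the output path already exists; a cleanup such as this should work:

bin/hdfs dfs -rm -r output
rm -rf output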

8. Stop Hadoop
  $ sbin/stop-dfs.sh

Actual output:

[root@localhost hadoop-2.5.1]# sbin/stop-dfs.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[root@localhost hadoop-2.5.1]#


# *******************************************************************************

# Edit the XML configuration files (only a few of them are changed here)

# capacity-scheduler.xml  hadoop-policy.xml  httpfs-site.xml  yarn-site.xml  core-site.xml  hdfs-site.xml mapred-site.xml
# Reference
# http://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-common/SingleCluster.html

Note: the host names used in the .xml files must be what the `hostname -f` command reports; do not use an IP address.
[root@localhost hadoop-2.5.1]# hostname
localhost.localdomain

[root@localhost hadoop-2.5.1]# hostname  -f
localhost

If you want to change the hostname (for example to hadoop01 and hadoop01.hadoopcluster):

Set the hostname (takes effect immediately)
[root@localhost hadoop-2.5.1]# hostname  hadoop01.hadoopcluster

Verify
[root@localhost hadoop-2.5.1]# hostname
hadoop01.hadoopcluster

Edit /etc/sysconfig/network to set the hostname used after reboot
[root@localhost hadoop-2.5.1]# vi   /etc/sysconfig/network
Find
HOSTNAME=localhost.localdomain
and change it to
HOSTNAME=hadoop01.hadoopcluster

Edit the hosts file
[root@localhost hadoop-2.5.1]# vi   /etc/hosts
Add
192.168.128.101    hadoop01   hadoop01.hadoopcluster

Since this article covers only a single-node installation, I kept localhost and localhost.localdomain and did not change them.
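
One caveat: /etc/sysconfig/network is the CentOS 6 way of persisting the hostname; on CentOS 7 the persistent name lives in /etc/hostname, so hostnamectl is probably the more reliable way to make the change survive a reboot (sketch only):

hostnamectl set-hostname hadoop01.hadoopcluster
hostnamectl status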

******************************

YARN on Single Node mode

1. Edit the configuration

# Configure mapred-site.xml

cp  $HADOOP_HOME/etc/hadoop/mapred-site.xml.template  $HADOOP_HOME/etc/hadoop/mapred-site.xml

vi  $HADOOP_HOME/etc/hadoop/mapred-site.xml


<configuration>
</configuration>

Between the two lines above, add:

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>


  <property>
    <name>mapreduce.cluster.temp.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>

  <property>
    <name>mapreduce.cluster.local.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>


# Configure yarn-site.xml

vi  $HADOOP_HOME/etc/hadoop/yarn-site.xml


<configuration>
</configuration>
Between the two lines above, add:
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
(Note: adjust host:port to your environment; the host must be what `hostname -f` reports.)
(The ports can be changed to whatever you prefer.)



  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>localhost:9001</value>
    <description>host is the hostname of the resource manager and 
    port is the port on which the NodeManagers contact the Resource Manager.
    </description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>localhost:9002</value>
    <description>host is the hostname of the resourcemanager and port is the port
    on which the Applications in the cluster talk to the Resource Manager.
    </description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    <description>In case you do not want to use the default scheduler</description>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:9003</value>
    <description>the host is the hostname of the ResourceManager and the port is the port on
    which the clients can talk to the Resource Manager. </description>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value></value>
    <description>the local directories used by the nodemanager</description>
  </property>

  <property>
    <name>yarn.nodemanager.address</name>
    <value>localhost:9004</value>
    <description>the nodemanagers bind to this port</description>
  </property>  

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>10240</value>
    <description>the amount of memory on the NodeManager, in MB</description>
  </property>

  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/app-logs</value>
    <description>directory on hdfs where the application logs are moved to </description>
  </property>

   <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value></value>
    <description>the directories used by Nodemanagers as log directories</description>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>shuffle service that needs to be set for Map Reduce to run </description>
  </property>

# Configure capacity-scheduler.xml

vi   $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml

Look for root.queues

Change the following (search for the keyword "default"):
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default</value>
    <description>
      The queues at the this level (root is the root queue).
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>100</value>
    <description>Default queue target capacity.</description>
  </property>

to:

  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>unfunded,default</value>
  </property>
  
  <property>
    <name>yarn.scheduler.capacity.root.capacity</name>
    <value>100</value>
  </property>
  
  <property>
    <name>yarn.scheduler.capacity.root.unfunded.capacity</name>
    <value>50</value>
  </property>
  
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>50</value>
  </property>
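
After hand-editing several XML files like this it is easy to leave a tag unclosed; if xmllint is available (it comes with the libxml2 package), a quick well-formedness check before starting the daemons can save some head-scratching:

xmllint --noout $HADOOP_HOME/etc/hadoop/mapred-site.xml
xmllint --noout $HADOOP_HOME/etc/hadoop/yarn-site.xml
xmllint --noout $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml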


# **************************************************************************

2. Start

[root@localhost hadoop-2.5.1]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.5.1/logs/yarn-root-resourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /usr/local/hadoop-2.5.1/logs/yarn-root-nodemanager-localhost.localdomain.out
[root@localhost hadoop-2.5.1]#


3. Browse http://localhost:8088/





4. Stop

[root@localhost hadoop-2.5.1]# sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop

[root@localhost hadoop-2.5.1]#


------------------------------

If you need to start the ResourceManager and NodeManager individually
(this part is not on the 2.5.1 documentation page, so it is not surprising that the test below runs into problems)

[root@localhost hadoop]# cd $HADOOP_MAPRED_HOME

# Start the ResourceManager

[root@localhost hadoop-2.5.1]# sbin/yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /usr/local/hadoop-2.5.1/logs/yarn-root-resourcemanager-localhost.localdomain.out
[root@localhost hadoop-2.5.1]#

# Always verify with ps aux | grep resourcemanager: even if the command above failed there may be no error message, but the process will not show up in the list, which means it did not start successfully

[root@localhost hadoop-2.5.1]# ps aux | grep resourcemanager
root     46986 13.2  5.0 1846704 95764 pts/1   Sl   08:45   0:03 /usr/lib/jvm/java/bin/java -Dproc_resourcemanager -Xmx1000m -Dhadoop.log.dir=/usr/local/hadoop-2.5.1/logs -Dyarn.log.dir=/usr/local/hadoop-2.5.1/logs -Dhadoop.log.file=yarn-root-resourcemanager-localhost.localdomain.log -Dyarn.log.file=yarn-root-resourcemanager-localhost.localdomain.log -Dyarn.home.dir= -Dyarn.id.str=root -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/usr/local/hadoop-2.5.1/lib/native -Dyarn.policy.file=hadoop-policy.xml -Dhadoop.log.dir=/usr/local/hadoop-2.5.1/logs -Dyarn.log.dir=/usr/local/hadoop-2.5.1/logs -Dhadoop.log.file=yarn-root-resourcemanager-localhost.localdomain.log -Dyarn.log.file=yarn-root-resourcemanager-localhost.localdomain.log -Dyarn.home.dir=/usr/local/hadoop-2.5.1 -Dhadoop.home.dir=/usr/local/hadoop-2.5.1 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/usr/local/hadoop-2.5.1/lib/native -classpath /usr/local/hadoop-2.5.1/etc/hadoop:/usr/local/hadoop-2.5.1/etc/hadoop:/usr/local/hadoop-2.5.1/etc/hadoop:/usr/local/hadoop-2.5.1/share/hadoop/common/lib/*:/usr/local/hadoop-2.5.1/share/hadoop/common/*:/usr/local/hadoop-2.5.1/share/hadoop/hdfs:/usr/local/hadoop-2.5.1/share/hadoop/hdfs/lib/*:/usr/local/hadoop-2.5.1/share/hadoop/hdfs/*:/usr/local/hadoop-2.5.1/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.5.1/share/hadoop/yarn/*:/usr/local/hadoop-2.5.1/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-2.5.1/share/hadoop/mapreduce/*:/usr/local/hadoop-2.5.1/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.5.1/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.5.1/share/hadoop/yarn/*:/usr/local/hadoop-2.5.1/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.5.1/etc/hadoop/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
root     47212  0.0  0.0 112640   980 pts/1    S+   08:45   0:00 grep --color=auto resourcemanager
[root@localhost hadoop-2.5.1]#

# Start the NodeManager

[root@localhost hadoop-2.5.1]# sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /usr/local/hadoop-2.5.1/logs/yarn-root-nodemanager-localhost.localdomain.out
[root@localhost hadoop-2.5.1]#


Always verify with ps aux | grep nodemanager: even if the command above failed there may be no error message, but the process will not show up in the list, which means it did not start successfully

[root@localhost hadoop-2.5.1]# ps aux | grep nodemanager
root     47243 19.5  5.1 1700860 96096 pts/1   Sl   08:46   0:03 /usr/lib/jvm/java/bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/usr/local/hadoop-2.5.1/logs -Dyarn.log.dir=/usr/local/hadoop-2.5.1/logs -Dhadoop.log.file=yarn-root-nodemanager-localhost.localdomain.log -Dyarn.log.file=yarn-root-nodemanager-localhost.localdomain.log -Dyarn.home.dir= -Dyarn.id.str=root -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/usr/local/hadoop-2.5.1/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/usr/local/hadoop-2.5.1/logs -Dyarn.log.dir=/usr/local/hadoop-2.5.1/logs -Dhadoop.log.file=yarn-root-nodemanager-localhost.localdomain.log -Dyarn.log.file=yarn-root-nodemanager-localhost.localdomain.log -Dyarn.home.dir=/usr/local/hadoop-2.5.1 -Dhadoop.home.dir=/usr/local/hadoop-2.5.1 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/usr/local/hadoop-2.5.1/lib/native -classpath /usr/local/hadoop-2.5.1/etc/hadoop:/usr/local/hadoop-2.5.1/etc/hadoop:/usr/local/hadoop-2.5.1/etc/hadoop:/usr/local/hadoop-2.5.1/share/hadoop/common/lib/*:/usr/local/hadoop-2.5.1/share/hadoop/common/*:/usr/local/hadoop-2.5.1/share/hadoop/hdfs:/usr/local/hadoop-2.5.1/share/hadoop/hdfs/lib/*:/usr/local/hadoop-2.5.1/share/hadoop/hdfs/*:/usr/local/hadoop-2.5.1/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.5.1/share/hadoop/yarn/*:/usr/local/hadoop-2.5.1/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-2.5.1/share/hadoop/mapreduce/*:/usr/local/hadoop-2.5.1/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.5.1/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.5.1/share/hadoop/yarn/*:/usr/local/hadoop-2.5.1/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.5.1/etc/hadoop/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
root     47354  0.0  0.0 112640   980 pts/1    R+   08:46   0:00 grep --color=auto nodemanager
[root@localhost hadoop-2.5.1]#
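
# Instead of ps aux, the jps tool that comes with the OpenJDK devel package installed
# earlier lists the running Hadoop Java processes directly and is usually easier to read:
jps
# ResourceManager and NodeManager should appear in the list
# (plus NameNode / DataNode / SecondaryNameNode if HDFS is also running)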

# ***************

# Test the example program


# Change directory

[root@localhost hadoop-2.5.1]# cd $HADOOP_COMMON_HOME

# The official page says to use the command below, but the path is actually wrong

[root@localhost hadoop-2.5.1]# $HADOOP_COMMON_HOME/bin/hadoop jar hadoop-examples.jar randomwriter out
Not a valid JAR: /usr/local/hadoop-2.5.1/hadoop-examples.jar

# It should be tested like this instead

[root@localhost hadoop-2.5.1]# $HADOOP_HOME/bin/hadoop jar /usr/local/hadoop-2.5.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar randomwriter out

14/09/28 08:49:27 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
Running 10 maps.
Job started: Sun Sep 28 08:49:28 CST 2014
14/09/28 08:49:28 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
java.net.ConnectException: Call From localhost.localdomain/127.0.0.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
        at org.apache.hadoop.ipc.Client.call(Client.java:1415)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
        at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
        at org.apache.hadoop.examples.RandomWriter.run(RandomWriter.java:283)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.examples.RandomWriter.main(RandomWriter.java:294)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
        at org.apache.hadoop.ipc.Client.call(Client.java:1382)
        ... 42 more
[root@localhost hadoop-2.5.1]#

The run appears to have a problem; to be investigated.
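
Judging from the stack trace, the client could not reach hdfs://localhost:9000, which suggests the HDFS daemons were simply not running when the job was submitted (only start-yarn.sh was run in this test). Starting HDFS first and resubmitting would probably be the first thing to try:

cd $HADOOP_HOME
sbin/start-dfs.sh
jps     # NameNode and DataNode should both be listed
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar randomwriter out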

# **************

# Stop (use the stop argument)

# Stop the NodeManager

[root@localhost hadoop-2.5.1]# sbin/yarn-daemon.sh stop nodemanager
no nodemanager to stop

# The message above means the NodeManager never started successfully, so there was nothing to stop
# If the NodeManager had been running, stop would print the following instead

[root@localhost hadoop-2.5.1]# sbin/yarn-daemon.sh stop nodemanager
stopping nodemanager

# Stop the ResourceManager

[root@localhost hadoop-2.5.1]# sbin/yarn-daemon.sh stop resourcemanager
no resourcemanager to stop

# The message above means the ResourceManager never started successfully, so there was nothing to stop
# If the ResourceManager had been running, stop would print the following instead

[root@localhost hadoop-2.5.1]# sbin/yarn-daemon.sh stop resourcemanager
stopping resourcemanager

End of the test of starting the ResourceManager and NodeManager individually

# **************************************************************************

# Appendix: about the firewall (CentOS 7.x)

After everything tests OK, if you would rather not leave the firewall disabled, you can add a few rules instead

# Start the firewall immediately
systemctl start firewalld

# To permanently open the required ports, run:
firewall-cmd   --permanent   --add-port=22/tcp
firewall-cmd   --permanent   --add-port=8080/tcp
firewall-cmd   --permanent   --add-port=50030/tcp
firewall-cmd   --permanent   --add-port=50070/tcp
firewall-cmd   --permanent   --add-port=9000/tcp
firewall-cmd   --permanent   --add-port=9001/tcp
firewall-cmd   --permanent   --add-port=9002/tcp
firewall-cmd   --permanent   --add-port=9003/tcp
firewall-cmd   --permanent   --add-port=9004/tcp
systemctl restart firewalld
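
The repeated firewall-cmd calls above can also be written as a short loop over the same port list; this is just a convenience sketch:

for p in 22 8080 50030 50070 9000 9001 9002 9003 9004; do
  firewall-cmd --permanent --add-port=${p}/tcp
done
firewall-cmd --reload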


# For reference, the firewall-cmd rules above correspond to these iptables-style rules (used in the CentOS 6.x appendix below):
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50070 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50030 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9000 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9001 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9002 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9003 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9004 -j ACCEPT

Other commonly used firewall-cmd commands

# firewall-cmd --state
# firewall-cmd --list-all
# firewall-cmd --list-interfaces
# firewall-cmd --get-service
# firewall-cmd --query-service service_name
# firewall-cmd --add-port=8080/tcp


# Start the firewall immediately
systemctl start firewalld

To open the http port temporarily, run:
firewall-cmd --add-service=http

To open the http port permanently, run:
firewall-cmd --permanent --add-service=http
# systemctl restart firewalld

To stop the firewall:
systemctl stop firewalld

# To restart the firewall immediately
# systemctl restart firewalld

# **************************************************************************

# Appendix: about the firewall (CentOS 6.x)

After everything tests OK, if you would rather not leave the firewall disabled, you can add a few rules instead.
First start iptables and ip6tables and save the rules
(they are saved automatically to /etc/sysconfig/iptables and /etc/sysconfig/ip6tables)

[root@localhost ~]# service iptables start

[root@localhost ~]# service ip6tables start

[root@localhost ~]# iptables-save

[root@localhost ~]# ip6tables-save

Edit the iptables firewall rules

[root@localhost ~]# vi /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Change it to (add ports according to your own configuration):
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50070 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50030 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9000 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9001 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9002 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9003 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9004 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

[root@localhost ~]# vi /etc/sysconfig/ip6tables
Edit the ip6tables rules in the same way.
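
Instead of editing the files by hand, the same rules can be added from the command line and then saved; a sketch for the iptables side (ip6tables would be handled the same way) might look like this:

for p in 8080 50030 50070 9000 9001 9002 9003 9004; do
  iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport $p -j ACCEPT
done
service iptables save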

Restart iptables (this reloads all the rules)

[root@localhost ~]# service iptables restart
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
iptables: Applying firewall rules:                         [  OK  ]

[root@localhost ~]# service ip6tables restart
ip6tables: Flushing firewall rules:                        [  OK  ]
ip6tables: Setting chains to policy ACCEPT: filter         [  OK  ]
ip6tables: Unloading modules:                              [  OK  ]
ip6tables: Applying firewall rules:                        [  OK  ]
[root@localhost ~]#

(End)

Related

[Study] Hadoop 2.5.1 Installation (CentOS 7.0 x86_64)
http://shaurong.blogspot.tw/2014/09/hadoop-251-centos-70-x8664.html

[Study] Hadoop 2.5.0 Installation (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/08/hadoop-250-centos-70-x8664.html
http://forum.icst.org.tw/phpbb/viewtopic.php?f=26&t=81014
http://download.ithome.com.tw/article/index/id/2721

[Study] Hadoop 2.4.1 Installation (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/08/hadoop-241-centos-70-x8664.html

[Study] hadoop-2.4.1-src.tar.gz quick compile-and-install script (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/08/hadoop-241-srctargz-centos-70-x8664.html
http://download.ithome.com.tw/article/index/id/2375

[Study] hadoop-2.2.0-src.tar.gz quick compile-and-install script (2) (CentOS 6.5 x86_64)
http://shaurong.blogspot.com/2014/02/hadoop-220-srctargz-centos-65-x8664_8080.html

[Study] hadoop-2.2.0-src.tar.gz quick compile-and-install script (CentOS 6.5 x86_64)
http://shaurong.blogspot.com/2014/02/hadoop-220-srctargz-centos-65-x8664_7.html

[Study] hadoop-2.2.0-src.tar.gz compilation study (CentOS 6.5 x86_64)
http://shaurong.blogspot.com/2014/02/hadoop-220-srctargz-centos-65-x8664.html

[Study] Hadoop 2.2.0 Compilation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-centos-64-x64.html

[Study] Hadoop 2.2.0 Single Cluster Installation (2) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64_7.html

[Study] Hadoop 2.2.0 Single Cluster Installation (1) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64.html

[Study] Hadoop 1.2.1 (rpm) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/10/hadoop-121-rpm-centos-64-x64.html

[Study] Hadoop 1.2.1 (bin) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/hadoop-112-centos-64-x64.html

[Study] Hadoop 1.2.1 Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80035

[Study] Cloud software Hadoop 1.0.0 Installation (CentOS 6.2 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=21166

[Study] Cloud software Hadoop 0.20.2 Installation (CentOS 5.5 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=18513

[Study] Cloud software Hadoop 0.20.2 Installation (CentOS 5.4 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=17974

Saturday, September 27, 2014

[Study] Testopia Test Case Management System (CentOS 7.0 x86_64)

2014-09-27

Official site
https://developer.mozilla.org/zh-TW/docs/Mozilla/Bugzilla/Testopia

Testopia 2.5 (for Bugzilla 4.2) download
http://ftp.mozilla.org/pub/mozilla.org/webtools/testopia/testopia-2.5-BUGZILLA-4.2.tar.gz
ftp://ftp.mozilla.org/pub/mozilla.org/webtools/testopia/testopia-2.5-BUGZILLA-4.2.tar.gz

Prerequisites

Bugzilla must already be installed

[Study] BugZilla 4.4.5 Defect Tracking System quick install script (CentOS 7.0 x86_64)
http://shaurong.blogspot.tw/2014/09/bugzilla-445-centos-70-x8664.html

Installation

yum  -y  install  httpd
service  httpd  restart
chkconfig   httpd  on

#Extract: tar zxvf testopia-2.5-BUGZILLA-4.2.tar.gz
#Copy the contents into the Bugzilla root directory, e.g. /var/www/html/bugzilla
cd  /usr/local/src
wget   http://ftp.mozilla.org/pub/mozilla.org/webtools/testopia/testopia-2.5-BUGZILLA-4.2.tar.gz
mkdir  -p   /var/www/html/testopia
#tar   zxvf  testopia-2.5-BUGZILLA-4.2.tar.gz  -C  /var/www/html/testopia
tar   zxvf  testopia-2.5-BUGZILLA-4.2.tar.gz  -C  /var/www/html/bugzilla/

#Run the checksetup Perl script:
cd /var/www/html/bugzilla/
./checksetup.pl
firefox  http://localhost/bugzilla   &
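
Depending on how Apache caches the Bugzilla code, restarting httpd after checksetup.pl finishes may be needed before the Testopia menu shows up (it is harmless in any case):

systemctl restart httpd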

That completes the installation. (A Testopia drop-down menu should now appear in the UI.)




The basic usage flow is as follows:

Create a Product and several Components: Administration > Product
Create a Test Plan
Create several Test Cases: note that a case must be in CONFIRMED status before it can be selected for a run.
Create a Build: e.g. 1001
Create an Environment: an Environment here usually means things like the OS, browser, hardware and other execution environments.
Create a Test Run: choose the Test Plan to execute, pick the Build, and select the Test Cases.
Execute the Test Run.
Generate a report: right-click on the Test Run to show the related reports on the Dashboard.

(End)

Related

[Study] BugZilla 5.0.1 Defect Tracking System quick install script (CentOS 7.1 x86_64)
http://shaurong.blogspot.tw/2015/11/bugzilla-501-centos-71-x8664.html

[Study] Testopia Test Case Management System (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/09/testopia-test-case-centos-70-x8664.html

[Study] Testopia Test Case Management System (CentOS 6.5 x86_64)
http://shaurong.blogspot.com/2014/09/testopia-test-case-centos-65-x8664.html

[Study] BugZilla 4.4.5 Defect Tracking System quick install script (CentOS 7.0 x86_64)
http://shaurong.blogspot.tw/2014/09/bugzilla-445-centos-70-x8664.html

[Study] BugZilla 4.4.5 Defect Tracking System quick install script (CentOS 6.5 x86_64)
http://shaurong.blogspot.tw/2014/09/bugzilla-445-centos-65-x8664.html

An introduction to the Testopia test case management system
http://blog.codylab.com/testcase-management-using-testopia/

[Study] hadoop-2.5.1-src.tar.gz quick compile-and-install script (CentOS 7.0 x86_64)

2014-09-27

Download
http://www.apache.org/dyn/closer.cgi/hadoop/common/
http://apache.stu.edu.tw/hadoop/common/hadoop-2.5.0/hadoop-2.5.0.tar.gz

The script content is as follows

#!/bin/bash

echo -e "\033[31m"
echo -e "Program : Hadoop-2.5.1_CentOS-7.0-x86_64-Compile.sh "
echo -e "Hadoop 2.5.1 Compile Shell Script (CentOS 7.0 x86_64) "
echo -e "by Shau-Rong Lu 2014-09-27 "
echo -e "\033[0m"

cd /usr/local/src
#yum -y groupinstall  "Development tools"
yum -y install gcc  gcc-c++  svn  cmake git zlib zlib-devel openssl openssl-devel rsync java-1.7.0-openjdk.x86_64 java-1.7.0-openjdk-devel.x86_64  make  wget

# echo "********** Install OpenJDK **********"

yum -y install  java
# or
#yum -y install java-1.7.0-openjdk

yum -y install  java-1.7.0-openjdk-devel

#export JAVA_HOME=/usr
#echo 'export JAVA_HOME=/usr' >> /etc/profile
#echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el7_0.x86_64' >> /etc/profile
echo 'export JAVA_HOME=/usr/lib/jvm/java' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
echo 'export CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar' >> /etc/profile

source /etc/profile
java -version
#export | grep JAVA
#export | grep jdk

echo "********** Install Apache Maven 3.0.5 (yum) **********"

yum -y install maven

# mvn -version

echo "********** Install FindBugs 3.0.0 **********"

cd  /usr/local/src
if [ ! -s findbugs-3.0.0.tar.gz ]; then
  wget  http://jaist.dl.sourceforge.net/project/findbugs/findbugs/3.0.0/findbugs-3.0.0.tar.gz
fi
tar zxvf findbugs-3.0.0.tar.gz -C /usr/local/
ln -s /usr/local/findbugs-3.0.0/bin/findbugs  /usr/bin/findbugs
echo 'export FINDBUGS_HOME=/usr/local/findbugs-3.0.0' >> /etc/profile
echo 'export PATH=$PATH:$FINDBUGS_HOME/bin' >> /etc/profile
source /etc/profile

#export | grep FINDBUGS_HOME
#export | grep PATH
#read -n 1 -p "Press Enter to continue..."

echo "********** Install Protoc 2.5.0 **********"
# https://code.google.com/p/protobuf/
# https://code.google.com/p/protobuf/downloads/list

cd  /usr/local/src
if [ ! -s protobuf-2.5.0.tar.gz ]; then
  wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz
fi
tar zxvf protobuf-2.5.0.tar.gz -C /usr/local/src
cd /usr/local/src/protobuf-2.5.0
./configure
make
make install
ln -s /usr/local/bin/protoc /usr/bin/protoc
echo 'export PROTO_HOME=/usr/local/' >> /etc/profile
echo 'export PATH=$PATH:$PROTO_HOME/bin' >> /etc/profile
source /etc/profile
#read -n 1 -p "Press Enter to continue..."

echo "********** Compile Hadoop **********"

cd  /usr/local/src
if [ ! -s hadoop-2.5.1-src.tar.gz ]; then
  wget  http://ftp.tc.edu.tw/pub/Apache/hadoop/common/hadoop-2.5.1/hadoop-2.5.1-src.tar.gz
fi
tar zxvf hadoop-2.5.1-src.tar.gz -C /usr/local/src
cd  /usr/local/src/hadoop-2.5.1-src/

#mvn clean
mvn package -Pdist,native -DskipTests -Dtar
#read -n 1 -p "Press Enter to continue..."
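
Assuming the script is saved as Hadoop-2.5.1_CentOS-7.0-x86_64-Compile.sh (the file name printed in its own banner), it can be run like this; the full build took roughly 25 minutes, as the summary below shows:

chmod +x Hadoop-2.5.1_CentOS-7.0-x86_64-Compile.sh
./Hadoop-2.5.1_CentOS-7.0-x86_64-Compile.sh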


Execution result


[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [2:10.013s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [50.174s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [29.702s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.335s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [28.309s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [40.791s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [3:00.311s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [39.886s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [11.284s]
[INFO] Apache Hadoop Common .............................. SUCCESS [3:57.397s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [7.097s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.021s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [3:24.778s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [52.574s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [31.608s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [4.112s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.025s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.022s]
[INFO] hadoop-yarn-api ................................... SUCCESS [1:00.165s]
[INFO] hadoop-yarn-common ................................ SUCCESS [51.288s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.031s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [21.424s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [56.444s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [4.565s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [7.043s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [14.625s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [1.109s]
[INFO] hadoop-yarn-client ................................ SUCCESS [5.830s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.044s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [3.352s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [2.298s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.037s]
[INFO] hadoop-yarn-project ............................... SUCCESS [4.523s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.057s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [22.938s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [17.266s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [4.292s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [11.857s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [9.558s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [14.674s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [1.812s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [6.743s]
[INFO] hadoop-mapreduce .................................. SUCCESS [3.861s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [10.350s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [18.868s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [2.024s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [7.014s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [4.974s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [2.891s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [2.984s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [5.990s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [5.098s]
[INFO] Apache Hadoop Client .............................. SUCCESS [7.264s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.083s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [11.020s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [4.964s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.023s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [24.629s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25:31.849s
[INFO] Finished at: Sat Sep 27 09:32:07 CST 2014
[INFO] Final Memory: 163M/474M
[INFO] ------------------------------------------------------------------------
[root@localhost ~]#

(End)

Related

[Study] hadoop-2.5.1-src.tar.gz quick compile-and-install script (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/09/hadoop-251-srctargz-centos-70-x8664.html

[Study] hadoop-2.5.0-src.tar.gz quick compile-and-install script (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/08/hadoop-250-srctargz-centos-70-x8664.html
http://forum.icst.org.tw/phpbb/viewtopic.php?f=26&t=81015
http://download.ithome.com.tw/article/index/id/2722

[Study] Hadoop 2.5.0 Installation (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/08/hadoop-250-centos-70-x8664.html
http://forum.icst.org.tw/phpbb/viewtopic.php?f=26&t=81014
http://download.ithome.com.tw/article/index/id/2721

[Study] Hadoop 2.4.1 Installation (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/08/hadoop-241-centos-70-x8664.html

[Study] hadoop-2.4.1-src.tar.gz quick compile-and-install script (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/08/hadoop-241-srctargz-centos-70-x8664.html
http://download.ithome.com.tw/article/index/id/2375

[Study] hadoop-2.2.0-src.tar.gz quick compile-and-install script (2) (CentOS 6.5 x86_64)
http://shaurong.blogspot.com/2014/02/hadoop-220-srctargz-centos-65-x8664_8080.html

[Study] hadoop-2.2.0-src.tar.gz quick compile-and-install script (CentOS 6.5 x86_64)
http://shaurong.blogspot.com/2014/02/hadoop-220-srctargz-centos-65-x8664_7.html

[Study] hadoop-2.2.0-src.tar.gz compilation study (CentOS 6.5 x86_64)
http://shaurong.blogspot.com/2014/02/hadoop-220-srctargz-centos-65-x8664.html

[Study] Hadoop 2.2.0 Compilation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-centos-64-x64.html

[Study] Hadoop 2.2.0 Single Cluster Installation (2) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64_7.html

[Study] Hadoop 2.2.0 Single Cluster Installation (1) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64.html

[Study] Hadoop 1.2.1 (rpm) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/10/hadoop-121-rpm-centos-64-x64.html

[Study] Hadoop 1.2.1 (bin) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/hadoop-112-centos-64-x64.html

[Study] Hadoop 1.2.1 Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80035

[Study] Cloud software Hadoop 1.0.0 Installation (CentOS 6.2 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=21166

[Study] Cloud software Hadoop 0.20.2 Installation (CentOS 5.5 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=18513

[Study] Cloud software Hadoop 0.20.2 Installation (CentOS 5.4 x86)