Getting Started with HBase on Hadoop

Prerequisite: this follows the previous post, in which Hadoop was successfully installed. This post covers how to integrate HBase and ZooKeeper with it.

1. First, download the HBase and ZooKeeper packages. Extract HBase, go into its conf directory, and modify hbase-env.sh as follows:

export JAVA_HOME=/home/jdk1.6.0_13

# ==== added:

export HBASE_HOME=/home/yf/hbase-0.94.3
export PATH=$PATH:/home/yf/hbase-0.94.3/bin
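
To confirm the edits took effect, here is a minimal sanity check (the paths are the ones used in this article; adjust them to your layout):

source conf/hbase-env.sh
echo $JAVA_HOME      # expect /home/jdk1.6.0_13
echo $HBASE_HOME     # expect /home/yf/hbase-0.94.3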


2. Edit hbase-site.xml and add the following:

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://bida:9100/hbase</value>
</property>
<property>
    <name>hbase.tmp.dir</name>
    <value>hdfs://bida:9100/tmp</value>
    <description>Temporary directory on the local filesystem.</description>
</property>
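
Note that these <property> blocks must sit inside the file's single <configuration> root element, or parsing will fail (see the error section below). A minimal sketch of the complete file:

<?xml version="1.0"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://bida:9100/hbase</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>hdfs://bida:9100/tmp</value>
  </property>
</configuration>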

Then save the file and start HBase from the bin directory. Run the following test:

The steps below verify that the system is installed correctly (a command sketch follows the list):
1. Confirm that Hadoop has been started.
2. If it is not running, execute "bin/start-all.sh" from the Hadoop installation directory to start Hadoop.
3. From the HBase installation directory, execute "bin/start-hbase.sh" to start HBase.
4. From the HBase installation directory, execute "bin/hbase shell" to enter the shell.
5. In the shell, enter "create 'test', 'data'" and verify the result with the "list" command.
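
A minimal command sketch of steps 2–4, using the directory layout from this article (the Hadoop path is inferred from the relative cd shown later; adjust both paths to your installation):

cd /home/yf/hadoop-1.0.3
bin/start-all.sh          # step 2: start HDFS and MapReduce
cd /home/yf/hbase-0.94.3
bin/start-hbase.sh        # step 3: start the HBase daemons
bin/hbase shell           # step 4: enter the HBase shell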

A sample session:
[root@bida bin]# ./hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.3, r1408904, Wed Nov 14 19:55:11 UTC 2012

hbase(main):001:0> list
TABLE                                                                                                                               
0 row(s) in 0.6520 seconds

hbase(main):002:0> create 'test', 'data'
0 row(s) in 1.2060 seconds

hbase(main):003:0> list
TABLE                                                                                                                               
test                                                                                                                                
1 row(s) in 0.0350 seconds

hbase(main):004:0> put 'test', 'row1', 'data:1', 'value1'
0 row(s) in 0.0850 seconds

hbase(main):005:0> put 'test', 'row2', 'data:2', 'value2'
0 row(s) in 0.0150 seconds

hbase(main):006:0> put 'test', 'row3', 'data:3', 'value3'
0 row(s) in 0.0030 seconds

hbase(main):007:0> scan 'test'
ROW                                COLUMN+CELL                                                                                      
 row1                              column=data:1, timestamp=1358484999214, value=value1                                             
 row2                              column=data:2, timestamp=1358485004710, value=value2                                             
 row3                              column=data:3, timestamp=1358485005165, value=value3                                             
3 row(s) in 0.0680 seconds
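
If you want to clean up afterwards, a table must be disabled before it can be dropped; an optional sketch:

hbase(main):008:0> disable 'test'
hbase(main):009:0> drop 'test'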


In addition, once HBase is integrated with Hadoop, the hbase directory is visible in HDFS from the Hadoop side:

[root@bida conf]# cd ../../hadoop-1.0.3/bin
[root@bida bin]# ./hadoop fs -ls /
Warning: $HADOOP_HOME is deprecated.

Found 4 items
drwxr-xr-x   - root supergroup          0 2013-01-18 12:56 /hbase
drwxr-xr-x   - root supergroup          0 2013-01-16 18:53 /home
drwxr-xr-x   - root supergroup          0 2013-01-17 14:29 /tmp
drwxr-xr-x   - root supergroup          0 2013-01-17 14:47 /user
[root@bida bin]# ./hadoop fs -ls /hbase
Warning: $HADOOP_HOME is deprecated.

Found 7 items
drwxr-xr-x   - root supergroup          0 2013-01-18 12:35 /hbase/-ROOT-
drwxr-xr-x   - root supergroup          0 2013-01-18 12:35 /hbase/.META.
drwxr-xr-x   - root supergroup          0 2013-01-18 12:35 /hbase/.logs
drwxr-xr-x   - root supergroup          0 2013-01-18 12:35 /hbase/.oldlogs
-rw-r--r--   1 root supergroup         38 2013-01-18 12:35 /hbase/hbase.id
-rw-r--r--   1 root supergroup          3 2013-01-18 12:35 /hbase/hbase.version
drwxr-xr-x   - root supergroup          0 2013-01-18 12:56 /hbase/test
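
The plain files in this listing can be read directly to confirm that HBase initialized its root directory (the contents are cluster-specific):

[root@bida bin]# ./hadoop fs -cat /hbase/hbase.id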


=================================

If hbase-site.xml is not well-formed, an exception like the following appears:

[Fatal Error] hbase-site.xml:35:2: The markup in the document following the root element must be well-formed.
13/01/18 12:24:28 FATAL conf.Configuration: error parsing conf file: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1263)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1129)
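
One way to catch malformed XML before starting HBase is to validate the file, for example with xmllint if it is installed (no output means the file is well-formed):

xmllint --noout /home/yf/hbase-0.94.3/conf/hbase-site.xml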


The following exceptions in HBase's logs are likely caused by the ZooKeeper settings in hbase-site.xml; until ZooKeeper is actually installed, it is advisable to comment those settings out:

2013-01-18 12:27:20,962 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server bida/127.0.0.1:2181
2013-01-18 12:27:20,963 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2013-01-18 12:27:20,963 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2013-01-18 12:27:20,963 WARN org.apache.zookeeper.ClientCnxn: Session 0x13c4be8b1cd0004 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
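
As an alternative to commenting the ZooKeeper settings out, HBase can run its own bundled ZooKeeper; a minimal sketch for conf/hbase-env.sh (HBASE_MANAGES_ZK is a standard HBase setting):

# let HBase start and stop a bundled ZooKeeper itself
export HBASE_MANAGES_ZK=true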


13/01/18 12:29:41 ERROR zookeeper.ZooKeeperWatcher: hconnection Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/root-region-server
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.ensureZookeeperTrackers(HConnectionManager.java:595)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:650)
        at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:110)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.jruby.javasupport.JavaConstructor.newInstanceDirect(JavaConstructor.java:275)
        at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:91)
        at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:178)
        at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322)
        at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178)


The following problem occurs when hbase.rootdir and hbase.tmp.dir in hbase-site.xml do not match Hadoop's HDFS address: you cannot use an IP address in one place and a hostname in the other; they must be identical. For example, my Hadoop core-site.xml contains:

<property>
    <name>fs.default.name</name>
    <value>hdfs://bida:9100</value>
</property>
so the corresponding setting in HBase's hbase-site.xml must be written the same way:

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://bida:9100/hbase</value>
</property>


2013-01-18 12:32:26,124 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to 192.168.9.228/192.168.9.228:9100 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
        at org.apache.hadoop.ipc.Client.call(Client.java:1075)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
        at $Proxy11.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
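
Two quick checks for this symptom, sketched with the address used in this article: verify that the NameNode answers at the configured URI, and compare how the two config files name it:

[root@bida bin]# ./hadoop fs -ls hdfs://bida:9100/    # should list / if the NameNode is up
[root@bida bin]# grep -A1 fs.default.name ../conf/core-site.xml
[root@bida bin]# grep -A1 hbase.rootdir /home/yf/hbase-0.94.3/conf/hbase-site.xml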


In addition, you can open http://192.168.9.228:60010/master-status in a browser to view HBase's status.
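
The same status page can also be fetched from the command line, e.g. with curl if it is available:

curl -s http://192.168.9.228:60010/master-status | head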