Install Apache HBase on Ubuntu 20.04

Written by shiksha.dahiya on 07/17/2021 - 10:50

1.) Prepare to install Symphony DE on a Linux or UNIX host:

  1. If you have Linux, determine which installation file you need. Note that the MapReduce workload in Symphony DE is supported only on Linux 64-bit hosts.
    • To find out the Linux version, enter:
      uname -a
    • To find out the glibc version, enter:
      rpm -q glibc
  2. Determine the installation package and download it.

    For Linux, it is a .bin package (that contains .rpm files); for Solaris, it is a tar.gz package.

  3. Check communication ports.
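
The checks in steps 1 and 3 above can be scripted. The sketch below is one possible way to combine them, not an official Symphony DE tool: the dpkg fallback covers Debian/Ubuntu hosts that lack rpm, and port 9000 is only a sample port, not one mandated by Symphony DE.

```shell
#!/bin/sh
# Kernel version and architecture; Symphony DE MapReduce requires 64-bit Linux.
uname -a

# glibc version: rpm on RPM-based distros, dpkg as a fallback on Debian/Ubuntu.
if command -v rpm >/dev/null 2>&1; then
    rpm -q glibc
elif command -v dpkg >/dev/null 2>&1; then
    dpkg -s libc6 | grep '^Version' || true
fi

# Check whether a communication port is already in use (9000 is an example).
port=9000
if command -v ss >/dev/null 2>&1 && ss -tln | grep -q ":$port "; then
    echo "port $port in use"
else
    echo "port $port appears free"
fi
```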

2.) Install Platform Symphony DE on a UNIX host:

3.) If you have root permissions, you can install using the default settings.

  1. Set the environment variable CLUSTERADMIN to your user account to be able to run Symphony DE without root permissions after installation. For example:
    (bsh) export CLUSTERADMIN=user1
    
    (tcsh) setenv CLUSTERADMIN user1

    where user1 is your operating system user name.

  2. If you plan to use MapReduce (available only on Linux® 64-bit hosts), set JAVA_HOME to point to the directory where JDK 1.6 is installed. For example:
    (bsh) export JAVA_HOME=/opt/java/j2sdk1.6.2
    
    (tcsh) setenv JAVA_HOME /opt/java/j2sdk1.6.2
  3. If you plan to use the Hadoop Distributed File System (HDFS) with MapReduce:
    1. Set HADOOP_HOME to the installation directory where Hadoop is installed. For example:
      (bsh) export HADOOP_HOME=/opt/hadoop-2.4.x
      
      (tcsh) setenv HADOOP_HOME /opt/hadoop-2.4.x
    2. Set HADOOP_VERSION to any of the following:
      • The HDFS or Hadoop API version in your cluster:
        • 2_4_x = version 2.4.x
        • 1_1_1 = version 1.1.1
      • The Cloudera version in your cluster:
        • cdh5.0.2 = CDH 5.0.2

      For example, for Hadoop version 2.4.x:

      (bsh) export HADOOP_VERSION=2_4_x
      
      (tcsh) setenv HADOOP_VERSION 2_4_x
  4. Install the package:
    • To install with default settings, run:
      ./symphony_DE_package_name.bin

      Platform Symphony DE will be installed to the default directory of /opt/ibm/platformsymphonyde/de71.

    • To install with custom settings, run:
      rpm -ivh symphony_DE_package_name.rpm \
                     --prefix install_dir --dbpath dbpath_dir
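
The environment settings from steps 1 to 3 above can be gathered into a single bsh/bash snippet to source before running the installer. Every path and the package name below are the example values from this page, not values to copy verbatim.

```shell
#!/bin/sh
# Environment required before launching the Symphony DE installer.
export CLUSTERADMIN=user1                # OS account that will run Symphony DE
export JAVA_HOME=/opt/java/j2sdk1.6.2    # JDK 1.6, needed for MapReduce
export HADOOP_HOME=/opt/hadoop-2.4.x     # only if using HDFS with MapReduce
export HADOOP_VERSION=2_4_x              # underscore form, per the list above

# Then install with default settings (package name is a placeholder):
# ./symphony_DE_package_name.bin
echo "Ready: CLUSTERADMIN=$CLUSTERADMIN, HADOOP_VERSION=$HADOOP_VERSION"
```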

Installation: 

On hosts identified as HBase Master/RegionServers, perform the following steps:

STEP 1: Download the Hadoop .tar.gz package from the Apache Hadoop download website.

STEP 2 : Select a directory in which to install Hadoop and untar the package tarball there. For example, to install the 2.4.x distribution, enter:

cd /opt
tar xzvf hadoop-2.4.x.tar.gz
ln -s hadoop-2.4.x hadoop

STEP 3 : Select a host to be the NameNode (for example, db03b10).

STEP 4 : As user hadoop, configure the environment:

export JAVA_HOME=/usr/java/latest
export HADOOP_HOME=/opt/hadoop-2.4.x
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HADOOP_HOME/bin
export HADOOP_VERSION=2_4_x

STEP 5 : Specify the Java version that Hadoop uses in the ${HADOOP_HOME}/conf/hadoop-env.sh script. For example, change:

# The Java implementation to use. Required.
export JAVA_HOME=/usr/java/latest

STEP 6 : Configure the directory where Hadoop will store its data files using the ${HADOOP_HOME}/conf/core-site.xml file. For example:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/data</value>
  </property>
  <property>
    <!-- NameNode host -->
    <name>fs.default.name</name>
    <value>hdfs://db03b10:9000/</value>
  </property>
</configuration>
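
A malformed core-site.xml will stop Hadoop from starting, so it is worth validating the file after editing. A minimal sketch follows; the check_xml helper is illustrative, not part of Hadoop, and the file path is this tutorial's example.

```shell
#!/bin/sh
# Well-formedness check for a Hadoop XML configuration file.
check_xml() {
    if command -v xmllint >/dev/null 2>&1; then
        xmllint --noout "$1" 2>/dev/null && echo "well-formed" || echo "broken or missing"
    else
        # Fallback: python3's standard-library XML parser.
        python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$1" \
            2>/dev/null && echo "well-formed" || echo "broken or missing"
    fi
}
check_xml "${HADOOP_HOME:-/opt/hadoop}/conf/core-site.xml"
```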

STEP 7 : Ensure that the base directory you specified for temporary data exists and has the required ownership and permissions. For example:

mkdir -p /opt/hadoop/data
chown hadoop:hadoop /opt/hadoop/data
chmod 755 /opt/hadoop/data

STEP 8 : Set values for the following properties in the ${HADOOP_HOME}/conf/mapred-site.xml file. Set the host and port that the MapReduce JobTracker runs on. For example:

<property>
  <name>mapred.job.tracker</name>
  <value>db03b10:9001</value>
</property>

STEP 9 : Set the maximum number of map tasks that run simultaneously per TaskTracker. For example:

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>7</value>
</property>
STEP 10 : Set the default block replication factor in the ${HADOOP_HOME}/conf/hdfs-site.xml file. For example:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
STEP 11 : If you plan to use the High-Availability HDFS feature (not available in the Developer Edition), configure the shared file system directory used for HDFS NameNode metadata storage in the hdfs-site.xml file. For example:

<property>
  <name>dfs.name.dir</name>
  <value>/share/hdfs/name</value>
</property>
STEP 12 :  Repeat steps 1 to 5 on every compute host.

STEP 13 : Specify master and compute hosts. For example, in the 2.4.x distribution, the compute file is under the ${HADOOP_HOME}/etc/hadoop directory. 

$ vim compute
$ cat compute
db03b11
db03b12
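
As a final sanity check before starting any daemons, the files produced by the steps above can be verified with a short script. The paths below are this tutorial's example values, and the etc/hadoop location reflects the 2.4.x layout mentioned in STEP 13; adjust both to your installation.

```shell
#!/bin/sh
# Verify that the configuration files and data directory from the steps
# above exist. All paths here are the tutorial's example values.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop-2.4.x}

missing=0
for f in "$HADOOP_HOME/etc/hadoop/core-site.xml" \
         "$HADOOP_HOME/etc/hadoop/hdfs-site.xml" \
         "$HADOOP_HOME/etc/hadoop/mapred-site.xml" \
         /opt/hadoop/data; do
    if [ ! -e "$f" ]; then
        echo "missing: $f"
        missing=$((missing + 1))
    fi
done
echo "checks complete, $missing item(s) missing"
```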

© Copyright By iVagus Services Pvt. Ltd. 2023. All Rights Reserved.
