
Hadoop Training

Certification: Yes
Mode: Online
Hours: 61
Support: 24/7

Hadoop | Big Data Training


Jovi Soft Solutions' Big Data Hadoop training program helps you master Big Data Hadoop and Spark and prepares you for the Cloudera CCA Spark and Hadoop Developer (CCA175) certification exam as well as for Hadoop administration. In this Big Data course, you will master Hive, MapReduce, Pig, Oozie, Sqoop and Flume, work with Amazon EC2 for cluster setup, and cover Scala, Spark SQL, the Spark framework and RDDs, Machine Learning using Spark, Spark Streaming, and more.

Big Data Hadoop Online Training Objectives
1. About the Big Data Hadoop Training

It is a comprehensive Hadoop Big Data online training course designed by industry specialists around current industry job requirements to help you learn the Big Data Hadoop and Spark modules. This industry-recognized Big Data Hadoop certification training combines the training courses for Hadoop administrator, Hadoop developer, Hadoop testing and analytics with Apache Spark. This Hadoop and Spark training will prepare you to clear the Cloudera CCA175 Big Data certification.

2. Who should learn Big Data Hadoop?

There is a huge demand for skilled Big Data Hadoop professionals across the industries. We recommend this Hadoop course for the following professionals in particular:
● System Administrators and Programming Developers
● Experienced working professionals and Project Managers
● Architects, Mainframe Professionals and Testing Professionals
● Big Data Hadoop Developers eager to learn other verticals like analytics, testing and administration
● Graduates and Undergraduates eager to learn Big Data
● Business Intelligence, Analytics and Data Warehousing professionals

3. What are the prerequisites for learning Hadoop?

There are no prerequisites for taking up this Hadoop training course and mastering Hadoop. However, basics of SQL, UNIX and Java will make learning Big Data Hadoop easier.

4. How do I become a Big Data Engineer?

This Big Data Hadoop certification training course will give you insight into the Hadoop ecosystem and into Big Data tools and methodologies, preparing you for success in your role as a Big Data Engineer.

5. What does the CCA175 Hadoop certification cost?

The cost of the CCA175 Spark and Hadoop Developer exam is USD 295.

Module 1 – Introduction to Big Data & Hadoop (1.5 hours)

  • What is Big Data?
  • Sources of Big Data
  • Categories of Big Data
  • Characteristics of Big Data
  • Use cases of Big Data
  • Traditional RDBMS vs. Hadoop
  • What is Hadoop?
  • History of Hadoop
  • Understanding Hadoop Architecture
  • Fundamentals of HDFS (Blocks, Name Node, Data Node, Secondary Name Node)
  • Block Placement & Rack Awareness
  • HDFS Read/Write
  • Drawbacks of Hadoop 1.x
  • Introduction to Hadoop 2.x
  • High Availability

Module 2 – Linux (Complete Hands-on) (1 hour)

  • Creating directories
  • Removing directories
  • Print working directory
  • Change directory
  • Manual pages
  • Help
  • Vi editor
  • Creating empty files
  • Creating file contents
  • Copying file
  • Renaming files
  • Removing files
  • Moving files
  • Listing files and directories
  • Displaying file contents

Module 3 – HDFS (1 hour)

  • Understanding Hadoop configuration files
  • Hadoop Components- HDFS, MapReduce
  • Overview of Hadoop Processes
  • Overview of Hadoop Distributed File System
  • The building blocks of Hadoop
  • Hands-On Exercise: Using HDFS commands
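
For a flavour of the hands-on exercise above, here is a minimal sketch that drives HDFS from Scala through Hadoop's FileSystem API instead of the hdfs shell commands; the NameNode address and paths are placeholders, not values from the course:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    object HdfsTour {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()
        // Placeholder NameNode address; on a real cluster this comes from core-site.xml
        conf.set("fs.defaultFS", "hdfs://localhost:8020")
        val fs = FileSystem.get(conf)

        val dir = new Path("/user/train/demo")
        fs.mkdirs(dir)                                   // like: hdfs dfs -mkdir -p

        val out = fs.create(new Path(dir, "hello.txt"))  // like: hdfs dfs -put
        out.writeBytes("hello hdfs\n")
        out.close()

        fs.listStatus(dir).foreach(s => println(s.getPath)) // like: hdfs dfs -ls
        fs.close()
      }
    }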

Module 4 – MapReduce (1.5 hours)

  • MapReduce 1 (MRv1)
  • MapReduce Introduction
  • How does MapReduce work?
  • Communication between the Job Tracker and Task Trackers
  • Anatomy of a MapReduce Job Submission
  • MapReduce 2 (YARN)
  • Limitations of the MRv1 Architecture
  • YARN Architecture
  • Node Manager & Resource Manager
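
To make the anatomy of a job submission concrete, here is the classic word count against the MapReduce v2 API, written in Scala (the course's JVM language of choice); input and output paths come in as arguments, and this is a sketch rather than the course's own lab code:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.io.{IntWritable, Text}
    import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
    import scala.jdk.CollectionConverters._

    // Mapper: emit (word, 1) for every token in the input split
    class WordMapper extends Mapper[Object, Text, Text, IntWritable] {
      private val one  = new IntWritable(1)
      private val word = new Text()
      override def map(key: Object, value: Text,
                       ctx: Mapper[Object, Text, Text, IntWritable]#Context): Unit =
        value.toString.toLowerCase.split("\\W+").filter(_.nonEmpty).foreach { t =>
          word.set(t); ctx.write(word, one)
        }
    }

    // Reducer: sum the counts emitted for each word
    class WordReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
      override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                          ctx: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit =
        ctx.write(key, new IntWritable(values.asScala.map(_.get).sum))
    }

    // Driver: configures the job, submits it, and waits for completion
    object WordCount {
      def main(args: Array[String]): Unit = {
        val job = Job.getInstance(new Configuration(), "word count")
        job.setJarByClass(WordCount.getClass)
        job.setMapperClass(classOf[WordMapper])
        job.setCombinerClass(classOf[WordReducer]) // combiner reuses the reducer
        job.setReducerClass(classOf[WordReducer])
        job.setOutputKeyClass(classOf[Text])
        job.setOutputValueClass(classOf[IntWritable])
        FileInputFormat.addInputPath(job, new Path(args(0)))   // input directory
        FileOutputFormat.setOutputPath(job, new Path(args(1))) // output dir (must not exist)
        System.exit(if (job.waitForCompletion(true)) 0 else 1)
      }
    }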

Module 5 – Hive (Complete Hands-on) (8 hours)

  • What is Hive?
  • Why Hive?
  • What Hive is not
  • Metastore DB in Hive
  • Architecture of Hive
  • Internal table
  • External table
  • Hive operations
  • Static Partition
  • Dynamic Partition
  • Bucketing
  • Bucketing with sorting
  • File formats
  • Hive performance tuning
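
As a hedged preview of the table and partitioning topics above, the sketch below issues HiveQL through a Hive-enabled SparkSession (Spark SQL's Hive integration, covered in a later module) instead of the hive shell; the sales and staging_sales tables are made up for illustration:

    import org.apache.spark.sql.SparkSession

    object HivePartitionDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("hive-partition-demo")
          .enableHiveSupport()      // talk to the Hive metastore
          .getOrCreate()

        // Managed (internal) table, partitioned by country
        spark.sql("""CREATE TABLE IF NOT EXISTS sales (id INT, amount DOUBLE)
                     PARTITIONED BY (country STRING) STORED AS PARQUET""")

        // Dynamic partitioning: Hive derives the partition value from the data
        spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
        spark.sql("""INSERT INTO TABLE sales PARTITION (country)
                     SELECT id, amount, country FROM staging_sales""")

        spark.sql("SHOW PARTITIONS sales").show()
        spark.stop()
      }
    }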

Module 6 – Sqoop (Complete Hands-on) (8 hours)

  • What is Sqoop?
  • Architecture of Sqoop
  • Listing databases
  • Listing tables
  • Different ways of setting the password
  • Using options file
  • Sqoop eval
  • Sqoop import into target directory
  • Sqoop import into warehouse directory
  • Setting the number of mappers
  • Life cycle of Sqoop import
  • Split-by clause
  • Importing all tables
  • Import into hive tables
  • Export from hive tables
  • Setting number of mappers during the export
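
Sqoop itself is driven from the command line, so purely for illustration the sketch below shells out to it from Scala, showing the import options listed above; the connection string, credentials file, target directory, mapper count and split column are all placeholders:

    import scala.sys.process._

    object SqoopImportDemo {
      def main(args: Array[String]): Unit = {
        val importCmd = Seq(
          "sqoop", "import",
          "--connect", "jdbc:mysql://dbhost:3306/retail_db",
          "--username", "train",
          "--password-file", "/user/train/.db-password", // safer than --password on the CLI
          "--table", "orders",
          "--target-dir", "/user/train/orders",          // explicit HDFS target directory
          "--num-mappers", "4",                          // parallelism of the import
          "--split-by", "order_id"                       // column used to split the work
        )
        val exitCode = importCmd.!  // run, streaming output; returns the exit code
        println(s"sqoop exited with $exitCode")
      }
    }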

Module 7 – Scala (Complete Hands-on) (12 hours)

  • Setup Java and JDK
  • Install Scala with IntelliJ IDE
  • Develop Hello World Program using Scala
  • Introduction to Scala
  • REPL Overview
  • Declaring Variables
  • Programming Constructs
  • Code Blocks
  • Scala Functions - Getting Started
  • Scala Functions - Higher Order and Anonymous Functions
  • Scala Functions - Operators
  • Object Oriented Constructs - Getting Started
  • Object Oriented Constructs - Objects
  • Object Oriented Constructs - Classes
  • Object Oriented Constructs - Companion Objects and Case Class
  • Operators and Functions on Classes
  • External Dependencies and Import
  • Scala Collections - Getting Started
  • Mutable and Immutable Collections
  • Sequence (Seq) - Getting Started
  • Linear Seq vs. Indexed Seq
  • Scala Collections - Primitive Operations
  • Scala Collections - Sorting Data
  • Scala Collections - Grouping Data
  • Scala Collections - Set
  • Scala Collections - Map
  • Tuples in Scala
  • Development Cycle - Developing Source code
  • Development Cycle - Compile source code to jar using SBT
  • Development Cycle - Setup SBT on Windows
  • Development Cycle - Compile changes and run jar with arguments
  • Development Cycle - Setup IntelliJ with Scala
  • Development Cycle - Develop Scala application using SBT in IntelliJ
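
A small, self-contained sample of what the early Scala sessions build toward: values and variables, a higher-order function with an anonymous function argument, and collection operations such as grouping and sorting. It runs as-is in the REPL or as an SBT project:

    object ScalaTour extends App {
      val greeting: String = "Hello, Scala" // immutable value (val)
      var counter: Int = 0                   // mutable variable (var)
      println(greeting)

      // Higher-order function: takes another function as a parameter
      def applyTwice(f: Int => Int, x: Int): Int = f(f(x))
      println(applyTwice(_ + 3, 10))         // anonymous function; prints 16

      // Collections: grouping and sorting
      val words = Seq("spark", "hive", "sqoop", "scala", "hdfs")
      println(words.groupBy(_.head))         // Map('s' -> Seq(...), 'h' -> Seq(...))
      println(words.sortBy(_.length))        // shortest word first
    }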

Module 8 – Getting Started with Spark (Complete Hands-on) (6 hours)

  • What is Apache Spark & Why Spark?
  • Spark History
  • Unification in Spark
  • Spark Ecosystem vs. Hadoop
  • Spark with Hadoop
  • Introduction to Spark’s Python and Scala Shells
  • Spark Standalone Cluster Architecture and its application flow
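
Roughly what the first exercise in the Scala shell looks like once packaged as a standalone application: build a SparkSession, read a text file, and run one action. The local[*] master and the input path are placeholder choices, not the course's values:

    import org.apache.spark.sql.SparkSession

    object FirstSparkApp {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("first-spark-app")
          .master("local[*]")    // local mode; drop this when submitting to a cluster
          .getOrCreate()

        val lines = spark.read.textFile("/tmp/sample.txt") // placeholder input path
        println(s"line count: ${lines.count()}")
        spark.stop()
      }
    }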

Module 9 – Programming with RDDs, DataFrames & Datasets (Complete Hands-on) (12 hours)

  • RDD Basics and its characteristics, Creating RDDs
  • RDD Operations
  • Transformations
  • Actions
  • RDD Types
  • Lazy Evaluation
  • Persistence (Caching)
  • Advanced Spark Programming
  • Accumulators and Fault Tolerance
  • Broadcast Variables
  • Custom Partitioning
  • Dealing with different file formats
  • Hadoop Input and Output Formats
  • Connecting to diverse Data Sources
  • Spark SQL
  • Linking with Spark SQL
  • Initializing Spark SQL
  • Data Frames & Caching
  • Case Classes, Inferred Schema
  • Loading and Saving Data
  • Apache Hive
  • Data Sources/Parquet
  • JSON
  • Spark SQL User Defined Functions (UDFs)
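
A compressed sketch of the three APIs in this module: an RDD word count (lazy transformations plus actions and caching), a Dataset with a schema inferred from a case class, and a small user-defined function. The Order data is invented for the example:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.udf

    case class Order(id: Int, amount: Double)

    object RddDfDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("rdd-df-demo").master("local[*]").getOrCreate()
        import spark.implicits._

        // RDDs: transformations are lazy; take() is the action that triggers work
        val rdd = spark.sparkContext.parallelize(Seq("spark is fast", "spark is fun"))
        val counts = rdd.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
        counts.cache()                 // persist for reuse across actions
        counts.take(5).foreach(println)

        // Datasets: schema inferred from the case class
        val orders = Seq(Order(1, 99.5), Order(2, 15.0)).toDS()
        orders.filter($"amount" > 20).show()

        // A user-defined function applied as a column expression
        val bucket = udf((amount: Double) => if (amount > 50) "big" else "small")
        orders.withColumn("size", bucket($"amount")).show()
        spark.stop()
      }
    }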

Module 10 – Kafka & Spark Streaming (Complete Hands-on) (5 hours)

  • Getting started with Kafka
  • Understanding Kafka Producer and Consumer APIs
  • Deep dive into producer and consumer APIs
  • Ingesting Web Server logs into Kafka
  • Getting started with Spark Streaming
  • Getting started with HBASE
  • Integrating Kafka-Spark Streaming-HBASE
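
A hedged preview of the integration topic: this sketch reads a Kafka topic with Spark Structured Streaming (the newer API; the course may also use DStreams) and writes to the console instead of HBASE. The broker address and topic name are placeholders, and the spark-sql-kafka connector must be on the classpath:

    import org.apache.spark.sql.SparkSession

    object KafkaStreamDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("kafka-stream-demo").getOrCreate()
        import spark.implicits._

        // Subscribe to a Kafka topic; broker and topic are placeholders
        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "weblogs")
          .load()

        // Kafka records arrive as binary key/value; cast the value to a string
        val lines = raw.selectExpr("CAST(value AS STRING)").as[String]

        // Sink to the console; in the course this stage would feed HBASE
        val query = lines.writeStream
          .format("console")
          .outputMode("append")
          .start()
        query.awaitTermination()
      }
    }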

Module 11 – Spark on Amazon Web Services (AWS) (Complete Hands-on) (5 hours)

  • Introduction
  • Sign up for AWS account
  • Setup Cygwin on Windows
  • Quick Preview of Cygwin
  • Understand Pricing
  • Create first EC2 Instance
  • Connecting to EC2 Instance
  • Understanding EC2 dashboard left menu
  • Different EC2 Instance states
  • Describing EC2 Instance
  • Using elastic IPs to connect to EC2 Instance
  • Using security groups to provide security to EC2 Instance
  • Understanding the concept of bastion server
  • Terminating EC2 Instance and relieving all the resources
  • Create security credentials for AWS account
  • Setting up AWS CLI in Windows
  • Creating an S3 bucket
  • Deleting root access keys
  • Enable MFA for root account
  • Introduction to IAM users and customizing sign in link
  • Create first IAM user
  • Create group and add user
  • Configure IAM password policy
  • Understanding IAM best practices
  • AWS managed policies and creating custom policies
  • Assign policy to entities (user and/or group)
  • Creating a role for the EC2 trusted entity with permissions on S3
  • Assigning role to EC2 instance
  • Introduction to EMR
  • EMR concepts
  • Pre-requisites before setting up EMR cluster
  • Setting up data sets
  • Setup EMR with Spark cluster using quick options
  • Connecting to EMR cluster
  • Submitting spark job on EMR cluster
  • Validating the results
  • Terminating EMR Cluster
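
The AWS modules are worked in the console and the AWS CLI rather than in application code; purely as a reference, this sketch wraps two representative CLI calls, cluster creation and a Spark step, with Scala's process support. The release label, instance sizes, cluster id and jar location are placeholders, and flags can vary by EMR release:

    import scala.sys.process._

    object EmrDemo {
      def main(args: Array[String]): Unit = {
        // Create a small Spark cluster (placeholder values throughout)
        val create = Seq(
          "aws", "emr", "create-cluster",
          "--name", "spark-training",
          "--release-label", "emr-6.15.0",
          "--applications", "Name=Spark",
          "--instance-type", "m5.xlarge",
          "--instance-count", "3",
          "--use-default-roles"
        ).!!                              // returns the CLI's JSON output
        println(create)

        // Submit a Spark job as a step (cluster id and jar path are placeholders)
        val step = Seq(
          "aws", "emr", "add-steps",
          "--cluster-id", "j-XXXXXXXXXXXXX",
          "--steps", "Type=Spark,Name=demo,Args=[s3://my-bucket/app.jar]"
        ).!!
        println(step)
      }
    }
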
1. Does Jovi Soft Solutions offer job assistance?

Jovi Soft Solutions actively provides placement assistance to all learners who have successfully completed the Big Data Hadoop training.

2. Do I get any discount on the course?

Yes, there are two types of discounts: a referral discount and a group discount. The referral discount is offered when you are referred by someone who has already enrolled in our training, and the group discount is offered when you join as a group.

3. Does Jovi Soft Solutions accept the course fee in installments?

Yes, we accept payments in two installments.

4. What is the qualification of the trainer?

The trainer is a certified consultant with significant hands-on experience in the technology.

5. Can I attend a demo session before enrolment?

Yes. You can register or enroll for a free Hadoop demo session.

REGISTER FOR FREE DEMO

Please fill in the form below for further queries.