1 Answer

HDFS does not handle a large number of small files well. Every file, directory, and block in HDFS is represented as an object in the NameNode's memory, and each object occupies roughly 150 bytes. So 10 million files, each occupying a single block, amount to about 20 million objects (one inode plus one block per file), which consumes about 3 gigabytes of NameNode heap. Scaling to a billion files pushes the memory requirement beyond what a single NameNode can provide.
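The arithmetic above can be sketched in a few lines of Java. This is only a back-of-the-envelope estimate under the assumptions stated in the answer (about 150 bytes per namespace object, one block per file); the class and method names here (NameNodeMemoryEstimate, estimateHeapBytes) are illustrative, not part of any Hadoop API, and real NameNode overhead varies by Hadoop version and configuration.

```java
// Rough estimate of NameNode heap usage for many small files.
// Assumes ~150 bytes per namespace object (file inode or block), as stated above.
public class NameNodeMemoryEstimate {

    static final long BYTES_PER_OBJECT = 150; // approximate size of one inode or block object

    // One inode per file plus one object per block it occupies.
    static long estimateHeapBytes(long fileCount, long blocksPerFile) {
        long objects = fileCount * (1 + blocksPerFile);
        return objects * BYTES_PER_OBJECT;
    }

    public static void main(String[] args) {
        System.out.printf("10M single-block files: ~%.1f GB%n",
                estimateHeapBytes(10_000_000L, 1) / 1e9);   // ~3 GB
        System.out.printf("1B single-block files:  ~%.1f GB%n",
                estimateHeapBytes(1_000_000_000L, 1) / 1e9); // ~300 GB
    }
}
```

The estimate shows why the problem is driven by file count rather than total data size: a billion tiny files would need on the order of hundreds of gigabytes of NameNode heap, regardless of how little data they actually hold.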

