1 Answer

HDFS is better suited to storing a large amount of data in a small number of big files than to the same data spread across many small files. The NameNode keeps the metadata for every file and block in memory, and it runs on expensive, high-performance hardware, so it is not prudent to fill that memory with the metadata generated by millions of small files. When the data is consolidated into large files, the NameNode needs far fewer metadata entries and therefore far less memory. Hence, for optimal performance, HDFS favors large data sets over many small files. A rough sketch of the difference is shown below.
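To make the contrast concrete, here is a minimal sketch (not part of the original answer) that estimates NameNode memory for the same volume of data stored as one large file versus many small files. It assumes the widely quoted rule of thumb of roughly 150 bytes of NameNode heap per metadata object (file or block) and the default 128 MB block size; the exact figures vary by Hadoop version and configuration.

```python
# Back-of-the-envelope comparison of NameNode heap usage.
# Assumption: ~150 bytes of NameNode memory per metadata object (file or block),
# a common heuristic; actual usage depends on the Hadoop version and config.

BYTES_PER_METADATA_OBJECT = 150          # heuristic, not exact
BLOCK_SIZE = 128 * 1024 * 1024           # default HDFS block size (128 MB)

def namenode_bytes(num_files: int, file_size: int) -> int:
    """Approximate NameNode memory for num_files files of file_size bytes each."""
    blocks_per_file = max(1, -(-file_size // BLOCK_SIZE))   # ceiling division
    objects = num_files * (1 + blocks_per_file)             # 1 file object + its block objects
    return objects * BYTES_PER_METADATA_OBJECT

one_gb = 1024 ** 3

# Case 1: a single 1 TB file
single_large = namenode_bytes(num_files=1, file_size=1024 * one_gb)

# Case 2: the same 1 TB stored as one million 1 MB files
many_small = namenode_bytes(num_files=1_000_000, file_size=1024 * 1024)

print(f"1 x 1 TB file    : ~{single_large / 1024:.0f} KB of NameNode memory")
print(f"1,000,000 x 1 MB : ~{many_small / one_gb:.2f} GB of NameNode memory")
```

Under these assumptions the single large file costs the NameNode on the order of a megabyte of metadata, while the same data as a million small files costs hundreds of megabytes, which is why consolidating small files matters.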
