What is a block in HDFS?

1 Answer


You should begin the answer with a general definition of a block. Then briefly explain how blocks are used in HDFS and mention their default size.

A block is the smallest contiguous unit of storage on a disk. HDFS stores each file as a sequence of blocks and distributes them across the Hadoop cluster. The default block size in HDFS is 128 MB in Hadoop 2.x (64 MB in Hadoop 1.x), which is much larger than the typical 4 KB block size of a Linux filesystem. The reason for this large block size is to minimize seek cost relative to transfer time and to reduce the amount of metadata the NameNode must keep per block.
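To make the splitting concrete, here is a minimal Python sketch of how a file's size maps onto HDFS blocks. The function name and structure are illustrative, not part of any Hadoop API; the key behavior it models is that HDFS does not pad the final block, so a 300 MB file occupies 128 MB + 128 MB + 44 MB rather than three full blocks.

```python
BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size in Hadoop 2.x (128 MB)

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the sizes of the blocks HDFS would use to store a file.

    Every block is full-size except possibly the last one, which holds
    only the remaining bytes (HDFS does not pad the final block).
    Hypothetical helper for illustration, not a Hadoop API.
    """
    blocks = []
    remaining = file_size
    while remaining > 0:
        blocks.append(min(remaining, block_size))
        remaining -= block_size
    return blocks

mb = 1024 * 1024
# A 300 MB file splits into two full 128 MB blocks plus a 44 MB tail.
print([b // mb for b in split_into_blocks(300 * mb)])  # → [128, 128, 44]
```

The same arithmetic explains why huge block sizes help: a 1 GB file needs only 8 block entries in the NameNode's metadata at 128 MB per block, versus 262,144 entries at a 4 KB block size.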
