Feb 23 in Big Data | Hadoop

What is a ‘block’ in HDFS?

1 Answer

Feb 23

A ‘block’ is the minimum unit of data that can be read or written. In HDFS, the default block size is 64 MB (128 MB in Hadoop 2 and later), in contrast to the block size of 8192 bytes typical of Unix/Linux filesystems. Files in HDFS are broken down into block-sized chunks, which are stored as independent units. HDFS blocks are large compared to disk blocks mainly to minimize the cost of seeks. If a particular file is 50 MB, will the HDFS block still consume 64 MB as the default size? No, not at all! 64 MB is just the maximum size of a block. In this situation, the block will consume only 50 MB, and the remaining 14 MB will be free to store something else. It is the NameNode (the master node) that manages this allocation efficiently.
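The splitting rule above can be sketched in a few lines of Python. This is a minimal illustration, not HDFS code; the function name `split_into_blocks` is hypothetical, and it simply shows that a file is cut into full-size blocks plus one final block holding only the leftover bytes.

```python
MB = 1024 * 1024

def split_into_blocks(file_size, block_size=64 * MB):
    """Return the sizes (in bytes) of the HDFS blocks a file would occupy.

    Illustrative only: HDFS itself does this internally when a file is written.
    """
    full, remainder = divmod(file_size, block_size)
    blocks = [block_size] * full
    if remainder:
        blocks.append(remainder)  # last block holds only the leftover bytes
    return blocks

# A 50 MB file fits in a single block that occupies only 50 MB, not 64 MB.
print([b // MB for b in split_into_blocks(50 * MB)])   # [50]

# A 150 MB file spans three blocks: 64 MB + 64 MB + 22 MB.
print([b // MB for b in split_into_blocks(150 * MB)])  # [64, 64, 22]
```

You can see the real block layout of a file in a running cluster with `hdfs fsck /path/to/file -files -blocks`.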

