
What is a ‘block’ in HDFS?

1 Answer


A ‘block’ is the minimum unit of data that HDFS can read or write. The default block size in HDFS is 64 MB (128 MB in Hadoop 2.x and later), in contrast to the block sizes of a few kilobytes (typically 4–8 KB) used by Unix/Linux filesystems. Files in HDFS are broken down into block-sized chunks, which are stored as independent units. HDFS blocks are large compared to disk blocks mainly to minimize the cost of seeks. If a particular file is only 50 MB, will the HDFS block still consume 64 MB on disk? No, not at all! 64 MB is only the maximum size of a block, not a fixed allocation. In this case the block occupies just 50 MB, and the remaining 14 MB stays free to store something else. It is the NameNode (the master node of the cluster) that manages this block allocation efficiently.
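
To make this concrete, here is a minimal sketch (not part of the original answer) that uses the Hadoop Java API to print the block size a file was written with alongside its actual length; the cluster configuration and the path /user/demo/sample.txt are assumptions for illustration only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeCheck {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical example path
        Path file = new Path("/user/demo/sample.txt");
        FileStatus status = fs.getFileStatus(file);

        // Block size the file was written with (e.g. 64 MB or 128 MB)
        // versus the actual length of the file's data.
        System.out.println("Block size:  " + status.getBlockSize() + " bytes");
        System.out.println("File length: " + status.getLen() + " bytes");

        fs.close();
    }
}

For a 50 MB file on a cluster with a 64 MB block size, the two numbers printed above would differ: the block size reports 67108864 bytes, while the file length reports roughly 52428800 bytes, which is all the space the block actually consumes.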
