in HDFS

How can I copy a file into HDFS with a block size different from the configured default block size?

1 Answer


Yes, you can copy a file into HDFS with a different block size by passing '-Ddfs.blocksize=block_size', where block_size is specified in bytes.
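In general, the command takes roughly this form (the local and HDFS paths here are just placeholders):

hadoop fs -Ddfs.blocksize=<block_size_in_bytes> -copyFromLocal <local_file> <hdfs_destination>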

Let me explain with an example. Suppose I want to copy a file called test.txt, say 120 MB in size, into HDFS, and I want the block size for this file to be 32 MB (33554432 bytes) instead of the default 128 MB. I would issue the following command:

hadoop fs -Ddfs.blocksize=33554432 -copyFromLocal /home/edureka/test.txt /sample_hdfs

Now, I can check the HDFS block size associated with this file by:

hadoop fs -stat %o /sample_hdfs/test.txt
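This should print 33554432. For a deeper sanity check, something along these lines should also work on the same path; with a 32 MB block size, the 120 MB file should be split into 4 blocks (3 x 32 MB plus one 24 MB block):

hdfs fsck /sample_hdfs/test.txt -files -blocks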

Alternatively, I can use the NameNode web UI to browse the HDFS directory and check the file's block size.
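If you also want to confirm the cluster's configured default block size (the 128 MB mentioned above), this should work on most installations:

hdfs getconf -confKey dfs.blocksize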
