There is very little information that I can find online regarding the
recommended dfs.block.size setting for HBase. It is often conflated with
the HBase block size, which we know should be smaller. Any chance we can
get some recommendations for dfs.block.size?
The default shipped with HDFS in later versions of CDH is 128 MB. In highly
random-read online database scenarios, should we be tuning that lower? Does
the datanode need to read that entire block when HBase tries to fetch data
from it? It seems hard to believe that the default block size would be a good
fit for HBase, considering how different its workload is from other Hadoop
workloads.
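For reference, the property in question lives in hdfs-site.xml on the datanodes (and in the client configuration HBase uses when writing HFiles). A minimal sketch of lowering it, where the 64 MB value is purely illustrative and not a recommendation:

```xml
<!-- hdfs-site.xml: HDFS block size for newly written files.
     The property is dfs.block.size in older Hadoop releases;
     it was renamed dfs.blocksize in Hadoop 2.x (the old name
     still works but is deprecated). -->
<property>
  <name>dfs.block.size</name>
  <!-- 64 MB expressed in bytes (64 * 1024 * 1024); illustrative only -->
  <value>67108864</value>
</property>
```

Note that changing this affects only files written after the change; existing HFiles keep the block size they were written with until they are rewritten by compaction.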