Adjusting HDFS Replication Factor by Age or "Hotness"?

I'm migrating from a classic filesystem plus the Grid Engine scheduler to a more Hadoop-ish world: the first step is HDFS only, with a new scheduler to follow later.

The access pattern where I work is that new data (we generate ~150 GB/day, split into ~100 files) is by far the most heavily accessed, and older data is accessed less and less unless it's somehow interesting, in which case a few of the daily ~100 files keep getting hit.

Today we keep the latest data on ALL nodes (we have 50) and delete older data, with one huge storage server holding the full history. This is suboptimal: we can only keep a few weeks of data online, and as I said, a few old files are sometimes "hot". Since the storage server holds the only copy of those, working on them slows us down dramatically. That's the main motivation to move to HDFS.

What I want is for HDFS to keep LOTS of replicas of new files (they are automatically hot), but to reduce that number automatically based on age, or better yet, hotness. That way we could keep far more data while spending replicas only on the relevant part: of the 150 GB we generate daily, only about 5 GB is read over and over again.
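
If nothing exists off the shelf, I imagine a nightly job could approximate the age-based part with the standard Hadoop FileSystem API. Here's a rough sketch of what I mean; the /data/daily path and the age/replication tiers are made-up values for illustration, not a recommendation:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Nightly job: lower the replication factor of files as they age.
    // The directory and the tier numbers below are illustrative only.
    public class ReplicationDecay {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            long now = System.currentTimeMillis();
            long dayMs = 24L * 60 * 60 * 1000;

            // Hypothetical layout: all daily files land under /data/daily.
            for (FileStatus st : fs.listStatus(new Path("/data/daily"))) {
                if (!st.isFile()) continue;
                long ageDays = (now - st.getModificationTime()) / dayMs;

                // Illustrative tiers: very hot for a week, then taper off.
                short target = ageDays < 7 ? (short) 10
                             : ageDays < 30 ? (short) 3
                             : (short) 2;

                // setReplication only changes metadata; the NameNode
                // schedules the actual block copies/deletions afterwards.
                if (st.getReplication() != target) {
                    fs.setReplication(st.getPath(), target);
                }
            }
            fs.close();
        }
    }

The same thing can be done from the shell with hdfs dfs -setrep. Age is easy to derive from the modification time; true hotness is harder, since HDFS's per-file access time (FileStatus.getAccessTime()) is only hour-granular by default (dfs.namenode.accesstime.precision), so I suspect real read counts would have to come from NameNode audit logs or an external counter.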

Is there any solution that does that?

