
Periodic Updates to Broadcast Variables

I am building a Spark Streaming application that receives its primary streaming data from Kinesis, joins that data with data from other sources (e.g., JSON files in S3), and stores the final output in S3 as Parquet files.

As the external data (i.e., JSON from S3) is not too large, I want to fully replicate the data on each node of the cluster. From what I understand, broadcast variables are the best way to do this. However, the challenge here is that the external data is updated daily, and I want to do the following:
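To make the setup concrete, here is a minimal sketch of the kind of map-side lookup I have in mind: load the reference JSON on the driver, broadcast it, and let executors read it without a shuffle. The bucket, path, and schema names are hypothetical:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("enrich").getOrCreate()

    // Read the reference JSON from S3 (hypothetical bucket/path/schema)
    // and collect it to the driver as a plain Map.
    val refMap: Map[String, String] = spark.read
      .json("s3a://my-bucket/ref/latest.json")
      .rdd
      .map(row => row.getAs[String]("key") -> row.getAs[String]("value"))
      .collectAsMap()
      .toMap

    // Ship one read-only copy of the map to every executor.
    val refBroadcast = spark.sparkContext.broadcast(refMap)

    // Executors can then enrich streaming records with a map-side lookup:
    //   record => (record, refBroadcast.value.get(record.key))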

  1. Periodically check from the driver whether the new daily file is available. This should be doable, because part of the driver code executes every batch_interval seconds (to the best of my understanding). So, every batch_interval seconds I can check whether the date has changed (assuming both the cluster and the file-generation job operate in the UTC time zone), and if it has, start looking for the new file in S3. From looking at the underlying Spark code, there appears to be a re-entry point in the driver code, but I am not sure how that re-entry point is determined. I want to make sure that the driver's periodic check for the date and the file sits within that re-entry point, so that it actually executes every batch_interval seconds.
  2. If the new daily file is available, unpersist and rebroadcast the variable. According to the documentation, broadcast variables are immutable and cannot be rebroadcast. However, someone suggested that I can unpersist the existing broadcast variable and then broadcast a new one (see the sketch after this list). Has anyone tried this approach, and did it work?
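To make step 2 concrete, here is a minimal sketch of the unpersist-and-rebroadcast pattern I have in mind: a driver-side holder consulted inside foreachRDD (which runs on the driver once per batch) that drops the stale broadcast when the UTC date rolls over and broadcasts the fresh file. RefDataHolder, loadForDay, the S3 path layout, and the Map[String, String] payload are all hypothetical:

    import java.time.{LocalDate, ZoneOffset}
    import org.apache.spark.broadcast.Broadcast
    import org.apache.spark.sql.SparkSession

    // Driver-side holder for the current broadcast variable.
    object RefDataHolder {
      private var loadedDate: LocalDate = _
      private var current: Broadcast[Map[String, String]] = _

      // Called from inside foreachRDD, i.e., on the driver, once per batch.
      def get(spark: SparkSession): Broadcast[Map[String, String]] = synchronized {
        val today = LocalDate.now(ZoneOffset.UTC)
        if (current == null || today.isAfter(loadedDate)) {
          // Drop the stale copy from the executors before rebroadcasting.
          if (current != null) current.unpersist(blocking = false)
          current = spark.sparkContext.broadcast(loadForDay(spark, today))
          loadedDate = today
        }
        current
      }

      // Hypothetical daily file layout: one JSON file per UTC date.
      private def loadForDay(spark: SparkSession, day: LocalDate): Map[String, String] =
        spark.read.json(s"s3a://my-bucket/ref/$day.json").rdd
          .map(row => row.getAs[String]("key") -> row.getAs[String]("value"))
          .collectAsMap()
          .toMap
    }

The stream's foreachRDD body would then fetch the (possibly refreshed) broadcast at the top of every batch, along these lines (kinesisStream is a placeholder for the DStream):

    kinesisStream.foreachRDD { rdd =>
      val ref = RefDataHolder.get(spark)            // driver side, once per batch
      val enriched = rdd.map(rec => (rec, ref.value.get(rec)))
      // ... write `enriched` out to S3 as Parquet ...
    }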
