Sep 19, 2018 · Welcome to the third blog in the series on how to configure TeamForge with Gerrit replication enabled. In the first blog post, I explained the problem of Git LFS data in the context of replication and proposed a solution using AWS S3 Bucket Cross-Region Replication (CRR); a configuration sketch follows the snippets below.

Mar 20, 2019 · Tooling, deployment scripts, and tech stack. This blog is currently built using Jekyll, a static site generator written in Ruby. Jekyll is only one part of the stack, which also involves various build scripts and optimization tools I use to (as of this writing) score 100 on PageSpeed Insights.

There are some good guides for working with MapReduce and DynamoDB. I followed this one the other day and got data exporting to S3 going reasonably painlessly. I think your best bet would be to create a Hive script that performs the backup task, save it in an S3 bucket, then use the AWS API for your language to programmatically spin up a new EMR job flow and complete the backup (see the EMR sketch below).

Apr 09, 2019 · What are SNS and SES? What are scale-up and scale-out of a DB instance based on CPU and memory utilization? https://aws.amazon.com/blogs/database...

```python
'''
Input:
    Spark DataFrame: df
    String: filename
    String: S3 path
    String: environment
    String: category
    String: division
Output:
    Human-readable CSV file
'''
import os
import shutil


def renamer(filename):
    '''Renames a Spark partition filename to just the part and the partition number, plus .csv.'''
    # Spark writes files such as part-00000-<uuid>-c000.csv;
    # keep only the "part-00000" prefix and re-attach the extension.
    if filename.startswith('part-'):
        return '-'.join(filename.split('-')[:2]) + '.csv'
    return filename
```
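The fragment above stops at the helper, so here is a minimal sketch of how the surrounding function might put the pieces together. Everything beyond `renamer` itself is an assumption: `write_human_readable_csv`, the `/tmp` staging directory, and the final upload step are placeholders standing in for whatever the original script did.

```python
def write_human_readable_csv(df, filename, s3_path, environment, category, division):
    '''Writes df as a single, human-readable CSV named `filename` (sketch).'''
    # Collapse to one partition so Spark emits a single part file.
    staging_dir = os.path.join('/tmp', environment, category, division)  # hypothetical staging dir
    df.coalesce(1).write.csv(staging_dir, header=True, mode='overwrite')

    # Normalize the Spark part filename, then give it the requested name.
    for part in os.listdir(staging_dir):
        if part.startswith('part-') and part.endswith('.csv'):
            shutil.move(os.path.join(staging_dir, part),
                        os.path.join(staging_dir, renamer(part)))

    # With coalesce(1) there is exactly one part file left to move.
    shutil.move(os.path.join(staging_dir, 'part-00000.csv'),
                os.path.join(staging_dir, filename))
    # Uploading the result to s3_path (e.g. with boto3) would follow here.
```

`coalesce(1)` trades parallelism for a single output file, which is what "human readable" implies here; for large DataFrames the original script may have accepted multiple part files instead.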
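Returning to the CRR solution from the first snippet: the blog's actual setup is not shown here, but a minimal boto3 sketch of enabling Cross-Region Replication between two versioned buckets looks like the following. The bucket names, rule ID, and IAM role ARN are all placeholders, not values from the original post.

```python
import boto3

s3 = boto3.client('s3')

# Versioning must be enabled on both buckets before CRR will work.
s3.put_bucket_versioning(
    Bucket='teamforge-lfs-primary',  # hypothetical source bucket
    VersioningConfiguration={'Status': 'Enabled'},
)

s3.put_bucket_replication(
    Bucket='teamforge-lfs-primary',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::123456789012:role/s3-crr-role',  # placeholder role
        'Rules': [{
            'ID': 'lfs-replication',
            'Status': 'Enabled',
            'Prefix': '',  # replicate every object in the bucket
            'Destination': {
                'Bucket': 'arn:aws:s3:::teamforge-lfs-replica',  # hypothetical target bucket in another region
            },
        }],
    },
)
```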
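And for the DynamoDB backup advice: a hedged boto3 sketch of programmatically spinning up a transient EMR cluster that runs a Hive script stored in S3. The script path, instance types, and release label are assumptions, and the default EMR roles must already exist in the account.

```python
import boto3

emr = boto3.client('emr', region_name='us-east-1')

# Launch a transient cluster that runs the Hive backup script and shuts down.
response = emr.run_job_flow(
    Name='dynamodb-backup',
    ReleaseLabel='emr-5.23.0',  # assumed release
    Applications=[{'Name': 'Hive'}],
    Instances={
        'MasterInstanceType': 'm4.large',
        'SlaveInstanceType': 'm4.large',
        'InstanceCount': 3,
        'KeepJobFlowAliveWhenNoSteps': False,  # terminate once the step finishes
    },
    Steps=[{
        'Name': 'run-hive-backup',
        'ActionOnFailure': 'TERMINATE_CLUSTER',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': ['hive-script', '--run-hive-script',
                     '--args', '-f', 's3://my-bucket/scripts/dynamodb_backup.q'],  # hypothetical script location
        },
    }],
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
)
print(response['JobFlowId'])
```

The same call could be scheduled (for example from a cron job or Lambda) to make the backup recurring, which is the "complete the backup" step the answer alludes to.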