Big Data Architect

Atlanta, Georgia

Post Date: 10/16/2017 Job ID: 6793 Category: Other Area(s)

Our client offers the first SaaS Application Management and Security Platform, enabling IT to centralize, orchestrate, and operationalize day-to-day administration and control of SaaS applications. Every day, thousands of customers rely on the platform to centralize data and controls, surface operational intelligence, enforce complex security policies, and delegate custom administrator privileges across SaaS applications. This is a great opportunity for professional and personal growth at a large scale, working with 40+ pieces of technology and 6-8 different databases.

Database Architect on the Platform Services team

Responsibilities
  • Primary responsibility for the soundness of database design against standards for development, high availability, security, and performance
  • Ownership of setting up processes and frameworks to codify best practices (data migration, maintenance, and modeling)
  • Provide database and query design help to development teams
  • Assist teams in selecting the right database for their use cases
  • Train and educate team members on performance best practices
  • Understand the user story and the problem it is trying to solve; make sure the solution uses best practices and frameworks and meets performance and security standards
  • Envision, define, and deliver end-to-end integrated data solutions at enterprise-wide scale

Requirements
  • 4+ years of experience with database design and query optimization
  • 4+ years of experience managing a high-performance, high-availability RDBMS
  • Deep MySQL experience (e.g., master/secondary clusters, backup/restore, disaster recovery, high QPS)
  • 2+ years of experience with Java programming (JDBC best practices, connection pooling, and experience with Hibernate/iBatis); see the connection-pooling sketch after this list
  • 2+ years of experience with automating database cluster setups, migrations, backups, scaling, and recoveries for our cloud-based Big Data clusters (Percona MySQL, Elasticsearch, Redis, and Kafka)
  • 1+ years of Kafka experience - high availability, upstream checkpointing via compaction
  • Experience with multiple datastores and distributed databases; understanding of the pros and cons for given use cases and the ability to make database recommendations based on team needs
  • Experience with enterprise monitoring of data services (master election, replication status, connection drops, slow queries, memory, CPU, and queries per second); see the replication-status sketch after this list
  • Colleagues describe you as self-driven, fast-learning, and hardworking
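
As a concrete illustration of the JDBC bullet above, the sketch below shows one common approach to connection pooling against MySQL. HikariCP is used purely as an example pool; the JDBC URL, credentials, pool sizes, and the users table are illustrative placeholders rather than details from the posting.

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Minimal sketch of JDBC connection pooling for MySQL using HikariCP.
    // Host, credentials, schema, and pool sizes are illustrative placeholders.
    public class PooledMySqlExample {

        private static HikariDataSource buildPool() {
            HikariConfig config = new HikariConfig();
            // Placeholder URL; a real deployment would point at the primary or a read replica.
            config.setJdbcUrl("jdbc:mysql://db.example.internal:3306/appdb");
            config.setUsername("app_user");
            config.setPassword("change-me");
            config.setMaximumPoolSize(20);      // size for the workload; these numbers are illustrative
            config.setMinimumIdle(5);
            config.setConnectionTimeout(3_000); // fail fast when the pool is saturated
            return new HikariDataSource(config);
        }

        public static void main(String[] args) throws SQLException {
            try (HikariDataSource pool = buildPool();
                 Connection conn = pool.getConnection();
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT id, name FROM users WHERE id = ?")) {
                stmt.setLong(1, 42L);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%d -> %s%n", rs.getLong("id"), rs.getString("name"));
                    }
                }
            } // try-with-resources returns the connection to the pool, then closes the pool
        }
    }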
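
For the monitoring bullet, here is a minimal sketch of polling replication status over JDBC. It assumes a MySQL 5.x-era replica where SHOW SLAVE STATUS applies (newer servers use SHOW REPLICA STATUS); the host, credentials, and lag threshold are illustrative placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Minimal sketch: poll a MySQL 5.x replica for replication health over JDBC.
    public class ReplicationStatusCheck {

        public static void main(String[] args) throws SQLException {
            String url = "jdbc:mysql://replica.example.internal:3306/"; // placeholder host
            try (Connection conn = DriverManager.getConnection(url, "monitor_user", "change-me");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SHOW SLAVE STATUS")) {
                if (!rs.next()) {
                    System.out.println("Not configured as a replica");
                    return;
                }
                String ioRunning = rs.getString("Slave_IO_Running");
                String sqlRunning = rs.getString("Slave_SQL_Running");
                long lagSeconds = rs.getLong("Seconds_Behind_Master");
                boolean lagKnown = !rs.wasNull();       // NULL when the replica threads are not running
                boolean healthy = "Yes".equals(ioRunning)
                        && "Yes".equals(sqlRunning)
                        && lagKnown
                        && lagSeconds < 30;             // illustrative alert threshold
                System.out.printf("io=%s sql=%s lag=%ds healthy=%b%n",
                        ioRunning, sqlRunning, lagSeconds, healthy);
            }
        }
    }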

Plus Skills (in order of priority)
  • Elasticsearch experience - multi-cluster, multi-terabyte indices
  • Hadoop/HBase setup and maintenance
  • Interest in graph databases (ArangoDB)
  • Interest in or awareness of automating server setup with Chef/Ansible on AWS, GCP, or Azure
  • Experience or exposure to Vitess
  • Cassandra experience - setup, config, keyspace and table design
  • Container orchestration tools (e.g., Mesos, Kubernetes, Nomad)
  • Experience with Spark/Flink/Storm
