Principal Data Ops Cloud Engineer in Charlotte, North Carolina
Posted 09/29/21


  • Top 25 U.S. digital financial services company committed to developing award-winning technology and services.
  • Named one of the top three fastest-growing banking brands in the U.S. in 2020.
  • Offers a full suite of products, including mortgage lending; personal lending; a variety of deposit and other banking products (savings, money-market, and checking accounts, certificates of deposit (CDs), and individual retirement accounts (IRAs)); self-directed and investment-advisory services; and capital for equity sponsors and middle-market companies.



  • Fast-paced, highly collaborative, team-oriented environment
  • Make an immediate impact in this high visibility role
  • Base salary of $145k, an 11% bonus, and an excellent benefits package
  • Top-notch leadership committed to developing people



  • Charlotte, NC: 100% remote for now; the role will move on-site in Charlotte, NC, when staff transition back into the office after October



The Principal Data Ops Cloud Engineer serves as an industry-leading technical expert in product analysis, business analysis, and requirements development. This individual is a strategic leader who captures Sustain requirements by interacting with customers, understanding Development Squad and Business Partner needs, and providing consultation to ensure system and business requirements are met and documented.

  • Develop cloud-centric pipelines using Infrastructure-as-Code components
  • Implement open-source, vendor, and cloud-native pipelines via a GitOps model
  • Support Data Science and Analytics teams using Python, R, and Scala code
  • Integrate Data Governance tools using APIs and DevOps practices
  • Develop auto-scaling, self-healing, and self-service offerings in the Analytics space on AWS
  • Collaborate and partner with counterparts from Security, Enterprise Architecture, and CIO application teams to enable developer agility while ensuring appropriate controls are in place.
  • Help facilitate a collaborative development approach that encourages and accepts contributions and emphasizes transparency.
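To give a flavor of the auto-scaling work mentioned above, here is a minimal sketch of a scale-out/scale-in decision rule of the kind an Analytics platform might apply. This is purely illustrative: the function name, thresholds, and metric are hypothetical and not taken from the posting; a real implementation would drive a managed scaling API rather than compute a number locally.

```python
def target_node_count(pending_tasks: int,
                      tasks_per_node: int = 8,
                      min_nodes: int = 2,
                      max_nodes: int = 20) -> int:
    """Toy auto-scaling rule (illustrative only).

    Returns the desired cluster size for a given task backlog:
    scale out when the backlog exceeds current capacity, scale in
    when the cluster is idle, clamped to [min_nodes, max_nodes].
    """
    needed = -(-pending_tasks // tasks_per_node)  # ceiling division
    return max(min_nodes, min(max_nodes, needed))


if __name__ == "__main__":
    print(target_node_count(100))  # 100-task backlog -> 13 nodes
    print(target_node_count(0))    # idle -> scale in to the 2-node floor
```

In practice a rule like this would sit behind a monitoring loop (e.g., fed by the cluster's pending-container metric) and be clamped by cost and quota policies.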



  • 5+ years of cloud experience and the ability to articulate the benefits of the cloud using concrete examples of past work.
  • 5+ years of Hadoop administration, DevOps, or developer experience, with emphasis on Spark, Hive, NiFi, Impala, and Kafka.
  • 5+ years of experience with the AWS big-data stack, including EMR, Lambda, SageMaker, Glue, Kinesis, SMS, and related technologies.
  • Demonstrated experience supporting Data Science teams, using technologies such as Jupyter Notebooks, Anaconda, DataRobot, and other platforms used to run ML models.
  • 5+ years of Python, R, Scala, Go, or another programming language used to run Spark workloads.
  • Demonstrated experience optimizing cloud-centric workloads using monitoring solutions such as ELK, Prometheus, Grafana, Splunk, and other APM tools.
  • Familiarity with running big-data pipelines in an automated manner using Jenkins, Terraform, or similar cloud-based tools.


  • Employee Type: Direct Hire
  • Location: Charlotte, North Carolina
  • Category: Information Technology
  • Date Posted: 09/29/21
Apply Today!
