Cloud Data Integration Senior (Open for H1 candidates)

Location: McLean, VA


Responsibilities fall into two primary categories:

Development & Execution

Demonstrate effective and disciplined software development expertise 
Deliver solutions on time with a high bar of quality, and continuously improve software engineering practices
Work with Product Owners to understand the desired capability, define and prioritize work, determine deliverables, and manage workloads
Participate in application development, prototyping, modeling, and technical consulting
Actively facilitate issue resolution and tracking; identify mitigation steps and ensure risks and issues are mitigated or resolved in a timely manner

Technical Leadership

Lead efforts in data, code, and systems analysis to ensure accuracy and completeness of requirements
Lead the design process and evaluate alternative solutions with respect to usability, security, scalability, failover, and performance
Support the Development Tech Lead and/or Project Manager in managing projects and Agile sprints
Perform and lead thorough unit testing and integration testing, including test data creation for various test scenarios
Mentor and coach intermediate developers on both technical and soft skills

Basic Qualifications:

At least 4 years of full-time experience in software development including design, coding, testing, and support
At least 1 year of cloud infrastructure experience working with one or more of the following Amazon Web Services (AWS) services: EC2, Elastic Beanstalk, EMR, ECS, S3, SNS, SQS, CloudFormation, CloudWatch, Lambda
At least 3 years of experience in: Java, JEE, Spring Boot, Docker, Kubernetes
At least 1 year of experience in: Apigee, OAuth 2.0, and OpenID
At least 1 year of experience in any big data technologies
Hands-on experience with AWS architecture design, data management, and system-to-system data integration
At least 3 years of experience in RESTful API design and development 
At least 2 years of experience with Agile, Kanban or Scrum methodologies

Preferred Qualifications:

Master's Degree in Computer Science or a related field
Proficiency with CI/CD processes and the Agile/DevOps software development life cycle, including analysis, high-level design, coding, testing, implementation, performance tuning, bug fixing, and quality control
Experience building Data Lakes and Data Fabric in the AWS Cloud, migrating data applications to the cloud, and developing cloud-native data applications
Expertise in creating data models, optimizing data, and automating and restructuring data reporting systems in the financial services domain
2+ years of experience in big data technologies (Spark, Hadoop, HDFS)
2+ years of experience in NoSQL technologies (MongoDB, DynamoDB, Cassandra)
2+ years of experience in PySpark, AWS EMR, SQS, AWS MQ, and AWS Lambda
At least 1 year of experience in one or more of: Python, or Spark cluster programming using Java/Python/Scala
Experience leading complex data applications with large volumes of data

Keys to success in this role:

Strong working knowledge and technical competencies of AWS
Ability to communicate clearly, effectively, and persuasively with technology and business stakeholders
Ability to develop mutually beneficial relationships inside and outside of the division, work and collaborate effectively in a team environment
Sense of urgency in delivery, with the ability to apply a risk-based approach to prioritizing work
Strong work ethic; self-motivated, independent, and able to work with minimal direction
Motivated to learn new technologies and identify process improvements and efficiencies
Ability to adapt to change while continuing to deliver on assigned objectives
Ability to use data to help inform strategy and direction