Hadoop/AWS/Java L3 Production Support/Developer

Location: McLean, VA


Position Overview:

Do you want to be part of the exciting journey of Cloud Transformation? Are you a seasoned data integration developer or support member who thrives in an innovative and collaborative team? An advanced coder or support member who enjoys a rapid, dynamic environment? If you have proven data integration experience with AWS Cloud technologies and would like to help us build an Enterprise Data Lake and Data Fabric in the Cloud, apply to join Freddie Mac’s Emerging Data Technologies team.

Our Impact:

As part of the Platform & Data Solutions organization, we are an ecosystem that enables optimal, real-time loan evaluation and purchase decisions across Single Family by providing the right data at the right time to downstream applications. We are also leading the change to build data solutions on our Cloud Native Data Lake platform, enabling the enterprise to consolidate, centralize, and consume shared data for various data analytics and reporting needs.

Your Impact:

In this role, you will be part of the team paving the way for the future, building and supporting solutions on state-of-the-art tools and technologies in alignment with business objectives. You will research, learn, support, and implement innovative, creative, and effective solutions that serve the real-time back-end needs of downstream applications across the enterprise.

Qualifications:

At least 5 years of full-time experience in software development including design, coding, testing, and support
At least 2 years of Cloud infrastructure experience working with one or more of the following:
Amazon Web Services (AWS) Cloud services: EC2, Elastic Beanstalk, EMR, ECS, S3, SNS, SQS, CloudFormation, CloudWatch, Lambda
At least 3 years of experience in: Java, JEE, Spring Boot, Docker, Kubernetes
At least 1 year of experience in: Apigee, OAuth 2.0 and OpenID
At least 2 years of experience in RESTful API design and development
At least 2 years of experience with Agile, Kanban or Scrum methodologies
2+ years of experience in big data technologies such as Hadoop and HBase with MapReduce, Hive, Sqoop, Spark, PySpark
2+ years of experience in NoSQL technologies (MongoDB, DynamoDB, Cassandra)
At least 1 year of experience in one or more of the following: Python; Spark cluster programming using Java, Python, or Scala
1+ years of experience in Snowflake or similar
1+ years of experience building ETL processes using Attunity, Kafka, Talend, or similar preferred
Hands-on experience with AWS architecture design, data management, and system-to-system data integration
Bachelor’s degree in Computer Science or IT; advanced degree preferred


Keys to Success in this Role:

Passion for and pride in owning your work
Technically strong with hands-on experience
Ability to understand and communicate well, with a passion to get the job done
Open-minded and ready to embrace new technologies and culture
Ready to take on any kind of work to help drive things forward
Innovative in providing solutions