Sr. Data Engineer (Spark, Kafka, AWS)

Location: McLean, VA

Sr. Data Engineer (Spark, Kafka, AWS) - Temp to Perm

Job Description
•    Responsible for delivery in the areas of big data engineering with Hadoop, Python, and Spark (PySpark), with a high-level understanding of machine learning
•    Develop scalable and reliable data solutions to move data across systems from multiple sources, in real time (Kafka) as well as in batch mode (Sqoop)
•    Construct data staging layers and fast real-time systems to feed BI applications and machine learning algorithms
•    Utilize expertise in technologies and tools such as Python, Hadoop, Spark, and AWS, as well as other cutting-edge Big Data tools and applications
•    Quickly learn new tools and paradigms to deploy cutting-edge solutions
•    Develop both deployment architecture and scripts for automated system deployment in AWS
•    Create large-scale deployments using newly researched methodologies
•    Work in an Agile environment

Basic Qualifications
•    Bachelor’s degree in Mathematics, Statistics, or Computer Science
•    Solid experience with the Hadoop ecosystem, including Hive, HDFS, Kafka, and PySpark
•    At least 3 years’ experience with Python (NumPy, SciPy, scikit-learn, pandas, PySpark) or other open-source programming languages for large-scale data analysis
•    3+ years of experience working with AWS
•    8+ years of software development experience

Preferred Qualifications
•    Master’s degree in Computer Science
•    2+ years of experience working with financial data
•    Knowledge of modern statistical learning methods
•    Familiarity with one or more streaming technologies (e.g., NiFi)
•    Experience with NoSQL databases
•    Strong communication skills, with the ability to work both independently and in project teams