About the Project
We are working on a large, international U.S.-based project and are looking for a Data Engineer to strengthen our team. You will be responsible for the backend processes that keep applications such as Zoom, Webex, and other tools running smoothly in the background.
This role offers a great opportunity to gain hands-on international experience, collaborate with a distributed team, and work with modern technologies in real business scenarios.
Responsibilities
Design, develop, and maintain robust ETL pipelines using AWS Glue, Airflow, and Python
Optimize and monitor large-scale data workflows with Athena, Hudi, and S3
Automate data extraction, transformation, and loading processes
Write and tune complex SQL queries for reporting and analytics
Develop serverless solutions using AWS Lambda, SNS, and SQS
Collaborate with cross-functional teams to support and improve existing data infrastructure
Ensure data quality, scalability, and performance across different data layers
Implement CI/CD processes using GitLab for code versioning and deployment
Requirements
3–6 years of experience as a Data Engineer or in a similar role
Strong programming skills in Python and SQL
Hands-on experience with AWS services — including Glue, S3, Athena, Lambda, SNS, SQS
Experience with Apache Airflow for orchestration and scheduling
Familiarity with Hudi and data lake optimization concepts
Experience with GitLab for code management and pipeline automation
Understanding of ETL/ELT concepts and distributed data systems
English level: B2 or higher
Strong analytical mindset and attention to detail
We Offer
Remote work with a flexible schedule
Opportunity to work on an international U.S.-based project
Competitive pay: 1000+ USD per task/project
Exposure to a modern data stack (AWS, Airflow, Glue, Athena, Hudi)
Supportive team and opportunities for career growth
📩 If you want to boost your data engineering skills and gain international project experience, apply now — we’d love to meet you!