This role is for one of Weekday's clients
Salary range: Rs 8,00,000 - Rs 20,00,000 (i.e., INR 8-20 LPA)
Min Experience: 5 years
Location: Pune, Ahmedabad, Indore, Hyderabad
Job Type: Full-time
Job Title: AWS DevOps Engineer / Data Engineer
Experience: 5 to 12 Years
Requirements
Key Responsibilities:
- Design, implement, and manage scalable cloud infrastructure using AWS, Terraform, and Docker.
- Administer and optimize Kubernetes clusters on Amazon EKS to ensure high availability and performance.
- Develop and maintain CI/CD pipelines using tools such as Jenkins, GitHub Actions, or Bitbucket Pipelines.
- Collaborate with engineering and product teams via GitHub, Jira, and Confluence to drive delivery and operational efficiency.
- Apply strong security best practices across infrastructure, applications, and deployment pipelines.
- Monitor system performance and availability using Datadog, and continuously optimize for scalability and reliability.
- Create and maintain automation scripts using Shell and Python to streamline infrastructure and data workflows (an illustrative sketch follows this list).
- Utilize Infrastructure as Code (IaC) with Terraform and manage configurations using Ansible.
- Work with a broad range of AWS services, including EC2, S3, RDS, Lambda, CloudWatch, Config, Control Tower, DynamoDB, and Glue.
- Design and manage secure network architectures (e.g., VPCs, NAT gateways, Security Groups, Firewalls, Routing, and ACLs).
- Implement disaster recovery plans, monitoring strategies, and auto-scaling to support production-grade workloads.
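For illustration only, the Python automation referenced above might resemble the following minimal boto3 sketch, which flags EBS snapshots older than a retention window. The retention period and the commented-out cleanup call are hypothetical examples, not tooling prescribed for this role.

```python
# Illustrative sketch only: flags self-owned EBS snapshots older than a
# hypothetical retention window. Assumes AWS credentials are configured.
import datetime

import boto3

RETENTION_DAYS = 30  # hypothetical retention window


def stale_snapshots(ec2_client, retention_days=RETENTION_DAYS):
    """Yield IDs of self-owned EBS snapshots older than the retention window."""
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(
        days=retention_days
    )
    paginator = ec2_client.get_paginator("describe_snapshots")
    for page in paginator.paginate(OwnerIds=["self"]):
        for snapshot in page["Snapshots"]:
            if snapshot["StartTime"] < cutoff:
                yield snapshot["SnapshotId"]


if __name__ == "__main__":
    ec2 = boto3.client("ec2")
    for snapshot_id in stale_snapshots(ec2):
        print(f"Stale snapshot: {snapshot_id}")
        # ec2.delete_snapshot(SnapshotId=snapshot_id)  # destructive; left commented
```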
Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- 5+ years of experience in DevOps or Cloud Infrastructure roles, with a strong focus on automation and scalability.
- Strong hands-on experience with Linux, shell scripting, and Python for infrastructure and data operations.
- Deep expertise in AWS services, along with Terraform-based infrastructure provisioning.
- Proven track record managing and scaling Kubernetes (EKS) clusters.
- Solid understanding of CI/CD pipeline architecture and deployment automation.
- Familiarity with Python linting tools and coding best practices.
- Proficient in using Docker for containerization in production environments.
- Experience using Datadog for logging, metrics collection, and monitoring.
- Strong collaborative skills and experience using agile project tools such as GitHub, Jira, and Confluence.
- Knowledge of AWS Glue for ETL workflows is a plus (a minimal invocation sketch follows).
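As a point of reference for the Glue item above, starting and polling a Glue ETL job from Python is a short boto3 script. The job name below is a hypothetical placeholder, not a job defined by the client.

```python
# Illustrative sketch only: starts an AWS Glue ETL job and polls its state.
# "example-etl-job" is a hypothetical job name, not one tied to this role.
import time

import boto3

glue = boto3.client("glue")
run_id = glue.start_job_run(JobName="example-etl-job")["JobRunId"]

while True:
    run = glue.get_job_run(JobName="example-etl-job", RunId=run_id)
    state = run["JobRun"]["JobRunState"]
    print(f"Job run {run_id}: {state}")
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)  # poll every 30 seconds
```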
Key Skills:
AWS, Kubernetes, EKS, CI/CD, Datadog, EC2, S3, AWS Lambda, AWS Glue, Terraform, Docker, Ansible, Python, Shell Scripting