Job Opening for Databricks Developer (Job Code RT 1555)
Apply before 20-01-2026
We are seeking an experienced Databricks Engineer / Data Engineer with strong expertise in
Databricks, Python, SQL, and PySpark. The candidate will be responsible for designing,
developing, and maintaining scalable data pipelines and analytics solutions on the
Databricks platform.
Roles & Responsibilities
1. Design, build, and maintain scalable data pipelines using Databricks and PySpark
2. Develop efficient data transformations using Python and SQL
3. Work with large volumes of structured and semi-structured data
4. Optimize data processing jobs for performance and cost
5. Ensure data accuracy, quality, and reliability
6. Collaborate with cross-functional teams to understand data requirements
7. Follow best practices for data engineering and cloud-based data solutions
Required Skills & Qualifications
1. 2–6 years of hands-on experience in data engineering
2. Databricks Certification (mandatory)
3. Strong experience with Databricks
4. Proficiency in Python, SQL, and PySpark
5. Experience in building and optimizing ETL/ELT pipelines
6. Good understanding of data engineering best practices
7. Strong problem-solving and analytical skills
Preferred / Good to Have
1. Experience with cloud platforms such as AWS, Azure, or GCP
2. Knowledge of Delta Lake and data optimization techniques
3. Familiarity with data warehousing concepts
4. Experience with CI/CD pipelines for data workflows