About the job
The skillset we're looking for (Data Engineering):
- AWS (S3, Lambda, RDS Postgres)
- Databricks
- PySpark
- SQL
- Data analysis; experience handling large datasets
- Data modelling (dimensional data modelling, data profiling, data analysis)
- Ability to translate functional requirements into ETL pipeline code (Databricks notebooks, full refresh vs. merge strategy, Change Data Capture)
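As a rough illustration of the full-refresh vs. merge distinction mentioned above: a full refresh replaces the target table with the incoming snapshot, while a merge (upsert) applies only new or changed rows. A minimal sketch in plain Python, keyed on a hypothetical `id` column; in a real Databricks pipeline these would be Delta tables driven by a `MERGE INTO` statement.

```python
# Illustrative sketch only: tables are modelled as dicts keyed by a
# primary key ("id"). Real pipelines would use Delta Lake MERGE INTO.

def full_refresh(target, source):
    """Replace the target entirely with the incoming snapshot."""
    return {row["id"]: row for row in source}

def merge(target, source):
    """Upsert: update rows whose key exists, insert new rows, keep the rest."""
    result = dict(target)
    for row in source:
        result[row["id"]] = row  # update if key present, insert otherwise
    return result

target = {1: {"id": 1, "name": "a"}, 2: {"id": 2, "name": "b"}}
incoming = [{"id": 2, "name": "b2"}, {"id": 3, "name": "c"}]

refreshed = full_refresh(target, incoming)  # only rows 2 and 3 survive
merged = merge(target, incoming)            # rows 1, 2 (updated), and 3
```

The practical difference: a full refresh is simpler but rewrites everything on each run, while a merge keeps untouched history and is the usual building block for Change Data Capture loads.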
Requirements
- AWS
- Databricks
- PySpark
- SQL
- Data Modelling