22 May
Title: Data Engineer
Overall Responsibilities:
- Develop sustainable, data-driven solutions with current next-generation data technologies to meet the needs of our organization and business customers.
- Apply domain-driven design practices to build data applications, including conceptual and logical data models.
- Build data consumption views and provision self-service reporting through demonstrated dimensional modeling skills.
- Measure data quality and improve data standards, helping application teams publish data in the correct format for easy downstream consumption.
- Develop big data applications using open-source frameworks such as Apache Spark, Scala, and Kafka on AWS, as well as cloud-based data warehousing services such as Snowflake.
- Build pipelines to provision features for machine learning models. Familiarity with data science model-building concepts, as well as with consuming data from a data lake.
Requirements:
- At least 8 years of experience with the Software Development Life Cycle (SDLC)
- At least 5 years of experience working on a big data platform
- At least 3 years of experience working with unstructured datasets
- At least 3 years of experience developing microservices in Python, Java, or Scala
- At least 1 year of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores
- At least 1 year of experience in cloud technologies: AWS, Docker, Ansible, or Terraform
- At least 1 year of Agile experience
- At least 1 year of experience with streaming data platforms such as Apache Kafka and Spark
- 5+ years of experience with data modeling and data engineering
- 3+ years of experience with microservices architecture and RESTful web service frameworks
- 3+ years of experience with JSON, Parquet, or Avro formats
- 2+ years of experience creating data quality dashboards and establishing data standards
- 2+ years of experience with relational, NoSQL, or graph databases
- 2+ years of experience working with AWS platforms, services, and component technologies, including S3, RDS, and Amazon EMR