Senior Apache Kafka Data Engineer
Accenture Southeast Asia
Posted: 2 weeks ago
City: Semarang, Central Java
Contract type: Full-time

As a Data Engineer, you will:
Work across workstreams to support data requirements including reports and dashboards
Analyze and perform data profiling to understand data patterns and discrepancies following Data Quality and Data Management processes
Understand and follow best practices to design and develop the E2E Data Pipeline: data transformation, ingestion, processing, and surfacing of data for large-scale applications
Develop data pipeline automation using the Azure technology stack, including Databricks and Data Factory
Understand business requirements to translate them into technical requirements that the system analysts and other technical team members can drive into the project design and delivery
Analyze source data and perform data ingestion in both batch and real-time patterns via various methods, e.g., file transfer, API, or data streaming using Kafka and Spark Streaming
Analyze and understand data processing and standardization requirements, develop ETL using Spark processing to transform data
Understand data/reports and dashboards requirements, develop data export, data API, or data visualization using Power BI, Tableau, or other visualization tools
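To illustrate the batch-ingestion and standardization duties above, here is a minimal, dependency-free sketch; in a real pipeline this logic would typically run inside a Spark or Databricks job, and all names here (`standardize`, `run_batch`, the field names) are hypothetical:

```python
import json

def standardize(record: dict) -> dict:
    """Apply simple data-quality rules: trim whitespace, coerce types."""
    return {
        "customer_id": str(record["id"]).strip(),
        "amount": float(record["amount"]),
    }

def run_batch(raw_lines):
    """Parse JSON lines, route malformed rows to a reject list, standardize the rest."""
    clean, rejected = [], []
    for line in raw_lines:
        try:
            clean.append(standardize(json.loads(line)))
        except (ValueError, KeyError):
            rejected.append(line)
    return clean, rejected

if __name__ == "__main__":
    raw = ['{"id": " 42 ", "amount": "10.5"}', "not json"]
    good, bad = run_batch(raw)
    print(good)      # [{'customer_id': '42', 'amount': 10.5}]
    print(len(bad))  # 1
```

Separating the per-record rule (`standardize`) from the batch driver (`run_batch`) mirrors how such transforms are usually expressed as mapped functions in Spark.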
We are looking for experience and qualifications in the following:
Bachelor’s degree in Computer Science, Computer Engineering, IT, or related fields
Minimum 2 years’ experience in Data Engineering fields (new graduates are also welcome for some of our job openings)
Data Engineering skills: Python, SQL, Spark, cloud architecture, data and solution architecture, API, Databricks, Azure
Data Visualization skills: Power BI (or other visualization tools), DAX programming, API, data modeling, SQL, storytelling, and wireframe design
Business Analyst skills: business knowledge, data profiling, basic data model design, data analysis, requirement analysis, SQL programming
Basic knowledge of data lake/data warehouse/big data tools, Apache Spark, RDBMS and NoSQL, Knowledge Graph
Experience working in a client-facing/consulting environment is a plus
Team player with analytical and problem-solving skills
Good communication skills in English