We are looking for a savvy and experienced Data Engineer to join our data science team! Get ready to work with everything data-related: expanding and optimizing our data pipeline architecture and data flows, building and improving our data systems, and collecting data for cross-functional teams.
Come work in our Ramat Hahayal office and seize an exceptional opportunity to tackle multiple challenging projects while leveraging diverse data sources to come up with new and innovative ideas!
What's the job?
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet both functional & non-functional business requirements
- Identify, design, and implement improvements to internal processes: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Use SQL and big data cloud technologies to build the infrastructure required for optimal loading, extraction, and transformation of data from a wide variety of sources
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with data teams to assist with data-related technical issues and support their needs associated with data infrastructure
- Create data tools for analytics and help our team of data scientists build products that lead the industry
- Work with data and analytics experts to improve the functionality of our data systems
What do you need?
- 3+ years of experience as a Data Engineer
- M.Sc. or Ph.D. in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
- Advanced knowledge of SQL and query authoring
- Experience with relational databases; familiarity with other database types is a plus
- Experience building and optimizing big data pipelines, architectures, and data sets
- Experience with ML models (classification, clustering, decision tree-based methods)
- Experience building processes that support data transformations, data structures, dependencies, workload management, and metadata
- Experience working with and supporting cross-functional teams in a dynamic environment
- Google Cloud services experience: BigQuery, Cloud Pub/Sub, Cloud Dataflow, Cloud Dataprep, etc.
- AWS cloud services experience: EC2, EMR, RDS, Redshift, Kinesis, etc.
- Experience with object-oriented/functional programming languages: Python, Java, C++, Scala, etc.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with big data tools: Hive, Spark, Kafka, etc.
- Experience with stream processing systems: Storm, Spark Streaming, etc.