Data Engineer

Data Science Group

Data Science Group (DSG) is looking for a talented Data Engineer with a desire to transition into the machine learning field to join our team!

Together we'll enable and collaborate with our data science teams to deliver business value using machine learning.

We are an independent global AI Center of Excellence, offering rapid application development platforms for end-to-end AI-based solutions. Furthermore, DSG provides solutions for AI governance, AI versioning, AI monitoring, and AI performance management in research and production environments.

As an ML enabler, you will work directly with the CTO to design and implement the functionalities required to deliver excellent ML solutions.

We'll be doing so in different environments and different infrastructures, generalizing solutions but also tailoring specifics when necessary for each case.

Our scope includes data handling and serving, MLOps and execution management, productionizing, model serving, model monitoring, and more.
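
To make the model-serving part of that scope concrete, here is a minimal sketch of a serving endpoint. FastAPI and the stand-in model are assumptions for illustration only; the posting does not name a framework.

```python
# Minimal model-serving sketch. FastAPI and the dummy model are assumptions;
# the posting does not prescribe a framework or a model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    score: float

def dummy_model(features: list[float]) -> float:
    # Stand-in for a real trained model artifact loaded at startup.
    return sum(features) / max(len(features), 1)

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    return PredictResponse(score=dummy_model(req.features))

# Run with: uvicorn serving_sketch:app --reload
```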

At DSG you will work with multiple teams of excellent data scientists operating inside different organizations, using varying technologies to help build machine-learning solutions with real-world impact.

Who are you?

You are a technological enabler, a people person who can work and adapt to changing environments with different stakeholders and varying technologies.

You understand the lifecycle of a machine learning product and have a high-level understanding of machine learning concepts like models, datasets, probabilities, and similar. A minimum of 2 years of relevant industry experience is a must.

Tech requirements:

  • Fluency in Python, writing production-grade code
  • Data engineering skills, working with SQL, Spark, and cloud-based data services
  • Docker
  • Experience working in cloud environments
  • Developing API services
  • Designing and implementing microservices-based solutions
  • Intermediate or better data handling skills

Extras:

  • Experience with job execution orchestration
  • Experience with machine learning experiment managers

Data Engineer

LinearB

LinearB is looking for a Data Engineer who is passionate about data and has strong experience building data pipelines and products. You will work with our data team at the CTO office to help mine our datasets and extract insights powering LinearB’s storytelling, product strategy decisions, and customer success flows. You will build pipelines that make our data accessible for research, data science, and ad-hoc analytics, as well as develop the data services that power ML-based features in our product. You will join our CTO Office team based in our headquarters in Tel Aviv, Israel, working in a hybrid model.

You will have the opportunity to take part in a disruptive product that leverages cutting-edge technologies while working in an agile development lifecycle. You will join a highly technical and savvy group of doers who are building tomorrow’s technologies. You are expected to be a subject matter expert and a strong team player with innovative thinking and the ability to ramp up on new technologies, patterns, and frameworks.

What You'll Do:

  • Build data processing pipelines to enable efficient access to and utilization of our data by various stakeholders, including data scientists, analysts, product, customer success, and marketing (a toy sketch follows this list)
  • Build and own the infrastructure that enables ML-based features in LinearB’s product, including APIs, inference services, model monitoring, etc.
  • Build data dashboarding capabilities enabling a broad range of stakeholders to view trends, run ad-hoc queries and visualizations, and extract intelligence
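
As a hedged illustration of the first bullet, here is a toy batch transformation in Pandas (which appears in the qualifications below); the table and column names are invented.

```python
# Toy pipeline step: aggregate raw product events into a per-team daily
# activity table. All column names here are invented for illustration.
import pandas as pd

def build_daily_activity(events: pd.DataFrame) -> pd.DataFrame:
    events = events.copy()
    events["day"] = pd.to_datetime(events["timestamp"]).dt.date
    return (
        events.groupby(["team_id", "day"])
              .agg(event_count=("event_id", "count"),
                   active_users=("user_id", "nunique"))
              .reset_index()
    )
```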

Qualifications:

  • 3+ years of proven experience in backend development with Python, Scala, Go, or a similar language
  • 2+ years of experience with one or more of the following: Pandas, Spark, AWS Sagemaker, MLFlow
  • Proven experience developing and deploying production workloads on AWS
  • Proficient in SQL, with at least one year of experience writing and maintaining SQL-based analytics
  • Proven experience designing, creating and maintaining data visualizations and analytics dashboards
  • You are a curious, independent and rigorous individual with a “Can Do” attitude and ability to prioritize and solve problems
  • You are capable of tackling ambiguous problems and challenges, and you love learning new skills
  • Last but not least, you have a good sense of humor!

 Advantages:

  • Experience as a data engineer developing pipelines for data teams
  • DevOps experience, in particular terraform and serverless
  • Experience developing and deploying REST and GraphQL APIs
  • Experience with customer data platforms and APIs: Segment.io, SFDC, Amplitude, Hubspot, ChurnZero, Zendesk etc.
  • BSc degree in STEM or other related subjects

Machine Learning Engineer

Scanovate

Scanovate is a global leader in the computer vision industry, with more than 85 million end-users worldwide. Scanovate is building its own proprietary real-time video OCR and Facial Biometrics engine, which can extract information and verify the identity of an end-user.

We are looking for machine learning engineers to join our R&D department: people who can develop amazing technologies, who wish to join our growing family, and who want to be part of one of the most interesting companies in the global Fintech scene today.

We are seeking amazing people who:

  • Are passionate about science, math, and innovation
  • Have advanced problem resolution skills
  • Have the ability to learn and work in a fast-paced environment
  • Have a collaborative style and can work in a team

Responsibilities:

  • Collect and build databases for Machine Learning
  • Develop, implement and integrate new machine learning algorithms
  • Analyze and improve existing algorithms
  • Research new methods for future product development

Requirements:

  • At least two years of experience in the same field
  • Background in machine learning theory
  • Understanding machine learning approach to real-world problems
  • Background in mathematical statistics, probability theory, and applied math
  • Experience in Python

Scanovate is located in Ramat-Gan, a 5-minute walk from the train station.

Data Engineer

Imagry

We are looking for a data engineer to join our Israel office in Haifa.

You will work on developing and maintaining our complex data pipelines that feed our deep-learning-based self-driving system. The work combines data science with computer vision, algorithms, and software development.

Responsibilities:

1)    Managing multiple large datasets, preparing data for network training.

2)    Dataset cleaning and quality control.

3)    Dataset filtering and balancing (see the sketch after this list).

4)    Processing and manipulation of visual data.

5)    Taking an active part in the development and upkeep of our annotation process, including software development.

6)    Measuring and devising performance metrics.
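
A minimal sketch of item 3, dataset balancing, using Pandas (which the requirements below mention); the label column name is an assumption for illustration.

```python
# Balance a labeled dataset by downsampling every class to the size of the
# rarest class. "label" is an assumed column name, for illustration only.
import pandas as pd

def balance_by_downsampling(df: pd.DataFrame, label_col: str = "label",
                            seed: int = 0) -> pd.DataFrame:
    min_count = df[label_col].value_counts().min()
    return (
        df.groupby(label_col, group_keys=False)
          .apply(lambda g: g.sample(n=min_count, random_state=seed))
          .reset_index(drop=True)
    )
```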

Requirements:

1)    B.Sc. in Computer Science, Applied Mathematics, Electrical Engineering, or similar

2)    1 year of experience using Python for numerical and data analysis (Pandas, NumPy, etc.)

3)    1 year of experience with basic computer vision methods.

4)    2 years of programming experience (Python/C++ or another high-level language).

5)    Probability theory, linear algebra, geometry, and trigonometry at a B.Sc. level.

Skills that are considered an advantage:

1)    Familiarity with Computer Vision.

2)    Familiarity with the Autonomous Vehicle field.

3)    Experience with git.

4)    A publicly viewable body of accomplished projects (e.g. github repository, portfolio, etc.)

Data Engineer

Diagnostic Robotics

We at Diagnostic Robotics imagine a world where the most advanced technologies in the field of artificial intelligence can make healthcare better, cheaper, and more widely available. Diagnostic Robotics’ systems are trained on tens of billions of claims data points and nearly 100 million patient visits. We support global and regional health plans, employer groups, and providers by creating seamless, data-driven interactions that integrate into major touchpoints along the patient journey. Our goal is to provide high-value decision support while slashing administrative burdens, massively reducing the cost of care, improving patient experiences, and saving lives.

We are looking for a strong Data Engineer to develop, maintain, and test state-of-the-art ML pipelines for both production and experimentation use. This person will work with top talent in the industry, including data scientists and physicians (domain experts), to solve real-world problems with AI-based tools using billions of medical data entries. As part of the Data team, our products impact millions of patients on a monthly basis.

What you’ll do:

  • Build data pipelines that digest complex medical data (a minimal sketch follows this list)
  • Bring machine learning based solutions to our AI products from research to production.
  • Adapt standard machine learning methods to best exploit modern environments
  • Work closely with data scientists and physicians on medical datasets.
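
A minimal, hypothetical sketch of the first bullet, written as an Airflow DAG. Airflow appears only as an advantage below, and every task and table name here is invented.

```python
# Hypothetical extract-transform-load DAG for a claims pipeline.
# Airflow is only listed as an advantage in this posting; names are invented.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_claims():
    print("pull the raw claims batch")

def transform_claims():
    print("normalize codes, de-duplicate, validate schema")

def load_claims():
    print("write the curated table used for modeling")

with DAG(
    dag_id="claims_pipeline_sketch",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_claims)
    transform = PythonOperator(task_id="transform", python_callable=transform_claims)
    load = PythonOperator(task_id="load", python_callable=load_claims)
    extract >> transform >> load
```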

To be successful in this role, you'll need:

  • 2+ years of experience as a Machine Learning Engineer, Data Engineer or a similar role
  • 4+ years of hands-on experience in coding using Python/Scala/Java/Node/Golang/Ruby – Python preferred
  • Independent, self-motivated and auto-didactic with creative and critical thinking capabilities
  • Team player with excellent communication and collaboration skills.
  • Familiarity with data pipeline tools like Airflow, Argo, Luigi, and others – Advantage
  • Familiarity with big-data tools such as Snowflake, Dask, Spark, Kafka, Hive (or similar) – Advantage
  • Experience with at least one major cloud provider (Azure/GCP/AWS) – Advantage

Data Engineer

Gong.io

Gong is one of Israel’s most highly valued private software companies. Our solution uses machine learning and AI to automate big parts of customer-facing roles. Over 2,000 innovative companies like Zillow, Slack, PayPal, Twilio, Shopify, Hubspot, SproutSocial, Zoominfo, Outreach, MuleSoft, and LinkedIn trust Gong to power their customer reality.

At Gong, we’re building new-generation, machine-learning-based software that automates big parts of customer-facing roles by “understanding” their conversations and related work. Our solution guides sales professionals, coaches them on how to become better, performs tasks for them, and directs them to the best actions.

Gong is seeking a highly motivated, experienced Data Engineer to join our talented Research group in building our unique product. If you are someone with a passion for creating innovative products who thrives on solving hard problems, come work with us!

You will:

  • Create and maintain data pipeline architecture, and build the infrastructure required for it (a minimal consumer sketch follows this list)
  • Assemble large, complex data sets that meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build analytics tools that utilize the data pipeline and Machine Learning models to provide actionable insights into key business performance metrics
  • Own and lead significant projects around data tools for data scientists and analysts that will level up their work
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
  • Explore & learn new fields, APIs and cutting-edge technologies including AI and NLP
  • Be a part of a successful tech startup, building innovative state-of-the-art AI-based systems, leveraging machine learning, natural language understanding (NLU), and more
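
For the first bullet, a minimal pipeline-stage consumer using kafka-python (Kafka is listed among the technologies below); the topic, group id, and event fields are invented.

```python
# Minimal pipeline-stage consumer. Topic, group id, and event fields are
# invented for illustration; Kafka itself is named in the posting.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "call-events",
    bootstrap_servers="localhost:9092",
    group_id="insights-builder",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    event = message.value
    # Downstream: enrich the event and feed the analytics store.
    print(event.get("call_id"), event.get("duration_sec"))
```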

You have:

  • 1+ years of hands-on experience building data pipelines or tools for BI/ML systems
  • A degree in Computer Science, Information Systems or similar
  • In-depth familiarity with Python and SQL, and preferably with statically typed languages like Java or Scala
  • Experience working at scale in cloud environments (preferably AWS) with technologies such as Spark, Hadoop, Kafka, Airflow, EMR, Kubernetes, and Docker – preferred
  • A good understanding of data with interest in machine learning
  • Good communication and people skills
  • Self-motivation and a desire to grow and learn
  • Someone who enjoys having responsibility and experiencing ownership of the created product

You'll get:

  • An opportunity to work with other top talents to solve complex AI challenges
  • To lead and own complex solutions
  • To experience building state-of-the-art machine-learning based systems that are used in production by top business professionals
  • First-hand entrepreneurial experience
  • To be part of a high-profile, well-funded company with access to top customers in the US

Senior Data Engineer

Arpeely

Come [legally] hack the data with us on the largest exchange that’s running our world. Not NASDAQ; the one with way more events – the Global Ads Exchanges, where millions of ads are born and clicked every second.

Step behind the curtain of algorithms and competitors that move $1T of annual budgets. Plunge into a world of ISP-scale traffic volumes, sub-second predictions, and TBs of live, real-world data. Use cutting-edge analysis, ML, and engineering – or just plain hacker-like thinking – to outperform the market.

Arpeely is a Data-Science startup leveraging data analysis, ML, engineering, and multi-disciplinary thinking to gain a market edge and exploit hidden opportunities in real-time advertising. Processing over 350k requests per second and serving over 20B sub-second predictions daily, we build and operate Machine Learning algorithms running on the world’s largest Real-Time Bidding (RTB) Ad Exchanges. Arpeely is a Google AdX vendor and serves clients spanning from startups to Fortune-50 companies.

As our Senior Data Engineer, you will become the owner and gatekeeper of all data and data flows within the company. Bring data ingenuity and technological excellence while gaining deep business understanding.

This is an amazing opportunity to join a small and multi-disciplinary A-team while working in a fast-paced, modern cloud, data-oriented environment.

If you are experienced and still hungry to learn and impact – we’d love to have you on our team!

What You’ll Do:

  • Own and develop Arpeely’s DWH (petabytes of data!)
  • Own the entire data development process, including business knowledge, methodology, quality assurance, and monitoring
  • Work on high-scale, real-time, real-world business-critical data stores and data endpoints
  • Design and implement efficient data solutions and architecture to support company growth
  • Design data profiling to identify anomalies and maintain data integrity
  • Identify opportunities to improve processes and strategies with technology solutions
  • Work in a results-driven, high paced, rewarding environment

Requirements:

  • Passionate about data
  • 4+ years experience as a Data Engineer or a similar role
  • Strong experience in SQL and Python
  • Good working knowledge of Google Cloud Platform (GCP)
  • Experience with high-volume ETL/ELT tools and methodologies – both batch and real-time processing
  • Hands-on experience in writing complex queries and optimizing them for performance (an illustrative query follows this list)
  • Able to understand complex data and data flows
  • Have a strong analytical mind with proven problem-solving abilities
  • Ability to manage multiple tasks and drive them to completion
  • Independent and proactive
  • Fun to work with & a team player 🙂
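
An illustrative example for the query-optimization bullet above, using the google-cloud-bigquery client on GCP (which the requirements name). The project, dataset, and column names are invented; the partition filter is the optimization being shown.

```python
# Illustrative parameterized BigQuery query. Project, dataset, and columns
# are invented; the partition filter is what keeps the scan cheap.
from google.cloud import bigquery

client = bigquery.Client()  # assumes GCP credentials are configured

query = """
    SELECT exchange,
           APPROX_QUANTILES(bid_latency_ms, 100)[OFFSET(95)] AS p95_latency_ms
    FROM `my-project.rtb.bid_events`
    WHERE DATE(event_ts) = @day  -- prune partitions instead of full scans
    GROUP BY exchange
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("day", "DATE", "2024-01-01")
        ]
    ),
)
for row in job.result():
    print(row.exchange, row.p95_latency_ms)
```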

AI Engineer

SparkBeyond

Our team’s mission statement:

We are the core & essence of SparkBeyond’s AI research engine. We plan, design, and perform research that brings our product to life. We continuously develop and improve our engine by creating and implementing cutting-edge breakthroughs and providing our clients with the best-performing, one-of-a-kind product.

What will you do?

  • Get a once-in-a-lifetime opportunity to build the next generation of AI-for-Data-Science technology using cutting-edge software, algorithms, methods, and practices.
  • Be a part of a unique, multidisciplinary pioneering team of professional aces.
  • Bring your novel ideas from the drawing board all the way to production at scale, creating the optimal experience for our users and enabling them to make revolutionary breakthroughs and discoveries.
  • Drive high-impact results via both goal-oriented research and your own creative approaches.
  • Handle vast data volumes, stored in memory, various databases, and file systems.
  • Work with various interfaces – product managers, developers, researchers, QA, data scientists, UX, and many others.

What will you need?

  •  5+ years of hands-on software development experience in Scala, Java, or C#.
  •  BSc or higher academic degree in Computer Science
  • Strong analytical and algorithmic skills.
  • Advantage: experience with Hadoop / Spark

About Behalf

Behalf is a rapidly-growing alternative lender/fintech that provides in-purchase financing (also known as Buy Now Pay Later) on behalf of B2B merchants. The company is backed by world-class investors Viola Growth, Vintage Investment Partners, Migdal Insurance, La Maison Investment Partners, MissionOG, and Maverick VC Israel.

Behalf’s in-purchase financing solution enables merchants to outsource their net-terms and financing programs and receive payment by the next business day. Merchants see increases in average ticket size of up to 100% without needing to tie up their capital or devote resources to the business of credit and collections. Small and medium-sized borrowers choose to pay in full with net terms or take advantage of up to 180 days of financing. Customers choose the precise financing duration at the individual transaction level, enabling them to match specific purchases with anticipated sales revenue and cash flow. Behalf is purpose-built for e-commerce and is also available for use by merchant sales and customer success teams.

Behalf is headquartered in New York City with offices in Ra’anana, Israel. Behalf employs a hybrid office/remote model allowing our employees to combine work-from-home with limited office presence in both locations.

Data Engineering at Behalf

The Business Intelligence and Data Engineering team is integral to driving scalable data products across Behalf and is responsible for managing and developing the data ecosystem, infrastructure, and strategy for Behalf. The team sits at the intersection of analytics and technology, leveraging ever-evolving in-house and third-party technology to build best-in-class data infrastructure and provide business intelligence.

The team collaborates closely with Engineering and Product on strengthening the infrastructure, and with Risk, Marketing, and Sales on growth-driving initiatives, such as developing data-driven insights and strategies to improve new-account onboarding and underwriting, and leveraging data models and rules to optimize customer-management credit decisions.

Purpose of the Role

The purpose of this role is to be part of the Business Intelligence and Data Engineering team. As a BI Data Engineer, you will collaborate with colleagues to maintain and develop the internal data infrastructure, providing all the services required to gather, process, consolidate, and deliver data within the company. The goal is to build robust, reliable, and scalable data infrastructure and to make data easily accessible.

The role requires strong competency in data engineering and the ability to collaborate closely with the Engineering and Product teams. The role reports to the Director of Business Intelligence and Data Engineering.

Essential Responsibilities:

  • Improve and develop internal ETL tools; optimize and automate the internal processes required to operate the data infrastructure successfully (a toy example follows this list)
  • Design, develop, and maintain data pipelines from a wide variety of data sources (internal and external)
  • Support and develop a central data warehouse, managing DWH data quality, security, and availability
  • Design and deploy data models and visualizations to support various analytical projects
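
A self-contained toy of the ETL-into-DWH flow in the first bullet, using sqlite3 purely so the example runs anywhere; Behalf’s actual warehouse (BigQuery/Snowflake/Redshift appear under preferred experience below) and schema are not stated here.

```python
# Toy ETL: land raw rows, then build an aggregated fact table with SQL.
# sqlite3 is used only so the example is self-contained and runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_payments (merchant TEXT, amount REAL, paid_at TEXT);
    CREATE TABLE fact_daily_volume (merchant TEXT, day TEXT, volume REAL);
""")
conn.executemany(
    "INSERT INTO raw_payments VALUES (?, ?, ?)",
    [("acme", 120.0, "2024-01-01"), ("acme", 80.0, "2024-01-01")],
)
conn.execute("""
    INSERT INTO fact_daily_volume
    SELECT merchant, DATE(paid_at) AS day, SUM(amount)
    FROM raw_payments
    GROUP BY merchant, day
""")
print(conn.execute("SELECT * FROM fact_daily_volume").fetchall())
# [('acme', '2024-01-01', 200.0)]
```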

Minimum Requirements:

  • BSc/MSc degree in Computer Science/Engineering/Mathematics
  • Minimum 3 years of relevant hands-on experience as a data engineer (DWH, ETL)
  • Highly proficient and experienced in SQL and relational databases
  • Highly proficient and experienced in scripting languages, with rapid prototyping skills (e.g., Python)
  • Experience in data modeling and DWH implementation
  • Proven track record of developing and building data infrastructure

Preferred Experience:

  • Experience with Google BigQuery/Snowflake/Redshift
  • Experience with orchestration tools such as Airflow
  • Experience with BI reporting tools (Tableau, QlikView, Looker, etc.)
  • Experience with development practices – Agile, CI/CD, TDD
  • Experience in FinTech

Leadership Requirements:

  • A natural leader, responsible and accountable with excellent communication skills
  • Ability to articulate complex theories, concepts, methodology, and findings
  • Willingness to work in a team environment and to cooperate with multiple partners
  • Ability to analyze, present and reach conclusions for business processes
  • Ability to work a flexible schedule
  • Excellent written and oral communication skills in English

You probably want to know

  • We provide a stock option plan as you deserve to own what you create.
  • We run a weekly backend guild meeting in which we share the newest technologies and best practices with one another.
  • Every week you will have dedicated time to learn and enrich your toolbox skillset.
  • Working in a hybrid mode (home/office).
  • We are family-friendly and provide flexible working hours. We encourage you to spend time with the people you care about, so we don’t work on Erev Hag (holiday eves), and during Chol Hamoed we work only half days.
  • Pet-friendly office.

Big Data Engineer

General Motors Israel

General Motors Israel (Herzliya) plays a significant part in shaping the autonomous vehicle. We impact future vehicles in diverse fields by developing cutting-edge technologies. Within GM Israel, over 600 people work in a hybrid, flexible mode on one of the most exciting challenges of our time.

UltraCruise is a driver-assistance technology that enables hands-free driving in 95% of all driving scenarios – a door-to-door hands-free driving technology that will enable GM’s goal of zero crashes, zero emissions, and zero congestion.

UltraCruise is part of the SDV (‘Software Defined Vehicle’) group, which is developing a new vehicle intelligence platform whose features and functions are primarily enabled through software. This is a significant part of the ongoing transformation of vehicles from being mainly hardware-based to software-centric electronic devices on wheels.

The UltraCruise IL group focuses on research and development of advanced technologies that enable breakthrough applications for the future of mobility.

About the Group and Position:
Our Automated Driving AI application develops the “brain” of the autonomous vehicle by applying cutting-edge technologies to our vehicles’ Perception and Decision-Making capabilities. The group combines domains such as Artificial Intelligence, Machine Learning, Computer Vision, Sensor Fusion, Signal Processing, and embedded platform development.
The Data Group is developing and operating an efficient Data Engine for automated driving development.

We're hiring a Big Data Engineer who will be the driving force of our big data operation.

What will You do?

  • Develop and maintain a scalable data platform to support continuing increases in data volume and complexity – petabyte scale on a cluster of thousands of cores.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data (a PySpark sketch follows this list).
  • Support the needs of advanced ML development (Neural net training, reinforcement learning)
  • Work with data and analytics experts to strive for greater functionality in our data systems
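
A hedged PySpark sketch of the ETL bullet above (Spark appears under “Advantages” below); the paths, column names, and storage layout are all invented.

```python
# Extract-transform-load sketch for drive-log data. Paths and column names
# are invented; Spark is listed only under the advantages in this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("drive-log-etl-sketch").getOrCreate()

raw = spark.read.parquet("s3://bucket/raw/drive_logs/")           # extract
clean = (
    raw.dropDuplicates(["frame_id"])                              # transform
       .withColumn("ingest_date", F.to_date("recorded_at"))
       .filter(F.col("sensor_status") == "OK")
)
(clean.write.mode("append")
      .partitionBy("ingest_date")
      .parquet("s3://bucket/curated/drive_logs/"))                # load
```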

What are We Looking for?

  • B.Sc. in Computer Science, Mathematics, or equivalent.
  • Over 4 years of experience in software development, with experience in at least one high-level programming language (Go, Java, Scala; Python preferred).
  • Strong background in distributed data processing and microservices, building high-quality, scalable data products.
  • Strong background in data modeling, metadata management, ETL development, and data quality.

Advantages:

  • Experience working with and deploying distributed data technologies (e.g., Kafka, Airflow, Spark, Presto, Dremio, etc.)
  • Experience deploying distributed data technologies with Docker and Helm on on-prem Kubernetes (k8s) clusters – BIG advantage
  • Experience designing and implementing BI solutions
  • Experience with cloud computing platforms such as Azure.
  • Familiarity with Argo Workflows