Data Engineer

Machine Learning Engineer

Argus Cyber Security

Key Responsibilities:

  • Design and build high-availability, cloud-scale ML pipelines (ETLs)
  • Develop smart ML infrastructure to enable agile ML processes (MLOps)
  • Develop and deploy real time/batch data processing infrastructure, using the latest technologies
  • Take part in all architecture and development stages – from design to deployment
  • Collaborate with Cyber experts and data scientists to productionize, improve and monitor ML models in various pipelines and stages

Required Skills and Experience:

  • 3+ years of experience in large-scale, distributed server-side development
  • 2+ years of experience writing ML training and serving pipelines (ETLs)
  • 2+ years of experience using Python
  • Passion for designing scalable, distributed, and robust platforms and analytic tools
  • Good understanding of ML processes and concepts – training, serving, evaluation, model lifecycle, etc.
  • Experience with big data tech (Spark, Kafka, Kubeflow, Seldon, or similar)
  • Experience with Docker, Linux, CI/CD, Kubernetes, etc.
  • Experience working with cloud providers (AWS, GCP, etc.)


Advantages:

  • BSc/MSc degree in Computer Science, Engineering, Mathematics, or Statistics
  • Experience creating reports and data visualizations
  • Experience with Java or another JVM language
  • Experience in the Cybersecurity domain
  • Experience in the Automotive domain
  • Experience with unsupervised learning
  • Team player
  • Presentation skills

Data Scientist

Causalis is a VC-backed startup partnering with the world’s leading pharma companies, building machine learning solutions to bring causal intelligence to healthcare. We are developing a platform of solutions based on novel and proprietary causal AI techniques, to help doctors, patients, and researchers understand and act on the cause-effect relationships in medical data. We are a multi-disciplinary, international team of experts in AI and ML, causal inference and data science, healthcare and medicine. We are looking for smart, driven and overall extraordinary individuals who are passionate about changing the world of personalized medicine, and eager to shape the future of Causalis.
We're looking for an experienced data scientist to join as one of the first members of the data science team, with the skills to take ownership from idea through implementation and deployment. You will be engaged in the early data exploration phase, health data engineering, development of causal models and other ML methods, evaluation techniques, and eventually product integration and follow-ups — the essential challenges to make Causalis deliver ML solutions.
How you’ll contribute:
  • Apply state-of-the-art methods in health research to improve and validate our ML and data science capabilities
  • Define, design, and execute data engineering and ML tasks with a high bar for quality and robustness
  • Collaborate with infrastructure engineers to design and implement data pipelines end-to-end with complex data types to fuel ML experiments
  • Work with internal and external domain experts to define and execute the needed solutions in healthcare
Our ideal candidate will bring:
  • 4+ years of experience working as a data scientist
  • Strong experience with ML and data science in research and production environments (ideally with causal inference methods)
  • Strong Python skills
  • Excellent communication skills and ability to work with technical and non-technical partners from many teams
Nice to have:
  • Experience working in the healthcare industry
  • Deep knowledge of data-driven research in healthcare
  • MS/PhD in a relevant engineering or science field
  • Experience with ML explainability methods
  • Strong experience with NLP
  • Experience with imaging data
  • Experience with genetic data

Data Scientist


With the amount of data growing at an exponential rate, AI is one of the hottest fields of this century. Do you want to be part of this exciting field and start your career as a Data Scientist or Data Engineer? Find out what MIcompany has to offer and become a leader in AI!

Everybody is talking about Big Data & AI. The overwhelming quantity and complexity of data offers a huge challenge for Data Scientists and Data Engineers. But how can you really create tangible business impact with data?

To do that, a new kind of leader is needed. A leader who is on top of the newest technologies and knows how to manage complex datasets. Who can create insights from huge amounts of data using advanced analytical techniques. A leader who can define growth opportunities and has a vision on Data and AI. A leader who builds relationships and combines these skills to create impact at scale.

Who are we?

MIcompany is an Artificial Intelligence (AI) company with offices in Tel Aviv and Amsterdam. From these offices, we drive AI transformations by building AI solutions and skills. Our team of more than 70 data scientists, AI engineers & software engineers serves industry-leading companies such as Nike, eBay, Heineken, KPN, LeasePlan, Aegon and Shufersal, in more than 25 countries.

We shape and drive AI transformations by building AI solutions. We develop the most promising AI use cases while also building the required technology capabilities. We build AI production platforms to operationalize and manage algorithms at scale. And we ensure business adoption of algorithmic decision making by implementing applications and process change.

We are AI innovators. We implement the newest modeling techniques, create state-of-the-art technology solutions and unlock new data sources using advanced data capturing techniques. To ensure these innovations contribute to sustainable value creation, we combine them with our expert business knowledge and cross-industry experience.

Why work for MIcompany?


Learn to create lasting breakthroughs at international scale

Create LASTING IMPACT with AI at the strategic core of industry leaders

At MIcompany you work at the strategic core of our clients. You contribute to the development of algorithmic applications that transform high impact business processes.

Work all over the world in a multidisciplinary and AMBITIOUS TEAM of analytical talents

At MIcompany you will work in an environment where you can develop yourself optimally, thanks to a multidisciplinary team of ambitious analytical talent, a focus on personal development, and many international opportunities.

Help build our new Tel-Aviv office and become an integral part of the team

You will join a young, positive, and ambitious team on our journey to bring AI transformation to leading Israeli companies. Take part in quarterly team-building events, monthly delicious team dinners, and weekly happy hours!

Learn how to change organizations using AI models in our CERTIFIED EDUCATION PROGRAM

You will be invited to join a 3-year AI & Data Talentship Program, in which you learn about the newest AI techniques, how to implement them in practice, and how to change organizations using AI. Our education is certified by GAIn (Global Artificial Intelligence network).

Become part of a company investing in PURPOSEFUL BREAKTHROUGHS

At MIcompany we invest in diversity & doing good. We believe our experience and skills can contribute to fields like DNA research, the cultural sector, and charities.

What is the position about?

As a Data Scientist, you will be working in a team with other Data Scientists and AI engineers. From the start you will contribute to challenging projects for leading companies from various industries. You will build meaningful, innovative and impactful models using advanced analytical techniques. Your work is very diverse, on a strategic, analytical and technical level. Step-by-step you will develop all skills to become an AI leader.

Machine learning engineer

Clew Medical's innovative analytics solution alerts ICU teams to a patient's possible deterioration so they can provide the right treatment before the condition becomes critical.

We are looking for an expert Machine Learning Engineer to join our team as we productize our offering and expand our predictive analytics capabilities into new clinical areas. The ideal candidate should be eager to learn and grow, and comfortable jumping headfirst into new concepts and technologies.

What will you do?

As a Senior ML Engineer, you will be part of the Data Science and Analytics team, which develops cutting-edge analytical and AI techniques in data-intensive environments, addressing some of the most pressing challenges in healthcare:

·  Design and develop novel algorithms and machine learning models.

·  Create and automate data pipelines, including data extraction, validation & anomaly detection, indexing, and other data-related tasks.

·  Quickly iterate on design approaches and POCs based on data-driven research and client feedback.

·  Push the solutions all the way to production. Understand the architectural constraints and work with the engineering and product teams to quickly transition from prototype to a scalable implementation.

Must have skills:

·  3+ years of hands-on experience in Python/Scala/Java/etc.

·  1+ years of relevant experience and track record in Data Science: statistical data analysis, algorithms, and ML development.

·  B.Sc. or M.Sc. in CS, Mathematics, Bioinformatics, Statistics, Engineering, Physics, or a similar discipline.

·  High technical competence, including a track record of strong coding and individual technical contribution.

·  Proven experience creating and maintaining ML solutions in production systems.

·  Collaborative team player, able to work with doctors and clinical subject matter experts.

Extra points for:

·  Experience with large datasets and distributed computing (Spark, Hive/Hadoop, etc.)

·  Familiarity with databases and database structures (SQL/NoSQL)

·  Great presentation skills and the ability to explain complex solutions to non-technical audiences

Things we appreciate

·  Strong analytical thinking ability

·  Self-motivated and a strong team contributor

·  Ability to work in an agile and dynamic environment

Why join us now?

Our product is operational, and we have strong financial backing from Israel’s leading venture capital funds. We are at a critical stage in product development, and all team members are encouraged to take initiative and become full partners in this life-changing mission.

We have funding, ability, a strong team, and an important mission.

Join us for a life-changing mission!

Data Operations Engineer

Asensus Surgical

About the job

Asensus Surgical is currently seeking a Data Operations Engineer to join our top-notch Research & Development team in Israel.

This is an exciting time to join Asensus and to be part of a leading-edge team that is pioneering a new era of Performance-Guided Surgery.

You will own an environment of hybrid data and will ensure that the team has timely, accurate and complete data sets to drive their activities.

We are looking for an individual who is passionate about data, a team player and a strong communicator. Someone with the ability to communicate results and insights in a clear and concise manner to a non-technical audience.

Who We Are 

As a medical device company, Asensus is digitizing surgery to pioneer a new era of Performance-Guided Surgery. Utilizing robotic technology to improve minimally invasive surgery in ways that matter to patients, physicians, surgical staff, and hospitals and enabling consistently superior outcomes and a new standard of surgery. Our employees are especially passionate about the work they do and thrive in a collaborative environment that fosters creative solutions to complex problems. The work is challenging, but everyone comes to Asensus Surgical looking for a fulfilling career, and that's exactly what they find.

What You Bring

  • Master’s degree in Applied Mathematics, Computer Science (e.g. specialization in Machine Learning / Artificial Intelligence / Visualization, databases, and Big Data), Statistics, or a closely related field with a Data Science specialization
  • At least 3 years of experience building data pipelines in Machine Learning frameworks
  • Experience in Big Data and with data management platforms such as Hadoop/Spark
  • Experience with Python (working with images, videos, and files)
  • Experience with deep-learning libraries (PyTorch/TensorFlow)
  • Demonstrated proficiency with Git version control
  • Good verbal and written capabilities in Hebrew and English
  • Specific experience developing Computer Vision algorithms with machine learning / deep learning will be considered an advantage
  • Experience working in a fast-paced agile environment will be considered a plus
  • Experience with Linux

What You’ll Do

  • Implement, maintain, monitor, and improve data pipelines for deep-learning algorithm development and for data annotation platforms
  • Prepare data for Deep Learning algorithm development: design data requirements for experiments, build appropriate APIs for data usage, and manage the different experiments
  • Build and optimize automation software for deep-learning model development and evaluation
  • Manage databases of images, videos, and metadata
  • Acquire and bring structure to data so that it can be used in existing and new data systems
  • Build tools that help translate insights into action at scale
  • Identify, define, and translate business needs/problems into analytical questions
  • Design and execute experiments, models, algorithms, and visualizations
  • Work in cloud/hybrid environments

What We Offer

  • A culture driven to achieve our mission and deliver remarkable results
  • Coworkers committed to collaboration and winning the right way
  • Quality products that improve the lives of our customers and patients
  • The ability to discover your strengths, follow your passion, and find your own rewarding career
  • A flexible, engaging work environment

At Asensus Surgical, we believe in contributing to a society that welcomes diverse voices and values differences in lived experiences, culture, religion, age, gender identity, sexual orientation, race, ethnicity, and neurodiversity. We are committed to ensuring this same environment for our employees – a culture where individuals feel safe, heard, and respected. We celebrate the uniqueness of our global workforce and know that only through inclusion, ongoing learning, and partnership can we succeed. Together we are all stronger.

Data Engineer


About Us

Intelligo is a fast-growing startup with proven product-market fit and a steady revenue stream. We recently completed our Series B, and we’re headquartered in Petach Tikva, Israel.

Our mission is to empower trust and manage risk by providing institutional investors, investment banks, capital allocators, law firms, and corporations with advanced capabilities to run comprehensive background checks powered by cutting-edge artificial intelligence and machine learning. Our Clarity™ product is a one-stop platform delivering quick checks, deep reports, and continuous monitoring with ease, speed, and accuracy.

The Role

We’re looking for a talented, independent Data Engineer specializing in Python to join our growing data science team, which is responsible for the design and implementation of Intelligo’s revolutionary automated SaaS background-check solution.

The ideal person for this role is someone who is a self-starter, self-directed, and is comfortable supporting multiple production implementations for various use cases.
Things move fast at Intelligo and we’re looking for someone who can adapt quickly in our fast-paced startup environment.

What does the day to day look like?

  • The most accurate definition of this role is a blend of Data Engineering & Python development. It involves:
    • End to end infrastructure development of Python microservices
    • Optimizing process run-time and solving problems in a scalable manner
    • Taking a major part in designing and implementing complex high scale systems, data pipelines and algorithms using a variety of technologies
    • Working closely with the Data Science team to define and implement an automated training pipeline for machine learning models
  • Responsibilities
    • Taking charge of the development and maintenance of the data processing engine
    • Providing data engineering capabilities and frameworks to other company teams

Experience we’re looking for:

  • BSc in Computer Science, a related technical field, or equivalent experience
  • Deep understanding of Python
  • 4+ years of experience as a software developer
  • 3+ years of experience with the Python programming language
  • Experience with building scalable, high-performance, data-oriented production systems
  • Experience with:
    • Micro services architecture and communication architecture (MQ)
    • Cloud environments (AWS)
    • Mongo – Advantage
  • Familiarity with Big Data technologies such as Hadoop, Spark, HDFS, Airflow, HBase, etc. – Advantage


Our tech stack:

  • Python, Node.js
  • Kubernetes, Docker, Jenkins, RabbitMQ, Redis
  • AWS (EC2, S3, Lambda, Streaming, EMR)

About Incredibles 

Incredibles, a Team8 Fintech portfolio company, builds upon the rise of eCommerce, focusing on allowing its partners to unlock financing opportunities for their e-commerce sellers. By leveraging deep industry knowledge, traditional and alternative data, and cutting-edge technologies, we are set to offer superior creditworthiness assessments for e-commerce businesses. Our product ranges from financial risk assessment and monitoring to underwriting and fully white-label lending services.

About Team8 

Team8 is a company-building venture group that builds and invests in companies specializing in Fintech, enterprise technologies, data, AI, and cybersecurity. The Team8 model supports entrepreneurs with an in-house team of researchers, growth experts, and talent acquisition specialists, and a “village” community of enterprise c-level executives and thought leaders. Whether building a new company from scratch or investing in companies already on their journey, Team8 brings a rich ecosystem to work hand in hand with entrepreneurs in accelerating their path to success.

The Team8 Fintech Team 

The team brings unparalleled expertise in banking, credit, e-commerce, payments, capital markets, and wealth management, drawn from financial incumbents and from startups that have grown into billion-dollar-valuation companies.


We’re looking for a curious, creative, ambitious Data Engineer. If you’re all that and looking to lead, invent, and grow professionally you should definitely consider applying to join us on our journey!


We are looking for an experienced Data Engineer to take part in architecture and development. Your role will include:

Building pipelines to crawl, clean, and structure large datasets that form the basis of our platform and IP.

Defining architecture and evaluating tools and open-source projects to use within our environment.

Taking a leading part in the development of our technology platform and ecosystem.


Requirements:

At least 3 years of experience with data engineering in Python (or an equivalent language).

Experience with cloud platforms (AWS, GCP, Azure), working on production workloads of large scale and complexity.

Experience in working with Kubernetes and AWS Lambda functions.

Experience in working on enterprise software, or data/fintech products.

Big Advantage: Experience with AWS services such as Athena, Kinesis, EKS, MSK, and others.

Big Advantage: Hands-on experience with data science tools, packages, and frameworks

Big Advantage: Hands-on experience with ETL flows

Which department will you join?

Mobileye’s Road Experience Management (REM) is an end-to-end mapping and localization engine for full autonomy.

We build an autonomous 3D map, accurate to a few centimeters, called the "Roadbook", which contains all the information necessary for driving.

Our "Roadbook" is built using crowdsourcing, aggregating information sent to the cloud from hundreds of thousands of cars driving with the Mobileye chip.

To use the map for full autonomy, we develop semantic understanding of the road, lanes, objects, and relations between them.

For more information about the mapping, you can watch Amnon's lecture at CES.

What will your job look like?

Some challenges are easy for humans, but are still difficult for computers:

  • Calculating the drivable path
  • Matching traffic lights to lanes
  • Defining the priorities in junctions

You will use classic and geometric algorithms together with Deep Learning and AI to solve these challenges.

We use Python, NumPy, Pandas, and Spark to develop algorithms that run in a distributed fashion on AWS over very large amounts of data.

All you need is:

  • Sharp and creative mind
  • Dedication
  • Ability to learn quickly
  • B.Sc. in natural sciences
  • M.Sc. in natural sciences – an advantage
  • Background in machine learning / computer vision / geometry – an advantage
  • Knowledge of Python / NumPy – an advantage

Data Engineer

ISI - Imagesat International

We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

Responsibilities:

      • Create and maintain optimal and scalable databases & data pipeline architectures.
      • Perform optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
      • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
      • Work with data and analytics experts to strive for greater functionality in our data systems.

Job Requirements:

B.S. in Computer Science, similar technical field or equivalent experience

Skills Highly Preferred:

      • Experience with relational SQL and NoSQL databases, including MySQL, Postgres and Elasticsearch.
      • Experience with data pipelines and workflow management tools: Nifi, Luigi, Airflow, etc.
      • Experience with AWS cloud services: VPC, EC2, S3, RDS, ECR, ECS
      • Strong skills in Python, Java, Scala, etc.


ThetaRay is the leading provider of AI-based Big Data analytics.

We are dedicated to helping financial organizations combat financial cybercrimes such as money laundering and fraud, which facilitate malicious crimes such as terrorist financing, narco-trafficking, and human trafficking that negatively impact the global economy.

Our Intuitive AI solutions for Anti Money Laundering and Fraud Detection enable clients to manage risk, detect money laundering schemes, uncover fraud, expose bad loans, uncover operational issues, and reveal valuable new growth opportunities.

We are looking for a Big Data Engineer to join our growing team of data experts.

The hire will be responsible for designing, implementing, and optimizing ETL processes and data pipeline flows within the ThetaRay system.

The ideal candidate has experience building data pipelines and data transformations, and enjoys optimizing data systems and building them from the ground up.

The Big Data Engineer will support our data scientists with the implementation of the relevant data flows based on the data scientist’s features design.

They must be self-directed and comfortable supporting multiple production implementations for various use cases, part of which will be conducted on-premise at customer locations.

 Key Responsibilities

●  Implement and maintain data pipeline flows in production within the ThetaRay system based on the data scientists’ designs.

●  Design and implement solution-based data flows for specific use cases, enabling applicability of implementations within the ThetaRay product.

●  Build Machine Learning data pipelines.

●  Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.

●  Work with product, R&D, data, and analytics experts to strive for greater functionality in our systems.

●  Train customer data scientists and engineers to maintain and amend data pipelines within the product.


Requirements

●  1+ years of hands-on experience working with an Apache Spark cluster

●  1+ years of hands-on experience and knowledge of Spark scripting languages: PySpark/Scala/Java/R

●  2+ years of hands-on experience with SQL

●  1+ years of experience with data transformation, validation, cleansing, and ML feature engineering in a Big Data Engineer role

●  BSc degree or higher in Computer Science, Statistics, Informatics, Information Systems, Engineering, or another quantitative field

●  Experience working with and optimizing ‘big data’ pipelines, architectures, and data sets

●  Strong analytic skills related to working with structured and semi-structured datasets

●  Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management

●  Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement

●  Business-oriented and able to work with external customers and cross-functional teams

●  Willingness to travel abroad to customers as needed (up to 25%)

Nice to have

●  Experience with Linux

●  Experience building Machine Learning pipelines

●  Experience with Elasticsearch

●  Experience with Zeppelin/Jupyter

●  Experience with workflow automation platforms such as Jenkins or Apache Airflow

●  Experience with Microservices architecture components, including Docker and Kubernetes
