Kryon is rapidly growing and we are looking for a highly experienced algorithm and computer vision HO team leader who lives and breathes technology, loves data, and has strong business skills together with a deep understanding of software infrastructure design.

Lead a team of professional algorithm developers who research and develop algorithms in the fields of computer vision, neural networks, OCR, machine learning, and graph algorithms.

Work in an Agile (Scrum) environment and help us continuously deliver our product to customers.


·        M.A. or above in a relevant field.

·        At least 2 years of management experience.

·        8+ years of algorithm development experience.

·        Experience in computer vision/image processing development.

·        Deep knowledge of machine learning and neural networks.

·        Experience in research and development using MATLAB.

·        Good communication skills.


·        Experience in Agile development.

·        Experience developing in .NET/Java.

·        Experience in software design patterns and practices.

Job Description:
● You will be responsible for developing new algorithms to solve exciting computer vision challenges, reducing our customers' manual work, from survey analysis to flying drones

● To solve these challenges, you'll need to combine approaches from multiple disciplines: deep learning, object detection and classification, photogrammetry, 3D mapping, geometry, image processing, optimization algorithms, and more.

● Your work will range from early research and feasibility studies, through algorithm definition, software implementation, and integration, to getting feedback from the field and improving performance in the real product

● You will team up with an agile team of experts working with cutting-edge software development methodologies to deliver quality software to our customers

● M.Sc / PhD in Electrical Engineering or Computer Science

● At least 5 years of professional experience developing computer vision algorithms

● At least 3 years of hands-on experience programming in C++/Java/Python

● At least 3 years of experience with deep learning for computer vision problems

● Hands-on experience with the following: Tensorflow/PyTorch/Keras and OpenCV

● Experience in delivery and maintenance of algorithms to real products

● Can work independently under loose guidance

● Business-fluent English

Nice to have:

● Experience with GIS, photogrammetry and 3D mapping algorithms

● Extra bonus: love drones

IBEX is developing state-of-the-art artificial intelligence applications for cancer diagnostics. Our cutting-edge deep learning and computer vision technology is transforming the diagnosis of cancer, leading to rapid and accurate decision making and providing new diagnostic insight to improve patient outcomes and quality of life.

We've recently closed our Series A, which brings our funding to $13.6M, and we plan to reinforce our R&D with exceptional engineers to take our platform and algorithms to the next level.

This is a unique opportunity to use your technological abilities in order to save lives, literally.


Job description:

We’re looking for experienced, top-notch Data Scientists and Machine Learning experts to help us build and improve our core algorithms and to develop novel cancer diagnostic tests. In this role, you’ll work on huge, unique datasets of high-resolution pathology images and clinical records from leading institutes worldwide. You’ll develop and optimize AI-based solutions for complex problems using cutting-edge technology, integrate your algorithms into our products, and receive feedback from clinicians using them in live clinical settings, working closely with our multi-disciplinary team of Machine Learning experts, experienced software developers, and medical professionals.



  • PhD or MSc in Computer Science or relevant discipline
  • Deep knowledge of algorithms, statistics and Machine Learning
  • At least 5 years of industry experience, working on large-scale projects
  • Ability to design and develop production-level code
  • Experience with at least one of Python, Java, C++
  • Experience with Deep Learning and/or Computer Vision – an advantage

As a Senior Data Scientist at Windward, you’ll be responsible for creating and implementing various models on unique big datasets, developing each algorithm all the way from an idea to a working piece of code in production. The algorithms may be related to predictive modelling, classification, and ship behavioral patterns.

Throughout this fun journey you'll also learn a thing or two and improve your technical skills (working with NoSQL DBs, Apache Spark, building services for your ML products, etc.) and ML knowledge (working with unbalanced data, risk modeling, explaining black-box models, and much more).

Your work will generate significant impact in the maritime insurance and intelligence domains. In short, we are looking for a brilliant problem solver with a passion for data to join a fast-paced company that is changing the maritime world.

What will you do?

  • You’ll be working on a distributed research cluster to create features and insights from large volumes of noisy data from different sources; choose an algorithmic approach to solve the problem at hand and define success metrics to measure algorithm performance.
  • You'll be expected to research and understand the context of different maritime phenomena and propose a data-based solution which addresses a business need.
  • You'll be required to work independently but you'll be surrounded by a strong and experienced team of data scientists who will be willing to help (and learn from you).
  • You'll be communicating with product managers to understand needs and outline solutions.
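Defining success metrics for an algorithm, as the first responsibility above mentions, typically boils down to comparing flagged events against labeled ground truth. A minimal stdlib sketch (the event IDs below are invented for illustration, not Windward data):

```python
# Toy evaluation of a detector against hand-labeled ground truth.

def precision_recall(predicted, actual):
    """Compute precision and recall for two sets of flagged event IDs."""
    tp = len(predicted & actual)  # events correctly flagged
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical ship-event IDs flagged by an algorithm vs. analyst labels.
flagged = {"evt-01", "evt-02", "evt-05"}
labeled = {"evt-02", "evt-05", "evt-09", "evt-11"}

p, r = precision_recall(flagged, labeled)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.50
```

In practice the trade-off between the two metrics is what gets negotiated with product managers, which is why defining them up front matters.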

Who are you?

  • M.Sc. or Ph.D. in Computer Science / Mathematics / Physics
  • 4+ years of experience as a Data Scientist in the industry
  • Extensive experience with statistics/analytics tools such as Python/R
  • Extensive experience in development of machine learning solutions in the industry.
  • Ability to independently tackle a new problem starting from the research phase through design and implementation.
  • Experience in working with big data.

Advantages:


  • Ability to write production-level code.
  • Python/Jupyter/scikit-learn experience
  • Startup experience.
  • Developing Geospatial algorithms.
  • Experience in Deep Learning (TCN networks)

Unbotify was recently acquired by Adjust. This important milestone backs Unbotify’s vision of fighting bot fraud while ensuring we remain independent in developing our unique technology and maintain our innovative & vibrant culture.

Unbotify’s mission is to provide a best-in-class solution to one of the major problems of web and mobile application security today – detecting malicious bots. Our unique focus on behavioral biometrics derives from years of experience in the field and deep knowledge of existing detection methods and their inherent shortcomings.

We are looking for a Senior Data Scientist to join the Unbotify team based in Israel.


What we offer you:  

– A competitive salary

– Flexibility in work schedule

– Relocation assistance

– An international team with a strong focus on transparency

– Regular team gatherings and company retreats

– Knowledge sharing environment

More details about our company culture and perks can be found on our careers page.


Your role:

You will be responsible for implementing new features and maintaining our machine learning detection system. Since our existing system is written mainly in Python, your experience working with Python and its scientific packages will help you understand our bot detection system. Additionally, you’ll work on our big data platform, built mainly on Apache Spark.


Your tasks:

– Analyze large quantities of customers’ data (i.e., behavioral biometrics data)

– Implement analytical tools for processing large quantities of data

– Implement feature engineering and machine learning models for bot detection
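As a toy illustration of the feature-engineering task above (the signals and threshold here are invented for this sketch and are not Unbotify's actual method): scripted clients often emit events with unnaturally regular timing, so inter-event timing variance is a classic behavioral feature.

```python
from statistics import mean, pstdev

def timing_features(timestamps):
    """Derive simple behavioral features from a session's event timestamps (seconds)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {"mean_gap": mean(gaps), "gap_stdev": pstdev(gaps)}

def looks_scripted(timestamps, stdev_threshold=0.01):
    """Flag sessions whose inter-event timing is suspiciously regular."""
    return timing_features(timestamps)["gap_stdev"] < stdev_threshold

human = [0.00, 0.31, 0.95, 1.12, 2.40]  # irregular, human-like gaps
bot   = [0.00, 0.50, 1.00, 1.50, 2.00]  # perfectly periodic

print(looks_scripted(human))  # False
print(looks_scripted(bot))    # True
```

A real system would feed features like these into a trained model rather than a fixed threshold, but the feature-extraction step looks much the same.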


Your profile:

– Practical industry experience in the field of machine learning and algorithm development

– Hands-on experience with Python (or Java, Scala, or Go) and scientific packages

– Experience in Apache Spark – a plus

AppCard is looking for an experienced Senior/Lead Data Scientist. In this position you will be part of our backend team, commanding the ML pipeline from raw data all the way to getting the model into production and presenting insights directly to our customers.

We handle retail (brick-and-mortar) data, focusing on grocery. We have real-time feeds of item-level data from the point of sale and user-facing interactive devices. Basically: ever wandered around a supermarket and thought, "hey, if I had all this place's data I could …"? We have that data.


M.Sc. or above in a relevant field (CS, OR, statistics, …) – A Must

2+ years’ experience in a similar role – A Must

2+ years’ experience coding in Python (focusing on pandas, scikit-learn, Jupyter, TensorFlow, and at least one visualization library)

Experience working with AWS (Athena, EMR, SageMaker) – an advantage


  • Where are the offices located? Ramat Hasharon.
  • Dogs? Bring as many as you like.
  • PyTorch / Tensorflow? We started with Tensorflow before PyTorch was widely used, but nothing against PyTorch.
  • How much data are we talking about? ~15% of the US grocery market.
  • Founders / management? Yair Goldfinger & Amichay Oren.
  • Investors: Founders Fund,  Innovation Endeavors.  Jerry Yang, PLDT Capital, Alexander Rittweger.
  • You know all the buzz is in IoT now? Yes, we make our own hardware, with all the AWS IoT goodies inside. We are also part of the Intel® IoT Solutions Alliance.
  • How many people all in all? ~100 globally, 25 in Israel.
  • What kind of NN do you use? An RNN (LSTM, to be exact).
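For readers unfamiliar with the acronym: an LSTM is a recurrent cell with gates that decide what to forget, write, and expose at each timestep. A bare-bones scalar forward pass (purely illustrative, with made-up weights and data, unrelated to AppCard's actual models):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step; w maps each gate to (input_w, hidden_w, bias)."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate value
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    c = f * c_prev + i * g  # new cell state: keep some memory, add some input
    h = o * math.tanh(c)    # new hidden state, bounded in (-1, 1)
    return h, c

# Run a toy sequence (e.g. scaled weekly sales counts) through the cell.
weights = {k: (0.5, 0.5, 0.0) for k in "figo"}
h, c = 0.0, 0.0
for x in [1.0, 2.0, 1.5, 3.0]:
    h, c = lstm_step(x, h, c, weights)
print(round(h, 3))
```

Production models use a framework (TensorFlow here, per the answer above) with vector states and learned weights, but the gating math per step is exactly this.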


M.Sc. or Ph.D. in Computer Science, Mathematics, Statistics, Physics, or a related field.

5+ years of experience with data mining / machine learning / optimization.

Experience in one or more of the following areas is an advantage:

  • Marketing (Search Engine/Display/Social) auctioning and bidding algorithms – big advantage!
  • Deep learning.
  • Scalable Classification and Clustering techniques.
  • Statistical modeling / information gains.
  • Big data technologies.
  • Python.

About The Position

Kenshoo Research creates algorithms that bring real value to the world’s top advertisers. Its role consists of identifying and defining functionality from conception to delivery, by fully understanding customer requirements, data, ‘connecting the dots’, creating an algorithm, and delivering a working algorithm to production.

Role Description:

Researching, selecting and tuning machine learning models and algorithms to solve real world business problems using Kenshoo’s data: internal and 3rd party user interaction tracking (big data), publisher (Google, Facebook, Bing etc.) data and other industry data. Creating and improving algorithm components, specifically the algorithm code itself. Working with a team of data scientists that help each other to mine insights and find opportunities in interesting data from a variety of fields and geographies. Working with the algorithm developers team to integrate these algorithms with the production data sources and the application ecosystem.

Data Engineer



Bigabid is a fast-growing startup founded in 2015 by an experienced team of serial entrepreneurs and backed by some of the most prominent angel investors in Israel.

One of our goals at Bigabid is to disrupt the $200B App Marketing industry with an innovative User-Acquisition and Re-engagement platform powered by machine learning.

Our platform helps leading app developers worldwide acquire new loyal users for their business, whether it's games, e-commerce, or other utility apps.

As a data engineer you will be working on our massive (PBs) data pipeline, making sure the data is clean, whole, and accessible. Your team's goal is to create amazing, groundbreaking tools that make the data scientists more productive and agile. If you love working on complicated network pipelines, understand the importance of reliable data, have felt the pain of big data inconsistencies, and are the type who thinks of great solutions and wants to bring them to life, Bigabid is your best challenge.



  • Create and maintain optimal data pipeline architecture
  • Build and maintain our `feature store` and machine learning orchestration mechanism
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Spark and AWS ‘big data’ technologies.
  • Work with stakeholders including the executive, product and marketing teams to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools for analytics and data scientist team members
  • Work with data and analytics experts to strive for greater functionality in our data systems

Required Skills:

  • 5+ years of coding experience (preferably Python)
  • 2+ years' experience with big data tools: Hadoop, Spark, Kafka, Presto, EMR, etc.
  • Experience building and optimizing ‘big data’ pipelines, including message queuing, stream processing, and highly scalable data sets
  • Experience performing root cause analysis on internal and external data and processes
  • Strong organizational skills with the ability to juggle multiple tasks within constraints and timelines


Preferred qualifications:

  • Experience with Airflow or other workflow management software
  • Familiar with the Linux environment and bash scripting
  • Familiar with Machine Learning techniques

Ready for the challenge?


Position: Senior Algorithms and ML Developer

Siga – a start-up company located in Beer Sheva, Gav Yam 2


A start-up company is looking for a hands-on algorithm developer to take part in developing tools and algorithms for anomaly detection in multidimensional time series data.

Research anomaly detection algorithms, run simulations and back-testing, and integrate them into the framework.

Develop end-to-end algorithms for detecting anomalous events in real data in the OT environment.
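A common baseline for this kind of problem (a generic sketch, not Siga's actual approach) is a rolling z-score per dimension: flag a timestep when any channel deviates too far from the mean of its recent history. The data below is made up.

```python
from statistics import mean, pstdev

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices where any dimension's value is more than `threshold`
    standard deviations from the mean of the preceding `window` samples.
    `series` is a list of equal-length tuples (one tuple per timestep)."""
    flagged = []
    for t in range(window, len(series)):
        for d in range(len(series[t])):
            history = [series[s][d] for s in range(t - window, t)]
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(series[t][d] - mu) > threshold * sigma:
                flagged.append(t)
                break  # one anomalous channel is enough to flag the timestep
    return flagged

# Two-dimensional toy signal with one injected spike in channel 1 at index 6.
data = [(1.0, 10.0), (1.1, 10.2), (0.9, 9.9), (1.0, 10.1), (1.05, 10.0),
        (1.0, 10.1), (1.0, 25.0), (0.95, 10.0)]
print(zscore_anomalies(data))  # [6]
```

Real OT data would call for methods that model cross-channel correlations and seasonality, which is where the research part of this role comes in.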


B.Sc. in Electrical Engineering/Computer Science/Mathematics – MUST

M.Sc. in an algorithms-related field (deep learning/machine learning/optimization, etc.) – MUST

Programming skills & solid background in computers – MUST

2 years' experience in Python/MATLAB – MUST

Proven experience in training and evaluating machine learning models – MUST

Solid theoretical foundation & hands on experience in deep learning using deep learning frameworks (e.g. TensorFlow) – MUST



About the algorithm team:

·       We develop state-of-the-art machine learning algorithms with a particular emphasis on optimization and performance. We specifically deal with complex real-world data sets which need careful consideration and require non-conventional approaches.

Key responsibilities:

·       Perform research and development on new machine learning algorithms

·       Closely work with software and product teams to deliver optimized algorithms

·       Analyze and process large datasets to gain a deep understanding of real-world problems

·       Lead feature development from the requirement characterization stage, through technical research and design up to production

·       Keep track of latest state-of-the-art machine learning research


Desired Skills & Experience

·       M.Sc or PhD in CS, EE, Mathematics or Physics (or equivalent) from a leading university

·       At least 2 years of experience in developing machine learning algorithms

·       Proven experience in working with large datasets and extracting information from them

·       Strong mathematical skills as well as software proficiency

·       Programming experience (MATLAB / Python)

·       Comfortable tackling new problems and thinking outside the box

·       Self-starter, independent, highly motivated and creative
