Data Engineer

About The Position

Taranis is led by a vision to bring precision and control to the agriculture industry worldwide, helping growers maximize and stabilize yield from their crops.

We use planes and drones to collect high-resolution imagery of crops in the field, and we leverage cloud and edge software together with our unique AI assets to convert that imagery into insights growers can act on.

Taranis is the only player in the market that can provide visual proof of emerging crop threats before they harm the yield.

With accurate and actionable data, farmers increase their yields, spend less on herbicides and fungicides, and achieve healthier outcomes.

Having recently closed our Series C funding round, we are expanding our core AI team and looking for an experienced Computer Vision Engineer eager to make an impact in this important market segment.

If you are passionate about doing something good for the planet and helping growers produce more food with fewer chemicals, keep reading!

Responsibilities

  • Research and development in the fields of computer vision, image processing and machine learning.
  • Own the entire development cycle: researching appropriate techniques, designing a solution, implementing it, integrating it into the production pipeline, and providing subsequent maintenance and support.
  • Work closely with Product and R&D to understand and define technical requirements and to integrate the solution into the production pipeline.
  • Clearly communicate the ongoing progress of current projects and the expected delivery of milestones.
  • Stay current with the latest developments in the relevant fields.

Requirements

  • Excellent teamwork and problem-solving skills.
  • Master’s degree or equivalent in Computer Science, Physics, Electrical Engineering or Mathematics. PhD – an advantage.
  • At least 7 years of relevant, hands-on work experience in computer vision, image processing, machine learning and statistical modeling. Non-trivial experience with convolutional neural networks – a must.
  • Hands-on experience with modern machine learning frameworks (e.g., TensorFlow, PyTorch).
  • Experience with cloud computing environments (e.g., Google Cloud, AWS, Azure) and relevant execution frameworks (Docker, Kubernetes, etc.) – an advantage.
  • Excellent programming skills (especially Python).
  • Ability to research and understand the latest developments in the fields of AI, machine learning and computer vision.

To maintain and cultivate our unique DNA in Taranis, we look for people who

  • Love what they do and are passionate about making an impact
  • Adapt quickly in an intensive, fast-paced environment
  • Have strong attention to detail and a sense of accountability
  • Are quick learners, organized and personally dedicated to accomplishing objectives
  • Multitask well, have a strong work ethic and prioritize their workload effectively
  • Have great communication skills

Algorithm Expert

Nova Measuring Instruments Ltd.

The Algorithms Group at Nova is looking for an Algorithm Expert to develop advanced algorithms for next-generation Optical Critical Dimension (OCD) applications.

• Nova provides insights into process control in the world’s most technologically advanced industry. We employ physics, math, algorithms, software, and hardware expertise to redefine the limits of what is possible in semiconductor manufacturing.
• We invite you to join our dreamers and winners! Brilliant high-aimers who see the impossible as the starting point for exciting challenges, and who work together in multidisciplinary global teams to find answers.
• We dive deep, into the nanometric and atomic levels, to extract unique insights and provide our customers and partners with crucial decision-making data. Every one of us helps redefine what people can achieve through technology.
We simply do things differently. What about you?
• You’ll be joining the Effective Modeling team within Nova’s algorithms group:
Our goal is to transform and optimize device manufacturing and process control in the semiconductor industry. We implement algorithms in products deployed on Big Data ecosystems and on edge devices to solve our customers’ manufacturing challenges.
• What will you do as an “Algorithm Expert”?
 You will develop advanced algorithms for next-generation Optical Critical Dimension applications as part of a world-class team.
 Research, develop and implement state-of-the-art solutions using multiple tools, including advanced physical modeling, optimization tools and machine learning.
 Work closely with headquarters’ R&D, HW, SW and application engineers.
Requirements

What will make you succeed in the role?

o MSc in physics or similar (math/computer science/material science/semiconductors/electrical engineering/optics)
o PhD – advantage
o Proven ability to research and develop algorithms independently
o 3+ years previous experience in algorithm development
o Programming experience is a significant advantage – Python or similar (MATLAB), as well as C++/C#
o Excellent communication skills and team player

Data Engineer

DriftSense

DriftSense, a fast-growing startup in the AgTech field, is looking for a highly motivated person to join our team.

The new team member will be responsible for leading the company’s software projects and will take part in developing a cutting-edge prediction system – a system that comprises a variety of components and connects to different data sources and an in-house developed algorithm.

Responsibilities:

*Designing and implementing algorithms.

*Integration of various tools.

*Working side-by-side with the company's CTO.

Requirements:

Previous experience with the following:

*Automation tools (web crawlers – an advantage).

*Integration of cloud services (AWS – an advantage).

*Integration and management of Data Pipeline with databases.

*Machine Learning.

Strong background in algorithm design.

Education:

B.Sc. in Computer Science or similar degree (M.Sc. is an advantage).

We use AutoML and Deep Reinforcement Learning to optimize complex processes in the Manufacturing, Energy, and Natural Resources sectors.

We’re seeking a Senior Software Engineer to:
– Plan, design & develop a new framework for data processing and data research.
– Work with cutting-edge technologies in the fields of Deep Learning and Data Science.
– Create a new, industry-altering product from inception to production level.

Requirements:
– B.Sc. in Computer Science or another relevant scientific degree.
– At least 3 years of experience in developing complex Python-based data processing/analysis systems.
– Experience in building ETL infrastructure – an advantage
– Management experience – an advantage
– Familiarity with Data Science / Machine Learning algorithms – an advantage

Trax is looking for a Data Engineer to join its Digitization data science team. This person will extend our existing data teams and work with our Data Scientists and Data Analysts on building data pipelines and customized reports and on improving our data platforms. They will act as a key enabler of ML/statistical models, AI modules and advanced analysis over our production environment.

Responsibilities:

  • Design, build and deploy complex data pipelines, moving and manipulating Trax’s data.
  • Scrape and ETL data from different internal systems into our data platforms.
  • Shorten time to value of new modules by taking ownership of their productization process. Integrate and deploy these within the Trax environment.
  • Re-design or re-implement existing data-driven modules to improve performance while leveraging the most suitable technology.
  • Identify new and emerging technologies that may drive efficiency and introduce these to relevant architecture, research and development teams.

Requirements:

  • Ability to communicate and work with multiple interfaces in different roles
  • 2+ years of experience with server-side and microservice development
  • 3+ years of hands-on experience with Python, SQL and NoSQL, code deployment and version control
  • Thorough understanding of software engineering principles
  • Good familiarity with Spark, Scala and Linux power tools (bash, sed, awk, etc.) – an advantage
  • Client-side development – an advantage
  • B.Sc/M.Sc in Computer Science or Software Engineering

Revuze develops a cloud-based, text-analyzing engine that scans and analyzes both online and offline data sources that mention a customer’s brand, products, and competitors.

The company employs a team of 40 people.

We are looking to hire an experienced Big Data developer / Data Scientist for the algorithm team.

The position combines development from scratch in the field of big data with data science tasks in the field of NLP.

If you have a passion for data, experience with Python, and a willingness to be part of an algorithm team – we can offer you a great opportunity.

Requirements:

B.Sc/M.Sc in Computer Science, Mathematics, Engineering or related field

4+ years of programming experience with Python – must

Experience with building scalable, high-performance, data-oriented systems

Experience working with MongoDB and/or Cassandra

Experience working with Spark

Advantages:

Working with Data science libraries: Pandas, NumPy

Experience working with Scala

Experience in Data Science and Machine Learning techniques (NLP)

About the Role:

We're seeking a Software Engineer to help our data science team scale. Here are some examples of the challenges you'll be facing:

  • Build frameworks for BigPanda's ML efforts and be a key contributor to the discussion around which ML technologies will be used in the production environment
  • Design and implement the best production architecture for integrating models into our production pipelines
  • Automate repeatable routines for most machine learning tasks
  • Bring the best software development practices to the data science team and help them boost their work
  • Ensure that data science code is maintainable and scalable

What would make you a good fit?

  • You’re passionate about data, data science and ML
  • You're excited about building high performance distributed systems that scale
  • You're excited to build a product that solves real customer problems.

Requirements:

  • Highly proficient in Python, with at least 4 years of production experience
  • At least 3 years of experience writing Java/Scala/C# code for distributed systems used in production.
  • Experience working with any of these technologies: Kafka, ElasticSearch, MongoDB.

Nice to have:

  • Hands-on experience with stream processing for near real-time systems (Big Data, Kafka, distributed systems)
  • Production level experience in Scala
  • Hands-on experience with Akka framework.
  • Experienced working with cloud platforms, preferably AWS.
  • Experience with Seldon, Kubeflow and AWS SageMaker

Convizit’s advanced AI technologies are revolutionizing how user behavior data is collected, structured and analyzed.

As a Machine Learning Engineer, you will be part of an awesome multidisciplinary team of engineers and data scientists who are responsible for the most advanced visitor activity tracker in existence. Our tracker processes millions of online user actions, worldwide, every day. The resulting structured datasets are the most comprehensive ever generated for user behavior analysis and insight extraction. You will develop data-driven approaches and products that are changing the face of user behavior analytics.

What You’ll Do

  • Work closely with other ML engineers, software engineers and top-notch data scientists
  • Collaborate with product management to understand company needs and devise models and solutions
  • Research and implement machine learning algorithms for our unique data sets
  • Develop highly scalable, distributed systems as part of our data pipeline

What You Have

  • A degree in Computer Science or related field
  • An advanced degree in Computer Science or related field with machine learning experience – significant advantage
  • 6+ years of engineering experience, out of which at least 3 years of experience implementing machine learning algorithms into production code
  • Significant hands-on experience in Python
  • Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.)
  • Experience with ML and data science frameworks, such as Pandas, Keras, TensorFlow or PyTorch
  • Experience with big data frameworks, such as Kafka, Spark, or Hadoop
  • Experience with SQL and NoSQL databases
  • Experience with cloud SaaS environments (e.g., AWS, Azure, GCP)

Who You Are

  • Innovative and creative, you love to learn new things and find creative solutions to challenging problems
  • A great teammate, with excellent communication skills, who loves to collaborate with others
  • Agile, you adapt quickly to changes and thrive in a fast-paced environment
  • A natural leader, you are charismatic, responsible and accountable
  • You love to look at the big picture and lead the team to success

What will you do?

As an ML/Biometrics Engineer, you will dive deep into the technology that lies at the foundations of Keyless. You will support the machine learning team in designing and evaluating new techniques and models.

Use pretrained models as well as train new deep learning models for:

  • Face detection
  • Face recognition
  • Liveness detection
  • Iris recognition

You will also optimize model size and latency for mobile devices and integrate existing biometric solutions supplied by third-party vendors.

You will work in the dynamic environment of an early-stage startup, with a focus on prototyping and fast delivery.

Requirements

  • At least four years of experience leading a biometrics or machine learning team (outside of academia) deploying a solution into production
  • Having successfully shipped a biometrics or machine learning product
  • At least two years of experience with big data processing
  • At least two years of experience with liveness detection/anti-spoofing techniques

Nice to have

  • Mobile development experience (ideally porting machine learning models)
    • Android, Java, TensorFlow (Lite)
    • iOS, Swift, CoreML
  • Experience with startup environment

Bigabid is a data science company bent on revolutionizing the app marketing industry.
It is a 2nd-generation user acquisition and re-engagement DSP optimized for app developers.

We use a wide range of state-of-the-art ML techniques to analyze TBs of data and profile hundreds of millions of users through multiple proprietary data sources.

We process TBs of new data every day (PBs overall).
This data powers our machine learning predictions, our business-critical metrics, and the analytics that drive our decision making.

Bigabid is a fast-growing startup founded in 2015 by an experienced team of serial entrepreneurs and backed by some of the most prominent angel investors in Israel.

We are looking for a talented leader who can help us design the next-generation data architecture and tools for the data powering our company.
We are looking for individuals who can bring software engineering best practices into building both business and technical critical data pipelines.
Having excellent reliability for such a large and distributed cloud infrastructure is paramount to a data-driven company like Bigabid.

Your team's goal is to create amazing, groundbreaking tools that make our data scientists more productive and agile.

If you love working on complicated network pipelines, understand the importance of reliable data, have felt the pain of big data inconsistencies, and are the type who thinks of great solutions and wants to bring them to life, Bigabid is your best challenge.

Responsibilities:

  • Evangelize operational best practices and continuously look for opportunities to automate and build tools to lower operational barriers, improve clarity on problematic areas, and improve reliability.
  • Lead and grow a team of top-talent senior data engineers.
  • Transform our data architecture for massive scale and high performance.
  • Develop and automate large scale, high-performance data processing systems to drive Bigabid business growth and improve the product experience.
  • Design the architecture and schemas for our tables and be the technical lead for making them extendable, testable, maintainable and debuggable.
  • Design data models for optimal storage and retrieval, and optimize the data architecture to meet critical product and business requirements.
  • Understand logging and how it impacts the rest of our data flow, architect logging best practices where needed.
  • Develop research tools for our researchers from the Data Science and Data Analysis teams.
  • Safeguard the quality of the research teams’ code.

REQUIREMENTS

  • 8+ years of relevant industry experience.
  • A people leader and a technical leader.
  • Passion for building and motivating teams to reach their potential.
  • You are a strong partner and can drive cross-functional projects forward.
  • Experience building internal infrastructure that is shared across teams.
  • Experience working with data at the TB scale.
  • Experience designing, building and operating robust distributed systems.
  • Strong skills in relational databases and query authoring (SQL).
  • Experience with Python is preferred, deep knowledge of PySpark – a plus.
  • Experience designing and deploying high performance systems with reliable monitoring and logging practices – a plus
  • Experience with Machine Learning – a plus