With the amount of data growing at an exponential rate, AI is one of the hottest fields of this century. Do you want to be part of this exciting field and start your career as a Data Scientist or Data Engineer? Find out what MIcompany has to offer and become a leader in AI!
Everybody is talking about Big Data & AI. The overwhelming quantity and complexity of data pose a huge challenge for Data Scientists and Data Engineers. But how can you really create tangible business impact with data?
To do that, a new kind of leader is needed. A leader who is on top of the newest technologies and knows how to manage complex datasets. Who can create insights from huge amounts of data using advanced analytical techniques. A leader who can define growth opportunities and has a vision on Data and AI. A leader who builds relationships and combines these skills to create impact at scale.
Who are we?
MIcompany is an Artificial Intelligence (AI) company with offices in Tel Aviv and Amsterdam. From these offices, we drive AI transformations by building AI solutions and skills. Our team of more than 70 data scientists, AI engineers & software engineers serves industry-leading companies such as Nike, eBay, Booking.com, Heineken, KPN, LeasePlan, Aegon and Shufersal, in more than 25 countries.
We shape and drive AI transformations by building AI solutions. We develop the most promising AI use cases while also building the required technology capabilities. We build AI production platforms to operationalize and manage algorithms at scale. And we ensure business adoption of algorithmic decision making by implementing applications and process change.
We are AI innovators. We implement the newest modeling techniques, create state-of-the-art technology solutions and unlock new data sources using advanced data capturing techniques. To ensure these innovations contribute to sustainable value creation, we combine them with our expert business knowledge and cross-industry experience.
Why work for MIcompany?
BECOME A LEADER IN AI
Learn to create lasting breakthroughs at international scale
Create LASTING IMPACT with AI at the strategic core of industry leaders
At MIcompany you work at the strategic core of our clients. You contribute to the development of algorithmic applications that transform high impact business processes.
Work all over the world in a multidisciplinary and AMBITIOUS TEAM of analytical talents
At MIcompany you will work in an environment where you can develop yourself optimally, thanks to a multidisciplinary team of ambitious analytical talent, a focus on personal development and plenty of international opportunities.
Help build our new Tel-Aviv office and become an integral part of the team
You will join a young, positive and ambitious team on our journey to bring AI transformation to leading Israeli companies. Take part in quarterly team building events, monthly delicious team dinners and weekly happy hours!
Learn how to change organizations using AI models in our CERTIFIED EDUCATION PROGRAM
You will join a 3-year AI & Data Talentship Program, in which you learn about the newest AI techniques and how to implement them in practice. You will also learn how to change organizations using AI. Our education is certified by GAIn (Global Artificial Intelligence network).
Become part of a company investing in PURPOSEFUL BREAKTHROUGHS
At MIcompany we invest in diversity & doing good. We believe that our experience and skills can contribute in fields such as DNA research, the cultural sector and charities.
What is the position about?
As a Data Scientist, you will be working in a team with other Data Scientists and AI engineers. From the start you will contribute to challenging projects for leading companies from various industries. You will build meaningful, innovative and impactful models using advanced analytical techniques. Your work is very diverse, on a strategic, analytical and technical level. Step-by-step you will develop all skills to become an AI leader.
Machine Learning Engineer
Clew Medical's innovative analytics solution alerts ICU teams about a patient's possible deterioration so they can provide the right treatment before their condition becomes critical.
We are looking for an expert Machine Learning Engineer to join our team as we productize our product offering and expand our predictive analytics capabilities to new clinical areas. The ideal candidate should be eager to learn and grow, and comfortable jumping headfirst into new concepts and technologies.
What will you do?
As a Senior ML Engineer, you will be part of the Data Science and Analytics team, which develops cutting-edge analytical and AI techniques in data-intensive settings, addressing some of the most pressing challenges in healthcare:
· Design and develop novel algorithms and machine learning models.
· Create and automate data pipelines including data extraction, validation & anomaly detection, indexing and other data-related tasks (see the sketch after this list).
· Quickly iterate on design approaches and POCs based on data-driven research and client feedback.
· Push the solutions all the way to production. Understand the architectural constraints and work with the engineering and product teams to quickly transition from prototype to a scalable implementation.
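As a rough illustration of the kind of validation and anomaly-detection step such a pipeline might include, here is a minimal pandas sketch; the file names, column names and thresholds are hypothetical placeholders, not Clew's actual schema or logic.

import pandas as pd

# Hypothetical input: one row per vital-sign measurement streamed from the ICU.
vitals = pd.read_parquet("vitals.parquet")  # placeholder file name

# Validation: drop rows with missing identifiers or physiologically implausible values.
valid = vitals.dropna(subset=["patient_id", "timestamp", "heart_rate"])
valid = valid[valid["heart_rate"].between(20, 300)].copy()
valid = valid.sort_values(["patient_id", "timestamp"])

# Simple anomaly flag: readings more than 3 standard deviations away from the
# patient's own rolling baseline.
grouped = valid.groupby("patient_id")["heart_rate"]
baseline_mean = grouped.transform(lambda s: s.rolling(50, min_periods=10).mean())
baseline_std = grouped.transform(lambda s: s.rolling(50, min_periods=10).std())
valid["hr_anomaly"] = (valid["heart_rate"] - baseline_mean).abs() > 3 * baseline_std

valid.to_parquet("vitals_validated.parquet")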
Must have skills:
· 3+ years of hands-on experience in Python/Scala/Java/etc.
· 1+ years of relevant experience and a track record in Data Science: Statistical Data Analysis, Algorithms and ML development.
· B.Sc. or M.Sc. in CS, Mathematics, Bioinformatics, Statistics, Engineering, Physics, or a similar discipline.
· High technical competence, including a track record of strong coding and individual technical contribution.
· Proven experience creating and maintaining ML solutions in production systems.
· Collaborative team player, able to work with doctors and clinical subject matter experts.
Extra points for:
· Experience with large datasets and distributed computing (Spark, Hive/Hadoop, etc.)
· Familiarity with databases and database structures (SQL/NoSQL)
· Great presentation skills and the ability to explain complex solutions to non-technical colleagues.
Things we appreciate
· Strong analytical thinking ability
· Self-motivated and strong team contributor
· Ability to work in an agile and dynamic environment.
Why join us now?
Our product is operational, and we have strong financial backing from Israel’s leading venture capital funds. We are at a critical stage in product development, and all team members are encouraged to take initiative and become full partners in this life-changing mission.
We have funding, ability, a strong team, and an important mission.
Join us for a life-changing mission!
Asensus Surgical is currently seeking a Data Operations Engineer to join our top-notch Research & Development team in Israel.
This is an exciting time to join Asensus and to be part of a leading edge team that is pioneering a new era of Performance-Guided Surgery.
You will own a hybrid data environment and will ensure that the team has timely, accurate and complete data sets to drive their activities.
We are looking for an individual who is passionate about data, a team player and a strong communicator. Someone with the ability to communicate results and insights in a clear and concise manner to a non-technical audience.
Who We Are
As a medical device company, Asensus is digitizing surgery to pioneer a new era of Performance-Guided Surgery. We utilize robotic technology to improve minimally invasive surgery in ways that matter to patients, physicians, surgical staff, and hospitals, enabling consistently superior outcomes and a new standard of surgery. Our employees are especially passionate about the work they do and thrive in a collaborative environment that fosters creative solutions to complex problems. The work is challenging, but everyone comes to Asensus Surgical looking for a fulfilling career, and that's exactly what they find.
What You Bring
What You’ll Do
What We Offer
At Asensus Surgical, we believe in contributing to a society that welcomes diverse voices and values differences in lived experiences, culture, religion, age, gender identity, sexual orientation, race, ethnicity, and neurodiversity. We are committed to ensuring this same environment for our employees – a culture where individuals feel safe, heard, and respected. We celebrate the uniqueness of our global workforce and know that only through inclusion, ongoing learning, and partnership can we succeed. Together we are all stronger.
Intelligo is a fast-growing startup with proven product-market fit and a steady revenue stream. We recently completed our Series B and are headquartered in Petach Tikva, Israel.
Our mission is to empower trust and manage risk by providing institutional investors, investment banks, capital allocators, law firms, and corporations with advanced capabilities to run comprehensive background checks powered by cutting-edge artificial intelligence and machine learning. Our Clarity™ product is a one-stop platform delivering quick checks, deep reports, and continuous monitoring with ease, speed, and accuracy.
The Role
We’re looking for a talented, independent Data Engineer specializing in Python to join our growing data science team, which is responsible for the design and implementation of Intelligo’s revolutionary automated SaaS background-check solution.
The ideal person for this role is someone who is a self-starter, self-directed, and is comfortable supporting multiple production implementations for various use cases.
Things move fast at Intelligo and we’re looking for someone who can adapt quickly in our fast-paced startup environment.
What does the day to day look like?
Experience we’re looking for:
Technologies:
About Incredibles
Incredibles, a Team8 Fintech portfolio company, builds upon the rise of eCommerce, focusing on allowing its partners to unlock financing opportunities for their e-commerce sellers. By leveraging deep industry knowledge, traditional and alternative data, and cutting-edge technologies, we are set to offer superior creditworthiness assessments for e-commerce businesses. Our product ranges from financial risk assessment, monitoring and underwriting to fully white-label lending services.
About Team8
Team8 is a company-building venture group that builds and invests in companies specializing in Fintech, enterprise technologies, data, AI, and cybersecurity. The Team8 model supports entrepreneurs with an in-house team of researchers, growth experts, and talent acquisition specialists, and a “village” community of enterprise C-level executives and thought leaders. Whether building a new company from scratch or investing in companies already on their journey, Team8 brings a rich ecosystem to work hand in hand with entrepreneurs in accelerating their path to success.
The Team8 Fintech Team
Bringing unparalleled expertise in banking, credit, e-commerce, payments, capital markets, and wealth management from financial incumbents and startups that have grown into billion-dollar valuation companies.
Description
We’re looking for a curious, creative, ambitious Data Engineer. If you’re all that and looking to lead, invent, and grow professionally you should definitely consider applying to join us on our journey!
Responsibilities
We are looking for an experienced Data Engineer to take part in architecture and development. Your role will include:
Building pipelines to crawl, clean, and structure large datasets that form the basis of our platform and IP.
Defining the architecture and evaluating tools and open-source projects to use within our environment.
Taking a leading part in the development of our technology platform and ecosystem.
Requirements:
At least 3 years of experience with data engineering in Python (or an equivalent language)
Experience with cloud platforms (AWS, Google GCP, Azure), working on production workloads of large scale and complexity.
Experience in working with Kubernetes and AWS Lambda functions.
Experience in working on enterprise software, or data/fintech products.
Big Advantage: Experience with AWS services such as Athena, Kinesis, EKS, MSK, and others.
Big Advantage: Hands-on experience with data science tools, packages, and frameworks
Big Advantage: Hands-on experience with ETL Flow
Which department will you join?
Mobileye’s Road Experience Management (REM) is an end-to-end mapping and localization engine for full autonomy.
We build an autonomous 3D map with an accuracy of a few centimeters, called the "Roadbook", that contains all the necessary information for driving.
Our "Roadbook" is built using crowdsourcing, aggregating information being sent to the cloud from hundreds of thousands of cars driving with the Mobileye chip.
To use the map for full autonomy, we develop semantic understanding of the road, lanes, objects, and relations between them.
For more information about the mapping, you can watch Amnon's lecture at CES.
How will your job look like?
Some challenges are easy for humans but are still difficult for computers.
You will use classical and geometric algorithms together with Deep Learning and AI to solve those challenges.
We use Python, Numpy, Pandas and Spark to develop algorithms that run in a distributed fashion on AWS over very large amounts of data.
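As a rough illustration of what this kind of distributed processing can look like, here is a minimal PySpark sketch; the S3 paths and column names are hypothetical placeholders, not Mobileye's actual data model.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("drive-aggregation-sketch").getOrCreate()

# Hypothetical input: one row per observation uploaded from a car.
observations = spark.read.parquet("s3://example-bucket/drive-observations/")

# Aggregate crowdsourced observations per road segment into map statistics.
segment_stats = (
    observations
    .groupBy("segment_id")
    .agg(
        F.count("*").alias("num_observations"),
        F.avg("lane_width_m").alias("avg_lane_width_m"),
        F.collect_set("landmark_type").alias("landmark_types"),
    )
)

segment_stats.write.mode("overwrite").parquet("s3://example-bucket/segment-stats/")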
Knowledge in Python / Numpy – an advantage
We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.
Responsibilities
Job Requirements:
B.S. in Computer Science, a similar technical field, or equivalent experience
Skills Highly Preferred:
ThetaRay is the leading provider of AI-based Big Data analytics.
We are dedicated to helping financial organizations combat financial cybercrime such as money laundering and fraud, which facilitate malicious crimes such as terrorist financing, narco-trafficking, and human trafficking that negatively impact the global economy.
Our Intuitive AI solutions for Anti-Money Laundering and Fraud Detection enable clients to manage risk, detect money laundering schemes, uncover fraud, expose bad loans, uncover operational issues, and reveal valuable new growth opportunities.
We are looking for a Big Data Engineer to join our growing team of data experts.
The hire will be responsible for designing, implementing, and optimizing ETL processes and data pipeline flows within the ThetaRay system.
The ideal candidate has experience building data pipelines and data transformations, and enjoys optimizing data systems and building them from the ground up.
The Big Data Engineer will support our data scientists with the implementation of the relevant data flows based on the data scientists' feature designs.
They must be self-directed and comfortable supporting multiple production implementations for various use cases, part of which will be conducted on-premise at customer locations.
Key Responsibilities
● Implement and maintain data pipeline flows in production within the ThetaRay system based on the data scientist’s design.
● Design and implement solution-based data flows for specific use cases, enabling applicability of implementations within the ThetaRay product.
● Build Machine Learning data pipelines (see the sketch after this list).
● Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
● Work with product, R&D, data, and analytics experts to strive for greater functionality in our systems.
● Train customer data scientists and engineers to maintain and amend data pipelines within the product.
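As a rough illustration of the kind of transformation, cleansing and feature-engineering step such a pipeline might contain, here is a minimal PySpark sketch; the file names and column names are hypothetical placeholders, not the ThetaRay product's actual flow.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("aml-feature-sketch").getOrCreate()

# Hypothetical input: raw transaction records.
transactions = spark.read.parquet("transactions.parquet")

# Cleansing and validation: drop malformed rows and non-positive amounts.
clean = (
    transactions
    .dropna(subset=["account_id", "counterparty_id", "amount"])
    .filter(F.col("amount") > 0)
)

# Simple per-account features a downstream anomaly-detection model could consume.
features = (
    clean
    .groupBy("account_id")
    .agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("total_amount"),
        F.countDistinct("counterparty_id").alias("distinct_counterparties"),
    )
)

features.write.mode("overwrite").parquet("account_features.parquet")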
Requirements
● 1+ years of hands-on experience working with an Apache Spark cluster
● 1+ years of hands-on experience and knowledge of Spark scripting languages: PySpark/Scala/Java/R
● 2+ years of hands-on experience with SQL
● 1+ years of experience with data transformation, validations, cleansing, and ML feature engineering in a Big Data Engineer role
● BSc degree or higher in Computer Science, Statistics, Informatics, Information Systems, Engineering, or another quantitative field.
● Experience working with and optimizing ‘big data’ data pipelines, architectures, and data sets.
● Strong analytic skills related to working with structured and semi-structured datasets.
● Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
● Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
● Business oriented and able to work with external customers and cross-functional teams.
● Willingness to travel abroad to customers as needed (up to 25%)
Nice to have
● Experience with Linux
● Experience in building Machine Learning pipeline
● Experience with Elasticsearch
● Experience with Zeppelin/Jupyter
● Experience with workflow automation platforms such as Jenkins or Apache Airflow
● Experience with Microservices architecture components, including Docker and Kubernetes.