Risks and Opportunities in the AI World
I am happy to invite you to an event we are holding together with Elbit, called "Risks and Opportunities in the AI World". The event will take place on 4.5 at 18:00 and will include three technical, professional talks. Later on, we will upload all the slides and the recording of the event to this page.
You can register for the event via this link.

The talks:
🎙️ Speaker: Yatsir Shmueli, AI & DSP Engineer at Elbit Systems.
Title: “Deep learning for wireless communication”
Abstract: What do you get when deep learning technology meets wireless communication?
Deep learning enables end-to-end optimization of wireless networks, helps overcome RF complexity and non-linearity issues, and can improve PHY and signal-processing designs.
In this presentation, we'll introduce the use of deep learning modules in our radio and PHY layer.
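To give a flavor of the end-to-end idea ahead of the talk, here is a minimal, self-contained sketch of an "autoencoder" view of a PHY link, in which a neural encoder and decoder are trained jointly across a noisy channel. This only illustrates the general concept and is not the radio or PHY design presented in the talk; the architecture, block length, SNR, and training setup below are all assumptions.

```python
# A toy end-to-end ("autoencoder") link: a learned encoder maps messages to channel
# symbols, an AWGN channel adds noise, and a learned decoder recovers the message.
# All sizes, the SNR, and the architecture are assumptions made for this sketch.
import torch
import torch.nn as nn

torch.manual_seed(0)

k, n = 4, 8                     # k information bits per message, n real channel uses
M = 2 ** k                      # size of the message alphabet

encoder = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, n))
decoder = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, M))
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

def awgn(x, snr_db=7.0):
    """Add white Gaussian noise at an assumed per-sample SNR."""
    noise_std = 10 ** (-snr_db / 20)
    return x + noise_std * torch.randn_like(x)

for step in range(1000):
    msgs = torch.randint(0, M, (256,))                  # random messages
    tx = encoder(nn.functional.one_hot(msgs, M).float())
    tx = tx / tx.norm(dim=1, keepdim=True)              # unit-energy codewords
    logits = decoder(awgn(tx))                          # receive over a noisy channel
    loss = loss_fn(logits, msgs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final cross-entropy loss:", round(loss.item(), 4))
```

The point of the toy is that the transmitter and receiver are optimized together for the end task (recovering the message), rather than each processing block being designed in isolation.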
***
🎙️ Speaker: Ronen Greenberg, AI & PHY Lead at Elbit Systems.
Title: “Adversarial attacks on wireless communication”
Abstract: Deep neural networks have become common in wireless communication systems as a key enabler for tackling the challenges of complex communication environments.
However, like other applications based on deep neural networks, they are vulnerable to adversarial attacks: minor, carefully crafted perturbations of their inputs can significantly degrade a network's performance, raising concerns about how to secure AI-based systems.
In this presentation, we will review adversarial use cases, methods for attacking neural networks, methods for withstanding those attacks, and why it is important to keep these attacks in mind when designing AI-based solutions.
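As a concrete (and very simplified) illustration of what such an attack looks like in code, the sketch below applies a fast-gradient-sign (FGSM-style) perturbation to the input of a toy classifier. The model, data, and perturbation budget are invented for this example and are not taken from the talk.

```python
# A minimal FGSM-style adversarial perturbation, for illustration only.
# The toy "modulation classifier", its input, and epsilon are all assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier: maps a 64-sample input (e.g. a received signal window) to 4 classes.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 64)            # clean input
y = torch.tensor([2])             # its assumed true label

# Fast Gradient Sign Method: nudge the input in the direction that increases the loss.
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()
epsilon = 0.05                    # perturbation budget (assumed)
x_perturbed = x + epsilon * x_adv.grad.sign()

print("clean prediction:     ", model(x).argmax(dim=1).item())
print("perturbed prediction: ", model(x_perturbed).argmax(dim=1).item())
```

Even a small epsilon of this kind can change the prediction of an undefended network, which is exactly the fragility the abstract describes.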
***
🎙️ Speaker: Roei Schuster, PhD candidate at Tel Aviv University.
Title: “Data poisoning attacks against textual ML”
Abstract: The talk will first briefly survey the landscape of emerging attacker models against machine learning models, including poisoning/backdooring, training data leakage, model-stealing attacks, and adversarial examples.
Then, we will focus on a few recent advancements in data-poisoning attacks against textual and NLP models, which are particularly vulnerable to such attacks due to the commonplace use of open and public training data sources, like Wikipedia or GitHub. For example, an attacker who manipulates Wikipedia text is often able to "change the meaning of words" in learned word embeddings (potentially affecting many NLP task solvers); an attacker who manipulates open-source code on GitHub is often able to control suggestions given by code-autocompletion systems (potentially causing them to promote insecure code).
Finally, we will discuss the challenges in detecting and mitigating such attacks.
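To make the word-embedding example a bit more concrete, here is a deliberately crude, self-contained sketch of corpus poisoning: a handful of injected sentences pulls the representation of one word toward an unrelated word. The corpus, the injected sentences, and the count-based "embedding" used here are all invented for illustration and are far simpler than the attacks discussed in the talk.

```python
# A toy illustration of "changing the meaning of words" via corpus poisoning.
# Words are represented by simple co-occurrence counts (a crude stand-in for
# learned word embeddings); the corpus and poison sentences are made up.
import numpy as np
from collections import defaultdict
from itertools import combinations

def cooccurrence_embeddings(corpus):
    """Represent each word by its co-occurrence counts with every vocabulary word."""
    vocab = sorted({w for sent in corpus for w in sent})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = defaultdict(lambda: np.zeros(len(vocab)))
    for sent in corpus:
        for a, b in combinations(sent, 2):
            vecs[a][index[b]] += 1
            vecs[b][index[a]] += 1
    return vecs

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

clean = [
    ["the", "bank", "approved", "the", "loan"],
    ["she", "deposited", "money", "at", "the", "bank"],
    ["fish", "swim", "in", "the", "river"],
]

# The attacker injects sentences that repeatedly tie "bank" to the unrelated word "fish".
poison = [["the", "bank", "is", "a", "fish"]] * 20

for name, corpus in [("clean", clean), ("poisoned", clean + poison)]:
    vecs = cooccurrence_embeddings(corpus)
    print(name, "similarity(bank, fish) =", round(cosine(vecs["bank"], vecs["fish"]), 3))
```

Running the sketch shows the similarity between "bank" and "fish" jumping once the injected sentences are added, which is the toy analogue of an attacker shifting a word's meaning in a learned representation.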