Skocz do zawartości

Aktywacja nowych użytkowników
Zakazane produkcje

  • advertisement_alt
  • advertisement_alt
  • advertisement_alt
bookbb

Adversarial AI Attacks, Mitigations, and Defense Strategies A cybersecurity professional's guide to AI attacks

Rekomendowane odpowiedzi

1aa78e2b161ef9aafa8b0c57c98af7f5.webp
Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional's guide to AI attacks, threat modeling, and securing AI with MLSecOps by John Sotiropoulos
English | July 26, 2024 | ISBN: 1835087981 | 586 pages | PDF | 25 Mb
Learn how to defend AI and LLM systems against manipulation and intrusion through adversarial attacks such as poisoning, trojan horses, and model extraction, leveraging DevSecOps, MLOps and other methods to secure systems

Key FeaturesUnderstand the unique security challenges presented by predictive and generative AIExplore common adversarial attack strategies as well as emerging threats such as prompt injectionMitigate the risks of attack on your AI system with threat modeling and secure-by-design methodsPurchase of the print or Kindle book includes a free PDF eBookBook Description
Adversarial attacks trick AI systems with malicious data, creating new security risks by exploiting how AI learns. This challenges cybersecurity as it forces us to defend against a whole new kind of threat. This book demystifies adversarial attacks and equips you with the skills to secure AI technologies, moving beyond research hype or business-as-usual activities.
This strategy-based book is a comprehensive guide to AI security, presenting you with a structured approach with practical examples to identify and counter adversarial attacks. In Part 1, you'll touch on getting started with AI and learn about adversarial attacks, before Parts 2, 3 and 4 move through different adversarial attack methods, exploring how each type of attack is performed and how you can defend your AI system against it. Part 5 is dedicated to introducing secure-by-design AI strategy, including threat modeling and MLSecOps and consolidating recent research, industry standards and taxonomies such as OWASP and NIST. Finally, based on the classic NIST pillars, the book provides a blueprint for maturing enterprise AI security, discussing the role of AI security in safety and ethics as part of Trustworthy AI.
By the end of this book, you'll be able to develop, deploy, and secure AI systems against the threat of adversarial attacks effectively.
What you will learnSet up a playground to explore how adversarial attacks workDiscover how AI models can be poisoned and what you can do to prevent thisLearn about the use of trojan horses to tamper with and reprogram modelsUnderstand supply chain risksExamine how your models or data can be stolen in privacy attacksSee how GANs are weaponized for Deepfake creation and cyberattacksExplore emerging LLM-specific attacks, such as prompt injectionLeverage DevSecOps, MLOps and MLSecOps to secure your AI systemWho this book is for
This book tackles AI security from both angles - offense and defence. AI developers and engineers will learn how to create secure systems, while cybersecurity professionals, such as security architects, analysts, engineers, ethical hackers, penetration testers, and incident responders will discover methods to combat threats to AI and mitigate the risks posed by attackers. The book also provides a secure-by-design approach for leaders to build AI with security in mind.
To get the most out of this book, you'll need a basic understanding of security, ML concepts, and Python.
Table of ContentsGetting Started with AIBuilding Our Adversarial PlaygroundSecurity and Adversarial AIPoisoning AttacksModel Tampering with Trojan Horses and Model ReprogrammingSupply Chain Attacks and Adversarial AIEvasion Attacks against Deployed AIPrivacy Attacks - Stealing ModelsPrivacy Attacks - Stealing DataPrivacy-Preserving AIGenerative AI - A New FrontierWeaponizing GANs for Deepfakes and Adversarial Attacks(N.B. Please use the Read Sample option to see further chapters)

Download Links

Ukryta Zawartość

    Treść widoczna tylko dla użytkowników forum DarkSiders. Zaloguj się lub załóż darmowe konto na forum aby uzyskać dostęp bez limitów.

Udostępnij tę odpowiedź


Odnośnik do odpowiedzi
Udostępnij na innych stronach

Dołącz do dyskusji

Możesz dodać zawartość już teraz a zarejestrować się później. Jeśli posiadasz już konto, zaloguj się aby dodać zawartość za jego pomocą.

Gość
Dodaj odpowiedź do tematu...

×   Wklejono zawartość z formatowaniem.   Usuń formatowanie

  Dozwolonych jest tylko 75 emoji.

×   Odnośnik został automatycznie osadzony.   Przywróć wyświetlanie jako odnośnik

×   Przywrócono poprzednią zawartość.   Wyczyść edytor

×   Nie możesz bezpośrednio wkleić grafiki. Dodaj lub załącz grafiki z adresu URL.

    • 1 Posts
    • 17 Views
    • 1 Posts
    • 14 Views
    • 1 Posts
    • 31 Views
    • 1 Posts
    • 42 Views

×
×
  • Dodaj nową pozycję...

Powiadomienie o plikach cookie

Korzystając z tej witryny, wyrażasz zgodę na nasze Warunki użytkowania.