Fundamentals Of Reinforcement Learning (2024)

Free Download Fundamentals Of Reinforcement Learning (2024)
Last updated 10/2024
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English | Size: 4.01 GB | Duration: 10h 39m
A systematic tour of foundational RL, from k-armed bandits to planning via Markov Decision Processes and TD learning

What you'll learn
Master core reinforcement learning concepts from k-armed bandits to advanced planning algorithms.
Implement key RL algorithms including Monte Carlo, SARSA, and Q-learning in Python from scratch.
Apply RL techniques to solve classic problems like Frozen Lake, Jack's Car Rental, Blackjack, and Cliff Walking.
Develop a deep understanding of the mathematical foundations underlying modern RL approaches.
Requirements
Students should be comfortable with Python programming, including NumPy and Pandas.
Basic understanding of probability concepts is beneficial (probability distributions, random variables, conditional and joint probabilities).
While familiarity with other machine learning methods is helpful, it's not required. We'll build the necessary reinforcement learning concepts from the ground up.
Section assignments are in pure Python (rather than Jupyter notebooks) and often span edits across multiple modules, so students should be set up with an editor (e.g. VS Code or PyCharm).
Description
Reinforcement learning is one of the most exciting branches of modern artificial intelligence. It came to public consciousness largely because of a brilliant early breakthrough by DeepMind: in 2016, they used reinforcement learning to smash a benchmark thought to be decades away, beating the world's greatest human player at the Chinese game of Go. This was so exceptional because the game tree for Go is enormous: the number of possible games is roughly a 1 followed by 200 zeros (a "gargoogol"!). Compare this with chess, whose tree has only around 10^50 nodes.

Chess fell in 1997, when IBM's Deep Blue beat the reigning world champion, Garry Kasparov. Deep Blue was the ultimate example of the previous generation of AI, Good Old-Fashioned AI or "GOFAI". A team of human grandmasters hard-coded opening strategies, piece and board valuations, and endgame databases into a powerful computer, which then crunched the numbers in a relatively brute-force way.

DeepMind's approach was very different. Instead of humans hard-coding heuristics for how to play a good game of Go, they applied reinforcement learning so that their algorithms could, by playing against themselves and winning or losing millions of times, work out good strategies for themselves. The result was a game-playing algorithm unbounded by the limitations of human knowledge. Go grandmasters to this day study the unique and creative moves it played in its series against Lee Sedol.

Since then, DeepMind have shown how reinforcement learning can be applied to real-life problems. A reinforcement learning agent controlling the cooling system of a Google data centre found strategies no human control engineer had thought of, such as exploiting winter temperatures to save on heating. Another of their agents, applied to an experimental fusion reactor, similarly found superhuman strategies for controlling the highly complex plasma in the reactor.

So reinforcement learning promises to help solve some of the grand problems of science and engineering, but it has plenty of more immediately commercial applications too: from A/B testing of products and website designs, to recommender systems that learn to match a company's customers with its products, to algorithmic trading, where the objective is to buy and sell stocks to maximise profit.

This course explains the fundamentals of this most exciting branch of AI. You will get to grips with the theory underpinning the algorithms and get hands-on practice implementing them yourself in Python. By the end of this course, you will have a fundamental grasp of these algorithms. We focus on "tabular" methods using simple NumPy arrays rather than neural networks, since one often gains the deepest understanding of a problem by paring it down to its simplest form and working through each step of an algorithm with pencil and paper. There is ample opportunity for that in this course, and each section is capped with a coding assignment where you build the algorithms yourself.

From there, the world is your oyster! Go solve driverless cars, make bajillions in a hedge fund, or save humanity by solving fusion power!
Overview
Section 1: Introduction
Lecture 1 Introduction
Lecture 2 Course overview
Section 2: K-armed bandits
Lecture 3 Introduction to k-armed bandits
Lecture 4 Setting the scene
Lecture 5 Initial concepts
Lecture 6 Action value methods // Greedy
Lecture 7 Action value methods // Epsilon-greedy
Lecture 8 Action value methods // Efficient implementation
Lecture 9 Non-stationary bandits
Lecture 10 Optimistic initial values
Lecture 11 Getting started with your first assignment: the 10-armed testbed
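The greedy and epsilon-greedy action-value methods covered in this section fit in a few lines of NumPy. The sketch below is illustrative, not course code: the 10-armed Gaussian testbed, the seed, and the parameter values are all assumptions.

```python
import numpy as np

def run_bandit(k=10, steps=1000, epsilon=0.1, seed=0):
    """Epsilon-greedy with incremental sample-average updates on a
    k-armed testbed with Gaussian rewards (hypothetical setup)."""
    rng = np.random.default_rng(seed)
    q_true = rng.normal(0.0, 1.0, k)   # true (unknown) action values
    Q = np.zeros(k)                    # estimated action values
    N = np.zeros(k)                    # times each action was taken
    rewards = np.zeros(steps)
    for t in range(steps):
        if rng.random() < epsilon:
            a = int(rng.integers(k))   # explore: a random action
        else:
            a = int(np.argmax(Q))      # exploit: the greedy action
        r = rng.normal(q_true[a], 1.0)
        N[a] += 1
        Q[a] += (r - Q[a]) / N[a]      # incremental mean: no need to store past rewards
        rewards[t] = r
    return Q, q_true, rewards

Q, q_true, rewards = run_bandit()
```

Setting epsilon to 0 recovers the pure greedy method; replacing the 1/N[a] step size with a small constant gives the exponential recency weighting used for non-stationary bandits.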
Section 3: Markov Decision Processes (MDPs)
Lecture 12 Introduction to MDPs
Lecture 13 From bandits to MDPs // setting the scene
Lecture 14 From bandits to MDPs // Frozen Lake walk-through
Lecture 15 From bandits to MDPs // Real world examples
Lecture 16 Goals, rewards, returns and episodes
Lecture 17 Policies and value functions
Lecture 18 Bellman equations // Expectation equation for v(s)
Lecture 19 Bellman equations // Expectation equation for q(s, a)
Lecture 20 Bellman equations // Optimality equations
Lecture 21 Walk-through // Bellman expectation equation
Lecture 22 Walk-through // Bellman optimality equation
Lecture 23 Walk-through // Matrix inversion
Lecture 24 MDP section summary
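For small MDPs, the Bellman expectation equation for a fixed policy, v = r + gamma * P v, is a linear system that can be solved in closed form, as in the matrix-inversion walk-through above. The 3-state chain and its numbers below are invented purely for illustration.

```python
import numpy as np

gamma = 0.9
# State-transition matrix under a fixed policy (rows sum to 1);
# state 2 is absorbing.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
r = np.array([1.0, 2.0, 0.0])    # expected immediate reward per state

# v = (I - gamma * P)^-1 r, computed by solving (I - gamma * P) v = r
v = np.linalg.solve(np.eye(3) - gamma * P, r)
```

np.linalg.solve is preferred over explicitly inverting the matrix for numerical stability. For large state spaces this O(n^3) direct solve gives way to the iterative dynamic-programming methods of the next section.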
Section 4: Dynamic Programming (DP)
Lecture 25 Introduction to Dynamic Programming
Lecture 26 Policy evaluation // introduction
Lecture 27 Policy evaluation // walk-through
Lecture 28 Policy improvement // introduction and proof
Lecture 29 Policy improvement // walk-through
Lecture 30 Policy iteration
Lecture 31 Value iteration // introduction
Lecture 32 Value iteration // walkthrough
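Value iteration, the last topic of this section, repeatedly applies the Bellman optimality backup until the value function stops changing. A minimal sketch, assuming transitions are supplied as a tensor P[s, a, s'] and expected rewards as R[s, a]; the tiny two-state MDP at the bottom is made up for illustration.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[s, a, s2]: transition probabilities; R[s, a]: expected rewards."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * P @ V               # Bellman optimality backup for every (s, a)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal values and a greedy policy
        V = V_new

# Two states, two actions: in state 0, action 1 moves to the absorbing
# state 1 for reward 1; everything else earns nothing.
P = np.zeros((2, 2, 2))
P[0, 0, 0] = 1.0
P[0, 1, 1] = 1.0
P[1, :, 1] = 1.0
R = np.array([[0.0, 1.0],
              [0.0, 0.0]])
V, policy = value_iteration(P, R)
```

Where policy iteration alternates a full policy evaluation with greedy improvement, value iteration effectively truncates the evaluation to a single sweep per improvement.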
Section 5: Monte Carlo methods
Lecture 33 Introduction to Monte Carlo methods
Lecture 34 Setting the scene
Lecture 35 Monte Carlo example // area of a pentagram
Lecture 36 Prediction
Lecture 37 Control - exploring starts
Lecture 38 Control - on-policy
Lecture 39 Control - off-policy // new concepts
Lecture 40 Control - off-policy // implementation
Lecture 41 Environment introduction // Blackjack
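The core Monte Carlo idea in this section — estimate an expectation by averaging samples — is the same trick behind the pentagram-area lecture. A sketch in that spirit, using a unit circle as a stand-in shape (the sample count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
pts = rng.uniform(-1.0, 1.0, size=(n, 2))   # uniform samples in a 2x2 bounding square
inside = (pts ** 2).sum(axis=1) <= 1.0      # hit test against the unit circle
area = 4.0 * inside.mean()                  # square area times the hit fraction
```

Monte Carlo prediction applies the same averaging to returns: the value of a state is estimated as the mean of the returns observed after visits to it.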
Section 6: Temporal Difference (TD) methods
Lecture 42 Introduction to TD methods
Lecture 43 Setting the scene
Lecture 44 Sarsa
Lecture 45 Q-learning
Lecture 46 Expected sarsa
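The tabular Q-learning update from this section — nudge Q(s, a) toward r + gamma * max Q(s', ·) — can be sketched on a toy environment. Everything below (the 5-state corridor, its reward, the hyperparameters) is an assumption for illustration, not course material.

```python
import numpy as np

# Tabular Q-learning on a hypothetical 5-state corridor: move left or right,
# earn reward 1 on reaching the right end, which terminates the episode.
rng = np.random.default_rng(0)
nS, nA = 5, 2                      # actions: 0 = left, 1 = right
Q = np.zeros((nS, nA))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for _ in range(500):               # episodes
    s = 0
    while s != nS - 1:
        a = int(rng.integers(nA)) if rng.random() < epsilon else int(Q[s].argmax())
        s2 = min(s + 1, nS - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == nS - 1 else 0.0
        done = s2 == nS - 1
        target = r + gamma * Q[s2].max() * (not done)
        Q[s, a] += alpha * (target - Q[s, a])   # off-policy Q-learning update
        s = s2
```

Sarsa differs only in the target: it bootstraps from Q[s2, a2] for the action actually taken next, and expected Sarsa from the epsilon-greedy expectation over Q[s2].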
Section 7: Planning methods
Lecture 47 Introduction to planning methods
Lecture 48 Filling the unforgiving minute
Lecture 49 Dyna-Q // introduction
Lecture 50 Dyna-Q // walk-through
Lecture 51 Planning with non-stationary environments: Dyna-Q+
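Dyna-Q, introduced in this section, augments the Q-learning update with a learned model: after every real step, it replays a handful of remembered transitions as simulated experience. A self-contained sketch on an invented deterministic 5-state corridor; the environment, exploration rate, and planning budget are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, n_plan = 5, 2, 10                    # actions: 0 = left, 1 = right
Q = np.zeros((nS, nA))
model = {}                                   # learned model: (s, a) -> (r, s', done)
alpha, gamma, epsilon = 0.1, 0.9, 0.3

for _ in range(100):                         # episodes
    s = 0
    while s != nS - 1:
        a = int(rng.integers(nA)) if rng.random() < epsilon else int(Q[s].argmax())
        s2 = min(s + 1, nS - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == nS - 1 else 0.0
        done = s2 == nS - 1
        # (a) direct RL: ordinary Q-learning update from real experience
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        # (b) model learning: deterministic world, so remember the last outcome
        model[(s, a)] = (r, s2, done)
        # (c) planning: replay n_plan remembered transitions as simulated experience
        for _ in range(n_plan):
            ps, pa = list(model)[rng.integers(len(model))]
            pr, ps2, pdone = model[(ps, pa)]
            Q[ps, pa] += alpha * (pr + gamma * Q[ps2].max() * (not pdone) - Q[ps, pa])
        s = s2
```

Dyna-Q+ extends this by adding an exploration bonus to long-untried model transitions, which is what lets it adapt when the environment changes.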
Section 8: Congratulations and feedback
Lecture 52 Congratulations!
This course is ideal for AI enthusiasts, computer science students, and software engineers keen to dive into reinforcement learning. Perfect for those with some programming experience who want to understand and implement cutting-edge AI algorithms from the ground up.
Screenshot
Homepage

Hidden content

    Content visible only to DarkSiders forum members. Log in or register a free account on the forum for unlimited access.

No Password - Links are Interchangeable
