StatQuest with Josh Starmer
United States
Joined May 23, 2011
Statistics, Machine Learning and Data Science can sometimes seem like very scary topics, but since each technique is really just a combination of small and simple steps, they are actually quite simple. My goal with StatQuest is to break down the major methodologies into easy to understand pieces. That said, I don't dumb down the material. Instead, I build up your understanding so that you are smarter.
Contact, Video Index, Etc: statquest.org
Human Stories in AI: Simon Stochholm
In this episode we have special guest Simon Stochholm, a lecturer at UCL in Denmark. Simon applies machine learning, especially deep learning, to images, video and time series in a wide variety of settings. And by “wide variety”, I really mean it. Simon is fearless when it comes to seizing opportunities that come up and somehow turns them all into success stories.
If you'd like to support StatQuest, please consider...
Patreon: www.patreon.com/statquest
...or...
UA-cam Membership: ua-cam.com/channels/tYLUTtgS3k1Fg4y5tAhLbw.htmljoin
...buying my book, a study guide, a t-shirt or hoodie, or a song from the StatQuest store...
statquest.org/statquest-store/
...or just donating to StatQuest!
paypal: www.paypal.me/statquest
venmo: @JoshStarmer
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
joshuastarmer
#StatQuest
Views: 3,618
Videos
Log_e Song - Official Lyric Video
Views: 4.7K · 21 days ago
Check out the track on Spotify: open.spotify.com/track/4OcFh2yFOTUqzmjjwJF5QY When I first started making StatQuest videos it never dawned on me that people would try to re-do my math on their own. I was also new to explaining things and just assumed that everyone already knew that, in statistics and machine learning, when you use the log, you use base 'e'. Big rookie mistake! Ever since then, ...
Human Stories in AI: Brian Risk@devra.ai
Views: 4.3K · 1 month ago
In this episode we have special guest Brian Risk, a multi-talented data scientist and the President and Founder of devra.ai, a company specializing in automated coding. Brian is also a great personal friend of mine and an amazing musician. If you'd like to support StatQuest, please consider... Patreon: www.patreon.com/statquest ...or... UA-cam Membership: ua-cam.com/channels/tYLUTtgS3k1Fg4y5tAh...
The matrix math behind transformer neural networks, one step at a time!!!
Views: 42K · 1 month ago
Transformers, the neural network architecture behind ChatGPT, do a lot of math. However, this math can be done quickly using matrix math because GPUs are optimized for it. Matrix math is also used when we code neural networks, so learning how ChatGPT does it will help you code your own. Thus, in this video, we go through the math one step at a time and explain what each step does so that you ca...
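The matrix math the description above refers to can be sketched in NumPy. This is a minimal illustration of scaled dot-product attention, not the video's exact example; the dimensions and the randomly generated weights are made up:

```python
import numpy as np

# Sketch: scaled dot-product attention done entirely with matrix math,
# the core operation inside transformer neural networks.
np.random.seed(0)

tokens, d_model = 3, 4            # 3 encoded tokens, 4 values per token
X = np.random.randn(tokens, d_model)

# Hypothetical weight matrices; in a trained model these are learned.
W_q = np.random.randn(d_model, d_model)
W_k = np.random.randn(d_model, d_model)
W_v = np.random.randn(d_model, d_model)

Q, K, V = X @ W_q, X @ W_k, X @ W_v     # queries, keys, values

scores = Q @ K.T / np.sqrt(d_model)     # scaled similarity scores
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row

attention = weights @ V                 # weighted sums of the values
print(attention.shape)                  # (3, 4): one output per token
```

Every step is a matrix multiplication (plus a softmax), which is exactly why GPUs can compute attention for all tokens at once.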
Human Stories in AI: Fabio Urbina
Views: 4.1K · 1 month ago
In this episode we have special guest Fabio Urbina, an Associate Director at Collaborations Pharmaceuticals. Fabio combines computational tools and machine learning with classical small-molecule, molecular, and cell biology techniques to address previously difficult-to-probe scientific problems. Specifically, Fabio finds solutions to drug discovery with machine learning. If you'd like to suppor...
Human Stories in AI: Khushi Jain
Views: 7K · 1 month ago
In this episode we have special guest Khushi Jain, who works in Data Analytics Development at John Deere and is enrolled in the Master’s in Computer Science - Data Science at the University of Illinois. Having recently graduated with her bachelor’s, Khushi participated in the data science club and also completed several internships at John Deere. If you'd like to support StatQuest, please consider... Patreon...
Human Stories in AI: Achal Dixit
Views: 8K · 2 months ago
In this episode we have special guest Achal Dixit, a Data Scientist at Delhivery, the largest fully integrated logistics services provider in India. Achal solves problems using data, statistics, and machine learning with a focus on business and people. Before Delhivery, Achal was a Business Technology Analyst at ZS. And before that, Achal was a research assistant at Imperial College London. If you'd lik...
Human Stories in AI: Rick Marks
Views: 8K · 2 months ago
In this episode we have special guest Rick Marks, a professor at the University of North Carolina Chapel Hill School of Data Science and Society. Before UNC, Rick was a director at Google's Advanced Technology and Projects group, exploring new interaction approaches for ambient computing environments. And before that, Rick founded the PlayStation Magic Lab at PlayStation R&D. If you'd like to s...
Essential Matrix Algebra for Neural Networks, Clearly Explained!!!
Views: 42K · 5 months ago
Although you don't need to know matrix algebra to understand the ideas behind neural networks, if you want to code them or read the latest manuscripts about the field, then you'll need to understand matrix algebra. This video teaches the essential topics in matrix algebra and shows how a neural network can be written as a matrix equation, and then shows how to understand PyTorch documentation, err...
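The idea in the description above can be sketched in a few lines of NumPy (this is a minimal illustration, not the video's example; all the numbers are made up):

```python
import numpy as np

# Sketch: a tiny 2-input, 3-hidden-node, 1-output neural network written
# as a matrix equation, y = W2 @ relu(W1 @ x + b1) + b2.
# The specific weights and biases here are made up for illustration.
W1 = np.array([[ 1.0, -0.5],
               [ 0.3,  0.8],
               [-1.2,  0.4]])       # 3x2: weights into the hidden nodes
b1 = np.array([0.1, -0.2, 0.0])    # 3 hidden biases
W2 = np.array([[0.5, -1.0, 0.7]])  # 1x3: weights into the output
b2 = np.array([0.25])              # output bias

def relu(z):
    return np.maximum(0.0, z)      # ReLU activation, element-wise

x = np.array([0.5, 1.5])           # one input sample
hidden = relu(W1 @ x + b1)         # all hidden nodes computed at once
y = W2 @ hidden + b2               # the whole network as matrix math
print(y)
```

Writing the network this way is exactly why GPUs speed things up: every node in a layer is computed with one matrix multiplication instead of a loop.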
Word Embedding in PyTorch + Lightning
Views: 29K · 6 months ago
Word embedding is the first step in lots of neural networks, including Transformers (like ChatGPT) and other state-of-the-art models. Here we learn how to code a stand-alone word embedding network from scratch and with nn.Linear. We then learn how to load and use pre-trained word embedding values with nn.Embedding. NOTE: This StatQuest assumes that you are already familiar with Word Embedding, ...
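The nn.Linear vs. nn.Embedding equivalence mentioned above can be sketched without PyTorch at all (the vocabulary and weight values below are made up for illustration):

```python
import numpy as np

# Sketch of why an nn.Linear layer fed one-hot inputs and an nn.Embedding
# lookup give the same word embeddings: multiplying a one-hot vector by a
# weight matrix just selects one row of that matrix.
vocab = ["statquest", "is", "great", "bam"]
W = np.array([[ 1.0,  0.5],    # one row of weights per word,
              [-0.3,  0.2],    # two embedding values per word
              [ 0.7, -0.9],
              [ 2.0,  2.0]])

word_id = vocab.index("great")

# nn.Linear-style: a one-hot vector times the weight matrix...
one_hot = np.zeros(len(vocab))
one_hot[word_id] = 1.0
linear_result = one_hot @ W

# ...nn.Embedding-style: just look up the row directly (much faster).
embedding_result = W[word_id]

print(linear_result, embedding_result)  # identical embeddings
```

Because the lookup skips the multiplication entirely, nn.Embedding is the standard choice when the vocabulary is large.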
The Golden Play Button, Clearly Explained!!!
Views: 23K · 7 months ago
The Golden Play Button is usually super confusing. In this video, we break it down and walk you through it one-step-at-a-time. By the end of this StatQuest, you'll completely understand The Golden Play Button.
Another 3 lessons from my Pop!!!
Views: 11K · 8 months ago
Since September 4th is Global Frank Starmer Day (and also his birthday), I thought we'd celebrate by talking about another three lessons that influenced my older sister and my older brothers. If you'd like to support StatQuest, please consider... Patreon: www.patreon.com/statquest ...or... UA-cam Membership: ua-cam.com/channels/tYLUTtgS3k1Fg4y5tAhLbw.htmljoin ...buying my book, a study guide, a...
Decoder-Only Transformers, ChatGPT's specific Transformer, Clearly Explained!!!
Views: 93K · 8 months ago
Transformers are taking over AI right now, and quite possibly their most famous use is in ChatGPT. ChatGPT uses a specific type of Transformer called a Decoder-Only Transformer, and this StatQuest shows you how they work, one step at a time. And at the end (at 32:14), we talk about the differences between a Normal Transformer and a Decoder-Only Transformer. BAM! NOTE: If you're interested in le...
Transformer Neural Networks, ChatGPT's foundation, Clearly Explained!!!
Views: 553K · 9 months ago
Transformer Neural Networks are the heart of pretty much everything exciting in AI right now. ChatGPT, Google Translate and many other cool things, are based on Transformers. This StatQuest cuts through all the hype and shows you how a Transformer works, one-step-at-a time. NOTE: If you're interested in learning more about Backpropagation, check out these 'Quests: The Chain Rule: ua-cam.com/vid...
Attention for Neural Networks, Clearly Explained!!!
Views: 217K · 11 months ago
Attention for Neural Networks, Clearly Explained!!!
Sequence-to-Sequence (seq2seq) Encoder-Decoder Neural Networks, Clearly Explained!!!
Views: 151K · 1 year ago
Sequence-to-Sequence (seq2seq) Encoder-Decoder Neural Networks, Clearly Explained!!!
Word Embedding and Word2Vec, Clearly Explained!!!
Views: 244K · 1 year ago
Word Embedding and Word2Vec, Clearly Explained!!!
The AI Buzz, Episode #5: A new wave of AI-based products and the resurgence of personal applications
Views: 10K · 1 year ago
The AI Buzz, Episode #5: A new wave of AI-based products and the resurgence of personal applications
CatBoost Part 2: Building and Using Trees
Views: 16K · 1 year ago
CatBoost Part 2: Building and Using Trees
CatBoost Part 1: Ordered Target Encoding
Views: 27K · 1 year ago
CatBoost Part 1: Ordered Target Encoding
The AI Buzz, Episode #4: ChatGPT + Bing and How to start an AI company in 3 easy steps.
Views: 7K · 1 year ago
The AI Buzz, Episode #4: ChatGPT + Bing and How to start an AI company in 3 easy steps.
One-Hot, Label, Target and K-Fold Target Encoding, Clearly Explained!!!
Views: 41K · 1 year ago
One-Hot, Label, Target and K-Fold Target Encoding, Clearly Explained!!!
The AI Buzz, Episode #3: Constitutional AI, Emergent Abilities and Foundation Models
Views: 4.9K · 1 year ago
The AI Buzz, Episode #3: Constitutional AI, Emergent Abilities and Foundation Models
Mutual Information, Clearly Explained!!!
Views: 75K · 1 year ago
Mutual Information, Clearly Explained!!!
Cosine Similarity, Clearly Explained!!!
Views: 73K · 1 year ago
Cosine Similarity, Clearly Explained!!!
Long Short-Term Memory with PyTorch + Lightning
Views: 55K · 1 year ago
Long Short-Term Memory with PyTorch + Lightning
The AI Buzz, Episode #2: Big data, Reinforcement Learning and Aligning Models
Views: 6K · 1 year ago
The AI Buzz, Episode #2: Big data, Reinforcement Learning and Aligning Models
The AI Buzz, Episode #1: ChatGPT, Transformers and Attention
Views: 22K · 1 year ago
The AI Buzz, Episode #1: ChatGPT, Transformers and Attention
Design Matrix Examples in R, Clearly Explained!!!
Views: 11K · 1 year ago
Design Matrix Examples in R, Clearly Explained!!!
For the exponential distribution: why do all of your means stay in the interval [0, 2]? How come we don’t see some of those red lines over near 5 or 10?
Awesome bro thanks a lot. I like your Double bam!
Any time!
You mention R^2 as a more intuitive / useful method for understanding goodness of fit than correlation, but doesn't R^2 require the assumption that the model is linear (so it can't be used for logistic regression and other non-linear models)? Does correlation have this same requirement too?
Yes, they both share that requirement.
Do the normal quantile values come from the z-score calculation? In your example, the first quantile value in the dataset is 0.6 and the corresponding normal quantile value is -1.5. Does this value come from the z-score calculation? Thanks in advance
In this case I used a standard normal distribution, but you can use any normal distribution and just adjust the axis labels.
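To make the reply above concrete, here's a minimal sketch (using only Python's standard library; the sample size n = 5 is made up for illustration) of how the theoretical standard normal quantiles on a QQ plot can be computed:

```python
from statistics import NormalDist

# Sketch: split the standard normal distribution into n equal-probability
# chunks and take the midpoint quantile of each chunk. These are the
# theoretical quantiles that the sorted data get plotted against.
n = 5
std_normal = NormalDist(mu=0, sigma=1)
probs = [(i + 0.5) / n for i in range(n)]             # 0.1, 0.3, ..., 0.9
theoretical = [std_normal.inv_cdf(p) for p in probs]  # z-score quantiles
print([round(q, 2) for q in theoretical])
```

Using a normal distribution with a different mean or standard deviation just shifts and stretches these values, which is why only the axis labels change.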
Could someone please explain how this works when AI is being used to predict 3 or more classes? The thresholds needed to make the confusion matrices would be much more difficult to change to test all permutations, wouldn't they?
We only use this algorithm for two-category classification tasks, like obese vs. not obese; we use other algorithms to classify 3 or more categories.
@MLLearner is correct
WOWW! This was super helpful! Thanks Josh!
Glad it was helpful!
Josh if i was ____________________________________________REDACTED_______________________________________ thanks josh.
You are a one-of-a-kind ML teacher, and I don't think anyone else can explain these things like you. I'm so glad I found your channel!
Wow, thanks!
Hilarious, easy to understand, and entertaining. Bravo!
Glad you enjoyed it!
😂Aah That song is cool 😎
Bam! :)
In India, due to competition for the engineering entrance exams, teachers mostly focus on question types rather than concepts in depth. I'm glad there is a place where people learn for fun, not just for competition. I also appreciate teachers like you who can turn subjects into magic. Keep up the good work! From now on I'm permanently on this channel. You are the best!
Thank you very much!
I literally was having a mental breakdown because I was unable to understand things. Your video helped me a lot and also brought a smile to my face :))
Glad I could help!
this video was absolutely a BAM!!
Thanks!
Hello sir, you are doing great work for our community, but I have a humble request: please make a video on the math topics that are important for becoming an AI/ML engineer, with proper guidance, free learning resources, and a full roadmap for learning mathematics. Please, sir! 🙏🙏 And thanks for your hard work 😊.
I'll keep that in mind. One video that might help you get started is this one: ua-cam.com/video/ZTt9gsGcdDo/v-deo.html
love and respect +1
Thanks!
Been smashing my head against the black box.
I hope this video helped!
Very well explained! thanks for making this
Glad it was helpful!
Can we have the slides as pdf
I'm putting them in a book that will come out early 2025.
Is learning Python enough for this field, or do I have to know other languages as well?
Depends on the job and your career ambitions. However, just about every coder I know knows at least 3 programming languages.
This will help me greatly for my MS project.
Good luck!
Can the K-means clustering algorithm give wrong centroid values once in maybe 20 tries? I think in my code, sometimes due to the random initialization of the centroids, it ends up reaching some local optimum rather than obtaining the correct result. @statquest
See 3:58
5:45 I think we can blame the ML algorithm, because it is significantly worse than random guessing! 😁
:)
Thanks Josh, your channel is recommended by lecturers at Murdoch University, Australia. Well worth watching.
Thanks!
Hello StatQuest, I would like to say thank you for the amazing job. This content helped me understand a lot about how Attention works, especially because visual things help me understand better, and the way you join the visual explanation with the verbal one while keeping it interesting is on another level. Amazing work <3
Thank you!
Waiting for Fourth BAM!
:)
Double bam
:)
BAM!
:)
Thank you so much!!! Love the sound effects and the jokes
Glad you like them!
Amazing video, thank you! Can I ask, how are the weights and biases being learned in this example? Thank you 🙏
Since an LSTM is a type of neural network, we find the best weights and biases using backpropagation, just like for any other neural network. For more details on how backpropagation works, see: ua-cam.com/video/IN2XmBhILt4/v-deo.html ua-cam.com/video/iyn2zdALii8/v-deo.html and ua-cam.com/video/GKZoOHXGcLo/v-deo.html The only difference with LSTMs is that you have to unroll them for all of your data first and then calculate the derivatives. In the example in this video, that means unrolling the LSTM 4 times (as seen at 17:49), calculating the derivatives for each variable, starting at the output, for each copy, and then adding them together.
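The "calculate the derivatives for each copy and add them together" idea in the reply above can be sketched with a toy shared-weight function (this stands in for the unrolled LSTM; the inputs are made up):

```python
# Sketch: one weight w is shared across 4 unrolled copies, so the toy
# output is y = w*x1 + w*x2 + w*x3 + w*x4. Each copy contributes its own
# derivative, d(w*x)/dw = x, and the shared weight's gradient is the sum.
xs = [1.0, 2.0, 3.0, 4.0]    # made-up inputs, one per unrolled copy
w = 0.5

per_copy_derivatives = [x for x in xs]       # one derivative per copy
total_gradient = sum(per_copy_derivatives)   # add them together

# Sanity check against a numerical derivative of y with respect to w.
def y(w):
    return sum(w * x for x in xs)

h = 1e-6
numerical = (y(w + h) - y(w - h)) / (2 * h)
print(total_gradient, round(numerical, 3))
```

A real LSTM's derivatives are more complicated than d(w*x)/dw = x, but the accumulation across unrolled copies works the same way.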
Excuse me, what is your code editor?
I use jupyter.
thanks for the video
You're welcome!
Hi StatQuest, you said that scaling the inputs between 0 and 1 makes the math easier, but what would change if the inputs were not scaled? Also, great series of videos :))
Not much. The numbers would be larger and wouldn't fit so nicely in the small boxes I created.
Anyone watching this video in 2024: use ConfusionMatrixDisplay.from_estimator() instead of plot_confusion_matrix(), with the same parameters.
Yes, I updated the notebook.
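For context, here's a minimal sketch of what the matrix in question contains, computed by hand in plain Python; sklearn's ConfusionMatrixDisplay.from_estimator() (named in the comment above) draws the same counts for a fitted model. The labels and predictions below are made up for illustration:

```python
# Sketch: build a 2x2 confusion matrix by counting how often each actual
# category was predicted as each category.
actual    = ["obese", "obese", "not obese", "not obese", "obese", "not obese"]
predicted = ["obese", "not obese", "not obese", "obese", "obese", "not obese"]

labels = ["obese", "not obese"]
matrix = [[0, 0], [0, 0]]        # rows = actual, columns = predicted
for a, p in zip(actual, predicted):
    matrix[labels.index(a)][labels.index(p)] += 1

for label, row in zip(labels, matrix):
    print(label, row)
```

The diagonal holds the correct classifications and the off-diagonal entries hold the mistakes, which is what makes varying the classification threshold and re-counting so informative.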
Seeing your face for the first time. You are such a gem to people interested in statistics and machine learning. Very simplified. If I want to get some quick intuition before going deep into the textbook, no doubt I'll watch StatQuest. I want to meet you at least once. Thank you so much, man. Respect for you, buddy.
Thank you very much! :)
Highly educational and entertaining! Thanks :)
Thanks!
Finally got it ❤
bam! :)
Amazing explanation and a great voice! It is a pleasure to listen to you, and all the examples are funny. Love the pictures and examples. Thank you!
Thank you very much!
BAM! 😎😂
:)
@@statquest thanks for the vid dude enjoyed it so much, you make great content
Thanks for the great explanation.
You are welcome!
So what's the standard deviation of the mean of the means of the means called?
I believe that is exactly what it is called.
@@statquest oh cool, stats are dope
Why do you use 1:1 resampling instead of stratified resampling? The dataset contains 3.5 no_default : 1 default. Does this affect the SVM results?
What time point, minutes and seconds, are you asking about?
I have an exam in 3 days, yet now I am questioning why I didn't find this channel at the beginning of my semester............
Good luck!
@@statquest thank you!
you are legen wait for it dary
Thanks!
You are the boss
Thanks!
"BAM" it's been 5 years and this is still the best explanation of ML. "Double BAM" I discovered this channel 😆
Thank you! :)
infinite BAM!!!
:)
U R A LIFE SAVER, THANK U!!
You're welcome!
Josh Starmer 2024
:)
Great work, thank you Josh. I'm trying to connect ideas from different perspectives/angles. Is the lambda here somehow related to the Lagrange multiplier?
I'm not sure.
Thanks!!!! I was wondering if our greatest Dr. Starmer could come up with a video talking about CCA, Canonical Correspondence Analysis, as well!
I'll keep that in mind!