Knowledge Hub

It's never too late to start learning.

AI & ML




Building LLMs from the Ground Up: A 3-hour Coding Workshop
A 3-hour tutorial for coders who want to understand the building blocks of large language models (LLMs), how LLMs work, and how to code them from the ground up in PyTorch. It opens with an introduction to LLMs, recent milestones, and their use cases; then codes a small GPT-like LLM, including its data input pipeline, core architecture components, and pretraining code; and, once everything fits together, shows how to load pretrained weights and finetune LLMs using open-source libraries.

References:
1. Build an LLM from Scratch book: https://mng.bz/M96o
2. Build an LLM from Scratch repo: https://github.com/rasbt/LLMs-from-scratch
3. Workshop code: https://github.com/rasbt/LLM-workshop-2024
4. Lightning Studio for this workshop: https://lightning.ai/lightning-ai/studios/llms-from-the-ground-up-workshop
5. LitGPT: https://github.com/Lightning-AI/litgpt

Outline:
0:00 – Workshop overview
2:17 – Part 1: Intro to LLMs
9:14 – Workshop materials
10:48 – Part 2: Understanding LLM input data
23:25 – A simple tokenizer class
41:03 – Part 3: Coding an LLM architecture
45:01 – GPT-2 and Llama 2
1:07:11 – Part 4: Pretraining
1:29:37 – Part 5.1: Loading pretrained weights
1:45:12 – Part 5.2: Pretrained weights via LitGPT
1:53:09 – Part 6.1: Instruction finetuning
2:08:21 – Part 6.2: Instruction finetuning via LitGPT
2:26:45 – Part 6.3: Benchmark evaluation
2:36:55 – Part 6.4: Evaluating conversational performance
2:42:40 – Conclusion
www.youtube.com
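
Part 2 of the workshop builds a simple tokenizer class that maps between strings and integer token IDs. As a flavor of that step, here is a minimal word-level sketch; the regex split and the greedy vocabulary construction are illustrative assumptions, not the workshop's exact code:

```python
import re

class SimpleTokenizer:
    """Toy word-level tokenizer: text -> integer IDs and back."""

    _SPLIT = r'([,.:;?!"()\']|\s)'  # split on punctuation/whitespace, keep punctuation

    def __init__(self, text):
        tokens = [t for t in re.split(self._SPLIT, text) if t.strip()]
        vocab = sorted(set(tokens))
        self.str_to_id = {tok: i for i, tok in enumerate(vocab)}
        self.id_to_str = {i: tok for tok, i in self.str_to_id.items()}

    def encode(self, text):
        # Real tokenizers handle unknown words (e.g. an <|unk|> token); this toy doesn't.
        tokens = [t for t in re.split(self._SPLIT, text) if t.strip()]
        return [self.str_to_id[t] for t in tokens]

    def decode(self, ids):
        # Joins with spaces, so punctuation spacing is lossy in this toy version.
        return " ".join(self.id_to_str[i] for i in ids)

tok = SimpleTokenizer("Hello, world. Is this a token?")
ids = tok.encode("Hello, world.")
print(ids)               # token IDs for: Hello , world .
print(tok.decode(ids))
```

Production LLMs use subword schemes like Byte Pair Encoding instead of whole words; see the GPT Tokenizer entry under Advanced below.
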
Machine Learning for Everybody – Full Course
A machine learning course accessible to absolute beginners, developed by Kylie Ying (https://www.youtube.com/c/YCubed). It covers the basics of machine learning and how to use TensorFlow to implement many different concepts.

Code and resources:
- Supervised learning (classification/MAGIC): https://colab.research.google.com/drive/16w3TDn_tAku17mum98EWTmjaLHAJcsk0?usp=sharing
- Supervised learning (regression/bikes): https://colab.research.google.com/drive/1m3oQ9b0oYOT-DXEy0JCdgWPLGllHMb4V?usp=sharing
- Unsupervised learning (seeds): https://colab.research.google.com/drive/1zw_6ZnFPCCh6mWDAd_VBMZB4VkC3ys2q?usp=sharing
- Datasets (for the bikes dataset, you may have to open the downloaded CSV file and remove special characters):
  - MAGIC: https://archive.ics.uci.edu/ml/datasets/MAGIC+Gamma+Telescope
  - Bikes: https://archive.ics.uci.edu/ml/datasets/Seoul+Bike+Sharing+Demand
  - Seeds/wheat: https://archive.ics.uci.edu/ml/datasets/seeds

Contents:
(0:00:00) Intro
(0:00:58) Data/Colab Intro
(0:08:45) Intro to Machine Learning
(0:12:26) Features
(0:17:23) Classification/Regression
(0:19:57) Training Model
(0:30:57) Preparing Data
(0:44:43) K-Nearest Neighbors
(0:52:42) KNN Implementation
(1:08:43) Naive Bayes
(1:17:30) Naive Bayes Implementation
(1:19:22) Logistic Regression
(1:27:56) Logistic Regression Implementation
(1:29:13) Support Vector Machine
(1:37:54) SVM Implementation
(1:39:44) Neural Networks
(1:47:57) TensorFlow
(1:49:50) Classification NN using TensorFlow
(2:10:12) Linear Regression
(2:34:54) Linear Regression Implementation
(2:57:44) Linear Regression using a Neuron
(3:00:15) Regression NN using TensorFlow
(3:13:13) K-Means Clustering
(3:23:46) Principal Component Analysis
(3:33:54) K-Means and PCA Implementations

A freeCodeCamp.org course: https://www.freecodecamp.org
www.youtube.com
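
The first classifier the course implements is K-Nearest Neighbors (0:44:43). Here is a minimal scikit-learn sketch of that workflow on synthetic data rather than the course's MAGIC dataset; the feature scaling step and k=5 are assumptions in the spirit of the course, not its exact notebook:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

# Synthetic stand-in data: 500 samples, 4 features, a simple separable label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale features (KNN is distance-based, so scaling matters), then fit and evaluate.
scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(scaler.transform(X_train), y_train)
print(classification_report(y_test, knn.predict(scaler.transform(X_test))))
```
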

101


CNN

CS231n - Convolutional Neural Networks for Visual Recognition:
Lecture 1 | Introduction to Convolutional Neural Networks for Visual Recognition
Lecture 1 gives an introduction to the field of computer vision, discussing its history and key challenges. It emphasizes that computer vision encompasses a wide variety of tasks, and that despite the recent successes of deep learning we are still a long way from human-level visual intelligence.

Keywords: Computer vision, Cambrian Explosion, Camera Obscura, Hubel and Wiesel, Block World, Normalized Cut, Face Detection, SIFT, Spatial Pyramid Matching, Histogram of Oriented Gradients, PASCAL Visual Object Challenge, ImageNet Challenge

Slides: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture1.pdf

Instructors: Fei-Fei Li (http://vision.stanford.edu/feifeili/), Justin Johnson (http://cs.stanford.edu/people/jcjohns/), Serena Yeung (http://ai.stanford.edu/~syyeung/)

About the lecture collection: computer vision has become ubiquitous, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these are visual recognition tasks such as image classification, localization, and detection, whose performance has been greatly advanced by neural network ("deep learning") approaches. This collection is a deep dive into deep learning architectures, focusing on end-to-end models for these tasks, particularly image classification. Students learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.

Website: http://cs231n.stanford.edu/ | Additional learning opportunities: http://online.stanford.edu/
www.youtube.com
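
The course builds toward end-to-end convolutional networks for image classification. As a taste of what that looks like, here is a minimal PyTorch sketch; the layer sizes and CIFAR-style 32x32 RGB / 10-class setup are hypothetical, not the course's assignment code:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Two conv+pool stages followed by a linear classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(4, 3, 32, 32))  # batch of 4 fake images
print(logits.shape)                            # torch.Size([4, 10])
```
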

RNN

NLP

ML OPS

Computer Vision

Advanced

The spelled-out intro to neural networks and backpropagation: building micrograd
The most step-by-step, spelled-out explanation of backpropagation and neural network training. It assumes only basic knowledge of Python and a vague recollection of calculus from high school.

Links:
- micrograd on GitHub: https://github.com/karpathy/micrograd
- Jupyter notebooks built in this video: https://github.com/karpathy/nn-zero-to-hero/tree/master/lectures/micrograd
- Exercises (Google Colab): https://colab.research.google.com/drive/1FPTx1RXtBfc4MaTkf7viZZD4U2F9gtKN?usp=sharing
- Neural Networks: Zero to Hero series Discord: https://discord.gg/3zy8kqD9Cp

Chapters:
00:00:00 intro
00:00:25 micrograd overview
00:08:08 derivative of a simple function with one input
00:14:12 derivative of a function with multiple inputs
00:19:09 starting the core Value object of micrograd and its visualization
00:32:10 manual backpropagation example #1: simple expression
00:51:10 preview of a single optimization step
00:52:52 manual backpropagation example #2: a neuron
01:09:02 implementing the backward function for each operation
01:17:32 implementing the backward function for a whole expression graph
01:22:28 fixing a backprop bug when one node is used multiple times
01:27:05 breaking up a tanh, exercising with more operations
01:39:31 doing the same thing but in PyTorch: comparison
01:43:55 building out a neural net library (multi-layer perceptron) in micrograd
01:51:04 creating a tiny dataset, writing the loss function
01:57:56 collecting all of the parameters of the neural net
02:01:12 doing gradient descent optimization manually, training the network
02:14:03 summary of what we learned, how to go towards modern neural nets
02:16:46 walkthrough of the full code of micrograd on github
02:21:10 real stuff: diving into PyTorch, finding their backward pass for tanh
02:24:39 conclusion
02:25:20 outtakes :)
www.youtube.com
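
The heart of the video is the Value object: each node stores its data, a grad, and a small _backward closure, so calling backward() replays the chain rule over the expression graph in reverse topological order. A stripped-down sketch of that idea (micrograd's full class also supports subtraction, powers, tanh, and more):

```python
class Value:
    """A scalar node in an expression graph with reverse-mode autodiff."""
    def __init__(self, data, _children=()):
        self.data, self.grad = data, 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():                  # d(out)/d(self) = d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():                  # product rule
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(-3.0)
c = a * b + a            # a is used twice: grads accumulate via +=
c.backward()
print(a.grad, b.grad)    # dc/da = b + 1 = -2.0, dc/db = a = 2.0
```

The `+=` accumulation is exactly the fix the video makes at 01:22:28 for nodes used multiple times.
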
Let's build the GPT Tokenizer
The tokenizer is a necessary and pervasive component of large language models (LLMs): it translates between strings and tokens (text chunks). Tokenizers are a completely separate stage of the LLM pipeline, with their own training sets and training algorithm (Byte Pair Encoding); after training they implement two fundamental functions: encode(), from strings to tokens, and decode(), back from tokens to strings. In this lecture we build, from scratch, the tokenizer used in the GPT series from OpenAI. In the process, we see that many weird behaviors and problems of LLMs actually trace back to tokenization, discuss why tokenization is at fault, and why this stage would ideally be eliminated entirely.

Chapters:
00:00:00 intro: Tokenization, GPT-2 paper, tokenization-related issues
00:05:50 tokenization by example in a Web UI (tiktokenizer)
00:14:56 strings in Python, Unicode code points
00:18:15 Unicode byte encodings, ASCII, UTF-8, UTF-16, UTF-32
00:22:47 daydreaming: deleting tokenization
00:23:50 Byte Pair Encoding (BPE) algorithm walkthrough
00:27:02 starting the implementation
00:28:35 counting consecutive pairs, finding most common pair
00:30:36 merging the most common pair
00:34:58 training the tokenizer: adding the while loop, compression ratio
00:39:20 tokenizer/LLM diagram: it is a completely separate stage
00:42:47 decoding tokens to strings
00:48:21 encoding strings to tokens
00:57:36 regex patterns to force splits across categories
01:11:38 tiktoken library intro, differences between GPT-2/GPT-4 regex
01:14:59 GPT-2 encoder.py released by OpenAI walkthrough
01:18:26 special tokens, tiktoken handling of, GPT-2/GPT-4 differences
01:25:28 minbpe exercise time! write your own GPT-4 tokenizer
01:28:42 sentencepiece library intro, used to train Llama 2 vocabulary
01:43:27 how to set vocabulary set? revisiting gpt.py transformer
01:48:11 training new tokens, example of prompt compression
01:49:58 multimodal [image, video, audio] tokenization with vector quantization
01:51:41 revisiting and explaining the quirks of LLM tokenization
02:10:20 final recommendations
02:12:50 ??? :)

Exercises: try to implement the steps from https://github.com/karpathy/minbpe/blob/master/exercise.md before the video gives away the partial solutions; the full solutions are in the minbpe code.

Links:
- Google Colab for the video: https://colab.research.google.com/drive/1y0KnCFZvGVf_odSfcNAws6kcDD7HsI0L?usp=sharing
- GitHub repo for the video (minBPE): https://github.com/karpathy/minbpe
- Zero to Hero series playlist: https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ

Supplementary links:
- tiktokenizer: https://tiktokenizer.vercel.app
- tiktoken from OpenAI: https://github.com/openai/tiktoken
- sentencepiece from Google: https://github.com/google/sentencepiece
www.youtube.com
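
The BPE core the lecture implements (00:28:35 and 00:30:36) is two small functions: count consecutive token pairs, then merge the most frequent pair into a new token, repeated in a training loop. A minimal sketch under those steps; the function names are mine, and minbpe has the full, polished version:

```python
def get_pair_counts(ids):
    """Count occurrences of each consecutive token pair."""
    counts = {}
    for pair in zip(ids, ids[1:]):
        counts[pair] = counts.get(pair, 0) + 1
    return counts

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` with the single token `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

# Training loop: start from raw UTF-8 bytes, repeatedly merge the top pair.
ids = list("aaabdaaabac".encode("utf-8"))
merges = {}
for new_id in range(256, 259):      # three merges; new IDs start past byte range
    counts = get_pair_counts(ids)
    pair = max(counts, key=counts.get)
    ids = merge(ids, pair, new_id)
    merges[pair] = new_id
print(ids, merges)
```

The compression ratio the video tracks is just `len(original_bytes) / len(ids)` after merging.
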
Let's build GPT with memory: learn to code a custom LLM (Coding a Paper - Ep. 1)
You've used an LLM before, and you might've even fine-tuned one, but have you ever built one yourself? How do you start from scratch and turn a new research idea into a working model? This is the skill industry and academic researchers use to turn cutting-edge research ideas into production-quality code. That's what this series does: it implements a Google research paper. By the end of the course you'll have deep expertise in how a production-grade transformer model like GPT works end to end, plus the ability to implement new research papers, implement your own research ideas, and comfortably modify and experiment with existing implementations.

Topics covered along the way:
- A lot of PyTorch
- How to select and critically read a paper / research idea
- How to get up to speed in an unfamiliar area of research with tools and tricks
- A recipe for breaking down big research papers into achievable, bite-size chunks
- Tips and tricks the author wishes he had known
- Understanding and implementing recurrence
- Building GPT from scratch
- A survey of position embedding research
- Building a KNN vector database for memory
- How to put it all together into a working model

Colab notebook: https://colab.research.google.com/drive/1hFkwPw5bJd7fnYogdhGwIfWefUgTuRpN#

Chapters:
00:00 introduction and goals of the course
02:08 why this course?
03:14 prerequisites and what you should know beforehand
03:40 how to get the most value out of these videos
05:44 our high level plan for this course
06:29 tips for picking a paper
08:00 reference code implementations
08:56 tip for reading research papers
10:03 guide your reading with checklist questions
11:43 what's the main idea of memorizing transformers?
12:17 what's the motivation for memorizing transformers?
14:36 what's the proposed solution?
17:03 what's the main contribution of the paper?
19:58 how do they measure success of the model?
21:24 wait: how good is this model?
24:55 more follow up questions about the paper
26:45 let's read the paper!
37:28 summary of what we'll build
38:37 tips for building effectively
39:43 what's coming next!
www.youtube.com
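
The paper this series implements (Memorizing Transformers) extends GPT with a KNN lookup over an external memory of past key/value pairs. Here is a toy brute-force sketch of just that retrieval step; the KNNMemory class, sizes, and inner-product similarity are illustrative assumptions, and the paper's real setup uses approximate search at much larger scale:

```python
import numpy as np

class KNNMemory:
    """Store (key, value) vectors; fetch the k nearest keys for a query."""
    def __init__(self, dim):
        self.keys = np.empty((0, dim), dtype=np.float32)
        self.values = np.empty((0, dim), dtype=np.float32)

    def add(self, keys, values):
        # Append this segment's attention keys/values to the memory.
        self.keys = np.vstack([self.keys, keys])
        self.values = np.vstack([self.values, values])

    def lookup(self, query, k=3):
        # Inner-product similarity between the query and all stored keys.
        sims = self.keys @ query
        top = np.argsort(-sims)[:k]
        return self.keys[top], self.values[top]

dim = 8
rng = np.random.default_rng(0)
mem = KNNMemory(dim)
mem.add(rng.normal(size=(100, dim)).astype(np.float32),
        rng.normal(size=(100, dim)).astype(np.float32))
k_ret, v_ret = mem.lookup(rng.normal(size=dim).astype(np.float32), k=3)
print(k_ret.shape, v_ret.shape)   # (3, 8) (3, 8)
```

In the model, the retrieved keys/values feed a second attention pass that is gated against local attention, which is how the transformer "remembers" distant context.
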

Diffusion Models

Audio

Shazam Clone in Go - An example of signal processing

News

Papers/Books

Papers explained series:
Transformers usage:
LLM from Scratch_compressed.pdf (3.6 MB)
ML Notes.pdf (3.4 MB)



Python for Data Analysis" by Wes McKinney (3rd Edition, 2022)
Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" by Aurélien Géron (2nd Edition, 2019)
Introduction to Machine Learning with Python" by Andreas C. Müller and Sarah Guido
"Pattern Recognition and Machine Learning" by Christopher M. Bishop (2006) - A classic, but dense.
"The Elements of Statistical Learning" by Trevor Hastie, Robert Tibshirani, and Jerome Friedman (2nd Edition, 2009)

[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
Word vectors have been among the most influential techniques in modern NLP. This paper describes Word2Vec, one of the most popular techniques for obtaining word vectors. The paper introduces the negative sampling technique as an approximation to noise contrastive estimation and shows that it allows training word vectors from giant corpora on a single machine in a very short time.

Outline:
0:00 - Intro & Outline
1:50 - Distributed Word Representations
5:40 - Skip-Gram Model
12:00 - Hierarchical Softmax
14:55 - Negative Sampling
22:30 - Mysterious 3/4 Power
25:50 - Frequent Words Subsampling
28:15 - Empirical Results
29:45 - Conclusion & Comments

Paper: https://arxiv.org/abs/1310.4546
Code: https://code.google.com/archive/p/word2vec/
Authors: Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean

Abstract: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
www.youtube.com
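
The skip-gram-with-negative-sampling objective discussed in the video (14:55) maximizes log σ(v'_context · v_center) for observed pairs while pushing down K sampled negatives. A minimal PyTorch sketch of that loss; the toy vocabulary, random batches, and uniform negative sampling are assumptions (the paper draws negatives proportional to unigram frequency raised to the 3/4 power, the "mysterious 3/4" at 22:30):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim = 1000, 50
in_embed = nn.Embedding(vocab_size, dim)    # v_w: center-word vectors
out_embed = nn.Embedding(vocab_size, dim)   # v'_w: context-word vectors

def sgns_loss(center, context, negatives):
    """-[log sigmoid(pos score) + sum log sigmoid(-neg scores)], averaged."""
    v = in_embed(center)                                        # (B, dim)
    pos = (out_embed(context) * v).sum(-1)                      # (B,)
    neg = torch.bmm(out_embed(negatives), v.unsqueeze(-1)).squeeze(-1)  # (B, K)
    return -(F.logsigmoid(pos) + F.logsigmoid(-neg).sum(-1)).mean()

B, K = 32, 5
center = torch.randint(0, vocab_size, (B,))
context = torch.randint(0, vocab_size, (B,))
negatives = torch.randint(0, vocab_size, (B, K))  # real code: sample ~ freq^(3/4)
loss = sgns_loss(center, context, negatives)
loss.backward()
print(loss.item())
```
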

Projects

Practice



Data Engineering



Dev Ops


Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]
A full Kubernetes tutorial with many hands-on demos, by TechWorld with Nana.

Timestamps:
0:00 - Course Overview
2:18 - What is K8s
5:20 - Main K8s Components
22:29 - K8s Architecture
34:47 - Minikube and kubectl - Local Setup
44:52 - Main kubectl Commands - K8s CLI
1:02:03 - K8s YAML Configuration File
1:16:16 - Demo Project: MongoDB and Mongo Express
1:46:16 - Organizing your components with K8s Namespaces
2:01:52 - K8s Ingress explained
2:24:17 - Helm - Package Manager
2:38:07 - Persisting Data in K8s with Volumes
2:58:38 - Deploying Stateful Apps with StatefulSet
3:13:43 - K8s Services explained

Course overview:
- What is Kubernetes: the problems it solves and the features container orchestration tools offer.
- Main K8s components: Node & Pod, Service & Ingress, ConfigMap & Secret, Volumes, Deployment & StatefulSet.
- K8s architecture: worker and master nodes, API server, scheduler, controller manager, and etcd (the cluster brain).
- Minikube and kubectl local setup: what they are, installing them, and creating and starting a minikube cluster. Install Minikube (Mac, Linux, Windows): https://bit.ly/38bLcJy | Install kubectl: https://bit.ly/32bSI2Z | All commands listed (for Mac, follow along): https://bit.ly/3oZzuHY
- Main kubectl commands: getting component status, creating and changing pods/deployments, layers of abstraction, debugging pods, deleting pods/deployments, and CRUD by applying configuration files. Command repo: https://bit.ly/3oZzuHY
- K8s YAML configuration files: the 3 parts of a config file (metadata, specification, status), file format, the blueprint for pods (template), connecting services to deployments and pods (label, selector, port), and a demo. Repo: https://bit.ly/2JBVyIk
- Demo project: deploying MongoDB and Mongo Express, with a MongoDB pod, Secret, internal service, deployment, ConfigMap, and external service. Repo: https://bit.ly/3jY6lJp
- Namespaces: what a namespace is, the 4 default namespaces, creating one, 4 use cases for them, their characteristics, creating components in namespaces, and changing the active namespace. Install Kubectx: https://github.com/ahmetb/kubectx#installation
- Ingress: external service vs. Ingress, example YAML configs for both, internal service configuration, configuring Ingress in your cluster, Ingress controllers, the environment your cluster runs on (cloud provider or bare metal), a minikube demo, default backend, routing use cases, and TLS certificates. Repo: https://bit.ly/3mJHVFc | Ingress controllers: https://bit.ly/32dfHe3 | Bare metal: https://bit.ly/3kYdmLB
- Helm: package manager and Helm charts, templating engine, use cases, chart structure, values injection into template files, and release management/Tiller (Helm version 2!). Helm hub: https://hub.helm.sh/ | Charts project: https://github.com/helm/charts | Install Helm: https://helm.sh/docs/intro/install/
- Volumes: the need for persistent storage and storage requirements, Persistent Volumes (PV), local vs. remote volume types, who creates the PV and when, Persistent Volume Claims (PVC), levels of volume abstraction, ConfigMap and Secret as volume types, and Storage Classes (SC). Repo: https://bit.ly/2Gv3eLi
- StatefulSets: stateless vs. stateful applications, Deployment vs. StatefulSet, pod identity, scaling database applications with master and worker pods, pod state and identifiers, and the 2 pod endpoints.
- Services: what a Service is and when you need one; ClusterIP, multi-port, headless, NodePort, and LoadBalancer services; and service communication.
www.youtube.com
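
The YAML configuration section (1:02:03) explains that every K8s object carries metadata and a specification. As an illustrative sketch of that same structure, here is a Deployment expressed with the official Kubernetes Python client (pip install kubernetes); the names, image, and replica count are made up, and the course itself works with YAML manifests and kubectl rather than this client:

```python
from kubernetes import client, config

config.load_kube_config()  # connect using your local kubeconfig (e.g. minikube)

# metadata + spec, mirroring the structure of a YAML Deployment manifest;
# the label/selector pairing is what connects the Deployment to its pods.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="nginx-deployment", labels={"app": "nginx"}),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "nginx"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "nginx"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="nginx",
                    image="nginx:1.25",
                    ports=[client.V1ContainerPort(container_port=80)],
                )
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The third part the course mentions, status, is not written by you: Kubernetes adds and continuously updates it as the cluster reconciles actual state with your spec.
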

Systems Design




Computer Systems


Algorithms


Cloud

Self Hosting

Math


25 Math explainers you may enjoy | SoME3 results
Results of the third Summer of Math Exposition (SoME3): 25 community-made math explainers.

Playlist of all entries: https://www.youtube.com/playlist?list=PLnQX-jgAF5pQS2GUFCsatSyZkSH7e8UM8
All non-video entries: https://some.3b1b.co/non-videos

Contents:
0:00 - The event
1:34 - Pixel Art Anti-aliasing
2:26 - The Enola Gay
3:40 - Pitch shifter
4:14 - Cayley Graphs
4:51 - Longest Increasing Subsequence
5:49 - Matrix Arcade
6:37 - Watching Neural Networks Learn
7:18 - Functions are vectors
7:38 - The art of linear programming
8:13 - Backburner problems
9:24 - Affording a planet
9:56 - When can't math be generalized
10:49 - Rotation + Translation = Rotation
11:33 - Rethinking the real line
12:16 - Egyptian volumes
13:05 - A circular motion quirk
13:40 - Minimal surfaces
14:47 - Computing logs
15:19 - Mediants
16:17 - The shadow game
16:43 - Chasing Fixed Points
17:24 - Representing numbers
18:11 - Mirror ball
18:34 - String art
19:36 - Infinity
20:52 - Thanks

Animations made with manim: https://github.com/3b1b/manim | https://github.com/ManimCommunity/manim/ | Code for specific videos: https://github.com/3b1b/videos/
www.youtube.com


Finance/Investing

Random


