
💡 AI

videos list

biz
The Business Model Making 20-Year-Olds RICH - important

growth content
How To Grow 10k Followers on Instagram FAST (Full Strategy) [2-LyHeqr_7M]
How To Use ChatGPT To Make 1 Month Of Content In 30 Mins [hwmOSm5Lapw]
I BLEW UP An Instagram Account As Fast As I Could [_TMySdL68QE]
I Got Exposed to the Marketing Secrets of Content Creators Who Make Millions (a practical, no-bullshit guide) [57exYvlPoiA].f251

website + landing pages


4 Proven Steps to Build a MILLION DOLLAR Landing Page [_xjg5mOf-m8].f251
9 Landing Page Hacks To Get More Leads INSTANTLY [mGbLhEY7nPE].f251
Build a High-Converting Landing Page With this Proven Structure [zr2txGhYt4U].f137
Copy This Free High Converting Landing Page Template To Triple Your Leads [WirlXAiLZUQ].f251
Get More Leads in 2025 With THIS Landing Page Formula [618YBkSlXt4].f248

How to ACTUALLY Sell Digital Products in 2025 [yjL3ft3T4bo].f251.webm

general
Transform Your Bedroom Into A YouTube Studio ($5 - $250) [jXoPcFsb1ro]

vibe coding
I'm Addicted to Windsurf, Send Help [ukhe1013Jpk]
Anthropic’s Claude Computer Use Is A Game Changer | YC Decoded [VDmU0jjklBo]

developing with themes

how to learn

https://www.fast.ai/posts/2019-01-02-one-year-of-deep-learning.html

always ask: how else can I use it? you must strengthen your creativity

always connect it to other parts

https://forums.fast.ai/t/how-do-this-course-connect-to-the-current-era-of-llms/112318

how to learn the courses


https://medium.com/@mr.acle/from-novice-to-researcher-two-years-of-deep-learning-with-fast-ai-4120fc73f9c6

focus on:

projects

i love basketball - so I can train a model to identify a basketball vs. a football

i love startups - so I can give it...

i love Buddhism - so I will try to have it identify a monk vs. a non-monk, or an awakened person vs. a non-awakened person

tips

https://forums.fast.ai/t/how-to-do-fastai-study-plans-learning-strategies/39473

the idea is to make projects

https://medium.com/@init_27/how-not-to-do-fast-ai-or-any-ml-mooc-3d34a7e0ab8c

shadcn

shadcn generates the components as actual code in your project

change them there - then you don't need to override anything

vibe coding articles

important on keys - https://dev.to/vorniches/vibe-coding-yeah-ive-been-doing-it-for-two-years-ea2

important - project
make efficient learning easy again → take the research and apply it in an
app, so you won't need to do it manually - you will just do it naturally, by using
the app.

course 1
homework

101 on jupyter notebook

read chapter 1

change the practice - more categories, different data, etc. - create something of your own

maybe build a recognizer of a soldier vs. a non-soldier

or, recognize an enlightened person vs. a non-enlightened one

a successful CEO vs. a non-successful CEO

system

people learning in context

this is the best way to learn

neural networks

build pictures for us

neural net - you feed it examples, and it learns to recognize things

no need to hand-code anything - everything can be learned.

image recognition

Transfer learning
PyTorch

Jupyter notebook
a powerful way to build and explore

Datablock

blocks - input / output

get_items = what the items are

splitter = validation set

get_y = the label

item_tfms - the inputs need to all be the same size

data loader → grabs a bunch of the items at the same time, using the GPU, which can do 1000’s of things at the same time.
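
A minimal sketch of how these pieces fit together in fastai (the folder name matches the monk example later in these notes; the image size, split, and batch size are arbitrary choices):

from fastai.vision.all import *

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),              # input type / output type
    get_items=get_image_files,                       # what the items are
    splitter=RandomSplitter(valid_pct=0.2, seed=42), # hold out 20% as a validation set
    get_y=parent_label,                              # label = name of the parent folder
    item_tfms=Resize(192, method='squish'),          # make every input the same size
)
dls = dblock.dataloaders(Path('monk_or_not'), bs=32) # batches the GPU can process together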

model

the learner needs the data and the model

model - the neural network function you want to pass in

model → we start with a network that already does a lot

fine tune - adjust the weights to teach the model only the differences between your dataset and what it was originally trained for (start with a pretrained model, then fine-tune it)

deploy - deploy the model on something

segmentation -

tabular data - we usually don't fine tune

deep learning

if it's something a human can do very quickly - even an expert - deep learning will be good at it

if it is something that takes time and a lot of logical thought, maybe not

what happened

normal program - inputs, coding a program, and a result

a machine learning model looks like this:

weights = parameters

the model is not a bunch of conditionals / loops and things

it's a mathematical function

weights

after we get the results - we think about how good they are - how good are the results

so it's a feedback loop to improve the weights

so if we use a neural network as a model and create the steps of that feedback loop, we are good!

after the training:

we no longer need the loss; we integrate the weights into the model, and then we get back to the original use of a program

getting inputs and producing outputs

a trained model just maps inputs to results

homework:

1. first gather some data - for example, images

course second iteration

Images:
a bunch of pixels

and it's just numbers!
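
A tiny sketch of the "it's just numbers" idea (assumes the monk.jpg file downloaded in the code later in these notes):

from PIL import Image
import numpy as np

im = Image.open("monk.jpg")
arr = np.array(im)        # the image as an array of pixel values
print(arr.shape)          # e.g. (height, width, 3) for an RGB image
print(arr[:4, :4, 0])     # top-left corner of one channel - just numbers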

Data block:
Everything fastai needs to create a computer vision model.

Creating the model:


running the model - it goes through all the data and learns about it

A bird recognizer - starting at 23:00


neural networks build this for us - we don't give them features, we ask them to learn features

so, a neural network looks at the weights and draws pictures of them: each weight finds something, then they picture it.

so they look at the photos and try to learn the features

deep learning - we take the features and combine them to create more advanced features - this is the idea

each feature finds something in the image

you start with a random neural network, feed it examples, and have it learn to recognize things, and then it creates images for itself

then you combine the features, and it creates a more detailed feature DETECTOR - it tries to recognize patterns, so it creates a feature detector: how can I detect this feature?

so the feature detector looks for those features in an image to classify it

the deeper you get → the more sophisticated the features

no need to hand-code the features we look for - they can all be learned

Tips
as you build your model, view your data at every step

then bring your data

Data block
all the things that turn your data into the right shape for that model

inputs - blocks - input and output: what kind of model

get_items = the items you are looking at to train from

splitter - the validation set: set aside some data, a random 20% of it, for example

get_y = the correct label of the item, for example.

item_tfms = transform each item to the same size

using squish, not cropping

dataloaders(path) → PyTorch iterates through it to grab a bunch of your data at a time - it will feed the training algorithm with a bunch of your images at once (a batch)

dls - DataLoaders contain iterators that PyTorch can run through, to grab batches of the randomly split-out training images to train the model with, and validation images to test the model with
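
A quick way to "view your data at every step", assuming the dls built with a DataBlock like the earlier sketch:

dls.show_batch(max_n=9)     # a grid of images with their labels
xb, yb = dls.one_batch()    # one batch as tensors
print(xb.shape, yb.shape)   # e.g. torch.Size([32, 3, 192, 192]) and torch.Size([32])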

Learner:
combines a model - the actual neural network function we train, and the
data we train with

we pass the data and the model

model - the actual neural network function you want to pass in - and there is a small number of those for the vast majority of things you do

learn.predict - try to predict the image

Fit vs fine tune


fit: when there is no pretrained model that fits your use case
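
A minimal Learner sketch, assuming the dls from the DataBlock sketch earlier (resnet18 and the 3 epochs are arbitrary choices, and the file name passed to predict is hypothetical); fine_tune is used here because we start from a pretrained network:

learn = vision_learner(dls, resnet18, metrics=error_rate)   # pretrained model + our data
learn.fine_tune(3)                                          # adjust the pretrained weights to our dataset

pred, pred_idx, probs = learn.predict(PILImage.create("monk.jpg"))
print(pred, probs[pred_idx])                                # predicted label and its probability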

GPU:
can do thousands of things at the same time - which means it needs thousands of things to do

pretrained
the weights, the parameters, are available on the internet for anybody to download

so when you use a pretrained model, you basically download the parameters!

fine tune

take the pretrained parameters and adjust them to your own data - teach the model the differences between your dataset and what it was originally trained for

Steps:
1. gather data

2. get a model

3. fine-tune it - so it recognizes your own dataset specifically

valid_loss: on average, how far off we are on the validation set

Summary

we have inputs and parameters (weights)

the model is not conditionals and loops and stuff - it's a mathematical function

it takes the inputs, multiplies them by the weights, and adds them up, then does the same for the second set of weights, and adds them up, etc.

then it takes those outputs into the next layer and does the same thing

and it does this a number of times - and the model is the neural network

the parameters / weights are the important thing

after we do that, after we pass them into the model, we take the results and decide how good they are → so if we pass in a picture of a bird, the model predicts it, so we get a result; then we can look at its label and see whether the prediction is wrong or right

then we can calculate the loss - a number for how good the results are

for example - if you look at 100 photos, what % of them are right? (see the sketch below)

then we update the weights to better weights than in the previous epoch

so the loss should be a bit better

this is the update - then we make it a little bit better, enough times, until it's good
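
A toy sketch of the "out of 100 photos, what % is right?" idea (the predictions and labels here are made up):

import torch

preds  = torch.tensor([1, 0, 1, 1, 0])          # predicted classes
labels = torch.tensor([1, 0, 0, 1, 0])          # correct labels
accuracy = (preds == labels).float().mean()
print(accuracy.item(), (1 - accuracy).item())   # 0.8 right, 0.2 error rate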

neural network:

very flexible, can approximate any computable function

so we should use them as our model

once we find the correct weights, we're done; we integrate them into the model, and we have our result

so you will have just one more line in the code injecting the model

Question
is a layer basically another epoch?

no

code
from fastcore.all import L                # fastai's list class
from duckduckgo_search import DDGS        # image search used to collect URLs
from fastdownload import download_url
from fastai.vision.all import *
import time

def search_images(keywords, max_images=200):
    return L(DDGS().images(keywords, max_results=max_images)).itemgot('image')

urls = search_images("buddhist monk photo", max_images=1)
urls[0]

dest = "monk.jpg"
download_url(urls[0], dest, show_progress=False)
im = Image.open(dest)
im.to_thumb(256,256)

download_url(search_images("regular person photo", max_images=1)[0], "regularPerson.jpg", show_progress=False)
Image.open("regularPerson.jpg").to_thumb(256,256)

searches = "buddhist monk", "regular person"
path = Path("monk_or_not")
for o in searches:
    dest = (path/o)
    dest.mkdir(exist_ok=True, parents=True)
    download_images(dest, urls=search_images(f'{o} photo'))
    time.sleep(5)
    resize_images(path/o, max_size=400, dest=path/o)

failed = verify_images(get_image_files(path))
failed.map(Path.unlink)
len(failed)

Lecture 3
learning tips

first watch the lecture

then run the lesson notebook

then rewatch the lesson, taking notes

then reproduce the results -

use the clean folder - it contains only the code - and before running each cell, ask:

”what is it for?” “what is the output?”

repeat the whole thing with a different dataset

important resources
https://forums.fast.ai/t/lesson-3-notebooks/104821

https://forums.fast.ai/t/lesson-3-official-topic/96254

setting up fastai on a Mac

https://forums.fast.ai/t/best-practices-for-setting-up-fastai-on-mackbook-pro-with-m1-max-chip/98961/4

https://forums.fast.ai/t/best-practices-for-setting-up-fastai-on-mackbook-pro-with-m1-max-chip/98961/3

lecture 3

main concepts
training piece → then you get the model

feed the model inputs and it spits out outputs based on the model

error rate - accuracy

Questions
what do the “inference” seconds mean when training a model?

Model.pkl
the result of fitting a function to the data, saved to a file
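
A hedged sketch of what the .pkl is: the fitted Learner saved to disk and loaded back for inference (assumes a trained learn like the earlier sketch; the file names are illustrative):

learn.export('model.pkl')                          # save the trained Learner (model + transforms)

from fastai.vision.all import load_learner
learn_inf = load_learner('model.pkl')              # load the fitted function back
pred, idx, probs = learn_inf.predict('monk.jpg')   # inference: new input → output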

Derivative
if you increase the input, does the output increase or decrease - basically, the slope of the function

Tensor

works with lists - with anything, basically.
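
A small sketch of building tensors from plain Python lists:

import torch

t1 = torch.tensor([1, 2, 3])            # 1-d tensor (a vector)
t2 = torch.tensor([[1, 2], [3, 4]])     # 2-d tensor (a matrix)
print(t1.shape, t2.shape)               # torch.Size([3]) torch.Size([2, 2])
print(t2 * 2 + 1)                       # elementwise math - the kind of thing a GPU does fast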

Optimization
gradient descent - calculate the gradient (of the parameters), and decrease the loss

This is deep learning - deep learning is a metaphor for life → do one iteration, and improve over time.

just do this

then optimize

values adding together

gradient descent to optimize the parameters

and samples of the inputs and outputs you want

the computer draws the owl

using gradient descent to set some parameters, to make wiggly functions, which are additions of vectors?

model choosing is the last thing
once you have gathered, cleaned, and augmented the data

you can reason about a model yourself - it depends on the task

do you need it to be the most accurate? the fastest? etc.

train the model first!

fit a function to the data

we start with a flexible function (a neural network), and we get it to do a particular thing - recognize the pattern in our data

so the idea is to find a function that fits our data - and a neural network is just a function

loss function - a measure of how good the function is

for each parameter - if we move it up, does it make things better or not?

Derivative - if you increase the input

At first, say the slope is very high, so the derivative will be large

As the slope decreases, the derivative approaches zero

So the derivative basically tells us, for each value, whether the slope at that value is high or not

slope === gradient

how rapidly the function changes at value 3 - that is, at this value of the parameter, is the function changing fast?

Our goal is to make the loss small - we want to make the loss smaller

if we know how our function will change -

we have a parameter, and the derivative tells us how rapidly the function changes with this parameter

our goal is to make the loss smaller

so, we will change the parameter a bit

and see if it makes our loss function output better

the magnitude tells us that at this point the slope is fairly steep, so the loss changes significantly when we adjust w - each time we adjust w, the output changes significantly

so let's subtract some value * the slope, and see what happens

why use the slope?

because the slope lets us understand how big a step we can take:

with a big slope, the function changes very rapidly, so we might take a smaller step

with a gentler slope, we should take a bigger step, because the change from each adjustment is so small that we can afford a bigger step
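
A minimal gradient-descent sketch on a toy quadratic loss, using the "step by some value times the slope" idea (the loss function, starting value, and learning rate are all made up):

import torch

w = torch.tensor(6.0, requires_grad=True)     # the parameter we want to tune

def loss_fn(w):
    return (w - 2.0) ** 2                     # toy quadratic loss, smallest at w = 2

lr = 0.1                                      # learning rate: how big a step to take
for step in range(20):
    loss = loss_fn(w)
    loss.backward()                           # compute the slope (gradient) of the loss w.r.t. w
    with torch.no_grad():
        w -= lr * w.grad                      # step against the slope to reduce the loss
        w.grad.zero_()
print(w.item())                               # close to 2.0 - the loss got smaller each step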

The book
chapter 1

Error rate

What is machine learning

The essence of machine learning

weights are variables - and weight assignments are a particular choice of values for those variables

inputs - what we need to produce the output

weight assignments - another type of input, that defines how the program operates

A model:

a model can do many things, depending on the weights (model parameters)

training a machine learning model:

at the end of the day, once we choose the best weight assignment - we get a “program”

The key distinction - think about what problem you are trying to solve…
training a model basically means finding good parameters

and with current modeling:

So to recap:
we need inputs

we need the weights / parameters, which are the weights of the neural net

then the neural net is a mathematical function, so the result is what we get from this function

then, we need to test the effectiveness of the current weight assignment - this basically means understanding the performance of the current weight assignment, so we need to define what good vs. bad performance of the model looks like

Neural network

How an image recognizer works

Classification vs regression
regression model - what will the temperature be tomorrow

validation set

Overfitting model

ERROR RATE

SGD

Pretrained model

Error rate
the idea - we have a function that just tells us the quality of the model's predictions - by looking at the validation set and telling you how wrong you were

Weights

Converting a dataset to an image

Jargon recap

Summary
loss function

Loss function

Epoch
running through our entire dataset

Tabular data

Training models from scratch

Progress of training a neural network

installation

Jupyter notebook

The file is stored in your Kaggle notebook's temporary file system, which
means:

The file remains available during your notebook session

You can access it in other cells of your notebook

The file will be deleted when the notebook session ends unless you
save it to a permanent location

So, in summary: you're downloading an image of a Buddhist monk from the URL in urls[0], saving it as "monk.jpg" in your notebook's working directory, and then displaying a thumbnail version of it in your notebook.

Chapter 4
tips on learning

chapter 4

showing what's inside the path - the actual folder

Everything in a computer is a number

so the idea - let's just look at a specific section of the image.

everything is a number, so we view the number representation of a section of the image

so we look at a section starting 4 pixels from the top and left - we view it as a numpy array, which is a numbered representation of the image
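
A sketch of viewing a slice of an image as numbers, using the book's MNIST_SAMPLE data (which image file gets picked is arbitrary):

from fastai.vision.all import *
import numpy as np

path = untar_data(URLs.MNIST_SAMPLE)
im3 = Image.open((path/'train'/'3').ls()[0])   # one image of a handwritten '3'
arr = np.array(im3)                            # the numbered representation of the image
print(arr.shape)                               # (28, 28) - every pixel is a number 0-255
print(arr[4:10, 4:10])                         # a section starting 4 pixels from the top and left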

gradient

pixels

so the image has a total of 784 pixels (28 × 28)

Tensor

dimensions

a stack - think of it as “flat” - meaning each matrix is “flat”, and they stack on top of each other
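
A sketch of stacking those "flat" matrices into one 3-dimensional tensor (continues the MNIST sketch above, same imports and path):

three_tensors = [tensor(Image.open(p)) for p in (path/'train'/'3').ls()]  # list of 28x28 matrices
stacked_threes = torch.stack(three_tensors).float() / 255                 # one tensor: [num_images, 28, 28]
print(stacked_threes.ndim, stacked_threes.shape)                          # 3 dimensions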

Metric

Overfit

Stochastic gradient descent
testing the effectiveness of any current weight assignment in terms of performance, and providing a mechanism for altering the weight assignment to improve performance - and doing that automatically, so a machine could learn from its experience

weights and pixels

Derivative
Let's assume I have a loss function which depends upon a parameter

so our loss function is the quadratic function, yes?

then, because it's our loss function, we try some arbitrary input to it and see what the result is - what the loss value is

now, we would like to adjust the parameter by a little bit and see what happens

so it's the same as the slope - think of it like you change it,
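
A small sketch of "adjust the parameter by a little bit and see what happens" next to the exact slope from autograd (the quadratic loss and the starting value are made up):

import torch

def loss_fn(w):
    return (w - 2.0) ** 2                 # the quadratic loss from the notes

w, eps = 5.0, 0.01
approx = (loss_fn(torch.tensor(w + eps)) - loss_fn(torch.tensor(w))) / eps
print(approx.item())                      # about 6.01 - nudge the parameter, watch the loss

wt = torch.tensor(w, requires_grad=True)
loss_fn(wt).backward()                    # autograd gives the exact slope
print(wt.grad.item())                     # 6.0, since d/dw of (w-2)^2 is 2*(w-2)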

Our options in the course
course summaries

Jupyter notebook

