bootcamp_machine-learning
Ex02 Day01 convergence is slower than in correction
- Day: 01
- Exercise: 01
The correction demands that the fit function converge in 200 iterations with lr = 1e-5, but it seems to converge much more slowly.
```python
import numpy as np

def add_bias_units(x):
    bias = np.ones((x.shape[0], 1))
    return np.concatenate((bias, x), axis=1)

def gradient(x, y, thetas):
    x = add_bias_units(x)
    error = np.matmul(x, thetas) - y
    gradients = np.matmul(x.T, error) / len(y)
    return gradients

def fit(x, y, thetas, alpha, max_iter):
    for _ in range(max_iter):
        thetas = thetas - alpha * gradient(x, y, thetas)
    return thetas
```
```python
import numpy as np
from fit import *

x = np.array(range(1, 101)).reshape(-1, 1)
y = 0.75 * x + 5
theta = np.array([[1.], [1.]])
print(fit(x, y, theta, 1e-5, 2000))  # LINE FROM THE SCALE
# [[1.01682288]
#  [0.80945473]]
print(fit(x, y, theta, 1e-4, 200000))  # WHAT WORKS FOR ME
# [[4.97090918]
#  [0.75043422]]
```
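For what it's worth, the slow convergence on the raw data is expected: x spans 1–100, so the loss surface is badly conditioned and only a tiny learning rate is stable. A minimal sketch (reusing the same fit logic as above, here inlined so it runs standalone) shows that standardizing x lets gradient descent converge in far fewer iterations; the normalization step is my addition, not part of the exercise:

```python
import numpy as np

def add_bias_units(x):
    bias = np.ones((x.shape[0], 1))
    return np.concatenate((bias, x), axis=1)

def gradient(x, y, thetas):
    xb = add_bias_units(x)
    error = np.matmul(xb, thetas) - y
    return np.matmul(xb.T, error) / len(y)

def fit(x, y, thetas, alpha, max_iter):
    for _ in range(max_iter):
        thetas = thetas - alpha * gradient(x, y, thetas)
    return thetas

x = np.arange(1, 101).reshape(-1, 1)
y = 0.75 * x + 5

# Standardize the feature: on mean-0 / std-1 data the Hessian is close to
# the identity, so a large learning rate (0.1) converges in ~1000 steps.
mu, sigma = x.mean(), x.std()
x_scaled = (x - mu) / sigma
theta_scaled = fit(x_scaled, y, np.array([[1.], [1.]]), 0.1, 1000)

# Map the scaled-space parameters back to the original feature space.
slope = theta_scaled[1, 0] / sigma
intercept = theta_scaled[0, 0] - slope * mu
print(intercept, slope)  # ≈ 5.0, 0.75
```

This doesn't change the bug report; it just suggests why the scale's stated 200 iterations at lr = 1e-5 can't work on unscaled data.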
Also, the correction says it should converge to theta = [4.0, 0.75], but with y = 0.75 * x + 5, theta should converge to [5.0, 0.75].
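The expected optimum can be checked independently of gradient descent with the closed-form least-squares solution (normal equation); this is a sanity check I added, not code from the exercise:

```python
import numpy as np

# Exact least-squares solution on the same data as the fit calls above.
# Since y = 0.75 * x + 5 holds exactly, the optimum must be [5.0, 0.75].
x = np.arange(1, 101).reshape(-1, 1)
y = 0.75 * x + 5
X = np.hstack((np.ones((x.shape[0], 1)), x))  # prepend the bias column
theta_star = np.linalg.lstsq(X, y, rcond=None)[0]
print(theta_star.ravel())  # ≈ [5.   0.75]
```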
Fixed on:
- [ ] Github
- [ ] Gitlab
At @ezalos's request:
```python
import numpy as np
from fit import *

def main():
    x = np.array(range(1, 101)).reshape(-1, 1)
    y = 0.75 * x + 5
    theta = np.array([[1.], [1.]])
    # print(fit_(x, y, theta, 1e-5, 2000))  # LINE FROM THE SCALE
    # [[1.01682288]
    #  [0.80945473]]
    print(fit_(x, y, theta, 5e-4, 20000))  # WHAT WORKS FOR ME
    # [[4.65879969]
    #  [0.75509291]]

if __name__ == "__main__":
    main()
```