
ML00 ex00: examples' precision

Open mli42 opened this issue 3 years ago • 0 comments

  • Day: 00
  • Exercise: 00

In the examples there is a multiplication between a matrix containing floats and a vector containing ints. The output is written as if it were an int, but it should be a float.

Examples

m1 = Matrix([[0.0, 1.0, 2.0],
			 [0.0, 2.0, 4.0]])
v1 = Vector([[1], [2], [3]])
m1 * v1
# Output:
Matrix([[8], [16]]) # << should be float
# Or: Vector([[8], [16]]) # << should be float

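For reference, this is why the result should be a float: in Python, multiplying a float by an int promotes the result to float, so every dot product of a float row with the int column vector is a float. A minimal sketch with plain Python lists (an assumption about how the Matrix/Vector classes store their data internally):

rows = [[0.0, 1.0, 2.0],
        [0.0, 2.0, 4.0]]   # float entries, as in m1
column = [1, 2, 3]         # int entries, as in v1

# float * int promotes to float, so each row's dot product is a float
result = [sum(r * c for r, c in zip(row, column)) for row in rows]
print(result)  # [8.0, 16.0] -- floats, not [8, 16]
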
Also, sometimes, the output of a matrix is written with inconsistent precision for the same value (0. vs 0.0):

Matrix([[0., 2., 4.], [1., 3., 5.]])
Matrix([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])

In my opinion, it should be 0.0 everywhere.
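
Note that 0. and 0.0 parse to the same float value, and Python's repr always renders it as 0.0, so writing 0.0 everywhere also matches what an implementation whose __repr__ relies on repr(float) would actually print. A quick check:

print(0. == 0.0)  # True -- same float value
print(repr(0.))   # '0.0'
print(repr(0.0))  # '0.0'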

Fixed on:

  • [ ] Github
  • [ ] Gitlab

mli42 · Aug 05 '22 14:08