filter = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
This filter emphasizes the horizontal lines and edges in the image.
filter = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
This filter emphasizes the vertical lines in the image.
filter = [[-1, 2, 1], [-2, 1, 2], [-1, 2, 1]]
This filter emphasizes the shadows in the image, like underneath the stairs and the walkway.
Convolving an image is useful for computer vision because it lets the user choose which aspects of an image matter most and have the computer emphasize those features. Then, when we run a model, the prominent features that the filters bring out in the picture arrays are easier for the model to identify.
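As a rough sketch of how the filtering itself could be written by hand (assuming the image is loaded as a 2D NumPy array of grayscale pixel values; the function and variable names here are illustrative, not the exact code from the exercise):

import numpy as np

def convolve(image, filter, weight=1):
    # Apply a 3x3 filter to every interior pixel of a 2D grayscale array.
    rows, cols = image.shape
    output = np.copy(image)
    # Skip the border pixels, since a 3x3 filter has no complete
    # neighborhood there (this is why the output effectively shrinks).
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            total = 0.0
            for i in range(3):
                for j in range(3):
                    total += image[x + i - 1, y + j - 1] * filter[i][j]
            total *= weight
            # Clip to the valid pixel range.
            output[x, y] = min(max(total, 0), 255)
    return output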
I ran the convolution on the flower picture with the same three filters used on the MNIST stairs picture.
This filter seems to emphasize just the outline of the flower. There is less contrast in this image between the background and the flower (compared to the strong contrast in the MNIST stairs picture), which means the filter stripped away more of the detail.
This filter does a better job of emphasizing the individual petals of the flower, but it also picks up on the sticks in the background of the picture, so it might not be as useful for identifying petals. It is the only filter to highlight the center of the flower, though.
This filter emphasizes the sticks in the background of the picture, which aligns with the shadows it identified in the MNIST stairs picture.
Pooling condenses the image, making it smaller and easier for a computer to process while still preserving its most important features. Usually in pooling, or at least as explained in the ML video, the maximum value in each block is kept, but the for loops that complete the pooling here seem to select every other value in the array, which is closer to strided subsampling than to true max pooling.
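For comparison, a minimal sketch of 2x2 max pooling (again assuming a 2D NumPy array such as the convolved output above; the names are illustrative):

import numpy as np

def max_pool_2x2(image):
    rows, cols = image.shape
    pooled = np.zeros((rows // 2, cols // 2))
    # Step through the image two pixels at a time and keep the largest
    # value in each 2x2 block, halving both the width and the height.
    for x in range(0, rows - 1, 2):
        for y in range(0, cols - 1, 2):
            pooled[x // 2, y // 2] = np.max(image[x:x + 2, y:y + 2])
    return pooled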
The resulting matrix is a 7x7 grid because the edge values cannot be filtered: a 3x3 filter has no complete neighborhood at the border pixels, so the output loses one pixel on each side.
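To illustrate the arithmetic of that border effect (using a hypothetical 9x9 input just to show the shape change; the actual image size may differ):

import numpy as np
from scipy.signal import convolve2d

image = np.arange(81).reshape(9, 9)            # hypothetical 9x9 input
kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
result = convolve2d(image, kernel, mode="valid")  # "valid" drops the 1-pixel border
print(result.shape)                             # (7, 7)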