Expressivity of neural networks. Recall that the functional form for a single neuron is given by $y = \sigma(\langle w, x \rangle + b)$, where $x$ is the input and $y$ is the output. In this exercise, assume that $x$ and $y$ are 1-dimensional (i.e., they are both just real-valued scalars, so $\langle w, x \rangle = wx$) and $\sigma$ is the unit step activation: $\sigma(z) = 1$ if $z \geq 0$ and $\sigma(z) = 0$ otherwise. We will use multiple layers of such neurons to approximate pretty much any function $f$. There is no learning/training required for this problem; you should be able to guess/derive the weights and biases of the networks by hand.
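As a warm-up, here is a minimal sketch (not a solution to the exercise) of how hand-picked weights and biases combine step neurons into a small network. The specific values ($w = 1$, $b = -1$, etc.) are hypothetical choices that make two first-layer neurons and one output neuron realize the indicator of the interval $[1, 2)$:

```python
import numpy as np

def step(z):
    """Unit step activation: 1 if z >= 0, else 0."""
    return (z >= 0).astype(float)

def neuron(x, w, b):
    """Single scalar neuron: y = step(w * x + b)."""
    return step(w * x + b)

# Hypothetical hand-chosen weights/biases: a two-layer network that
# computes the indicator of [1, 2), i.e. y = 1 exactly when 1 <= x < 2.
x = np.linspace(0, 3, 7)
h1 = neuron(x, w=1.0, b=-1.0)   # fires when x >= 1
h2 = neuron(x, w=1.0, b=-2.0)   # fires when x >= 2
y = step(h1 - h2 - 0.5)         # fires when h1 = 1 and h2 = 0
print(np.column_stack([x, y]))
```

Such "bump" indicators are the basic building block: summing shifted and scaled bumps lets a network of step neurons approximate a target $f$ to any desired resolution.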