Had a chance to catch up and do a CrossFit routine with @melvin_suen this weekend, and what a workout it was! In particular, there was one exercise I struggled with - the rope climb. At first, I did not think I could do it, but after getting it once, I was thrilled. With each rep I adjusted my technique based on the time it took to get up, keeping what worked and dropping what did not - ultimately improving my performance at the station.

A core principle in neural networks is backpropagation. It begins by calculating the error value - simply put, the difference between the desired output and the achieved output. Once this is known, the ‘emphasis’, or weight, applied to each input is updated to minimize the error for the next round. As a result, each round should yield an outcome closer to the desired state, with greater emphasis placed on the inputs (or techniques) that contribute to a better outcome. Note that this is a simplified explanation, and I highly encourage everyone to learn more about it - especially those interested in artificial intelligence!
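For anyone curious, the error-driven weight update described above can be sketched in a few lines of Python. This is a minimal, simplified illustration for a single linear neuron (the function name and values are my own, not from any particular library):

```python
def train_step(weights, inputs, desired, lr=0.1):
    """One round of the error-driven update described above (illustrative)."""
    # Produce an output from the current weights.
    achieved = sum(w * x for w, x in zip(weights, inputs))
    # The error is the gap between the desired and achieved output.
    error = desired - achieved
    # Nudge each weight in proportion to its input's contribution to the output.
    return [w + lr * error * x for w, x in zip(weights, inputs)]

# Repeating the round shrinks the error each time.
weights = [0.0, 0.0]
for _ in range(50):
    weights = train_step(weights, inputs=[1.0, 2.0], desired=3.0)
```

After enough rounds, the achieved output sits very close to the desired value of 3.0 - each pass closes a fixed fraction of the remaining gap.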

How have you applied rapid learning techniques in daily activities? Comment below 👇💯

#CanvasTheory:

1) What outcome are you trying to achieve?

2) Perform the task with a set of known inputs

3) Measure the outcome

4) Determine the delta between what you desired and what you achieved

5) Re-balance the emphasis placed on each input (or technique), where greater emphasis is placed on techniques that worked and less on techniques that did not

6) Repeat

7) Add new inputs / techniques as needed

Note: While this framework is better suited to quantitative problems, it can still be applied to qualitative data
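The steps above can be sketched as a loop. This is a hedged toy sketch, not a real training routine: the technique names, their hidden effectiveness scores, and the measure() function are hypothetical stand-ins for a task you can actually score.

```python
def measure(emphasis, effectiveness):
    # Step 3: the measured outcome improves with emphasis on effective techniques.
    return sum(emphasis[t] * effectiveness[t] for t in emphasis)

def rebalance(emphasis, effectiveness, target, lr=0.1, rounds=30):
    for _ in range(rounds):                        # Step 6: repeat
        outcome = measure(emphasis, effectiveness)  # Steps 2-3: perform & measure
        delta = target - outcome                    # Step 4: desired vs. achieved
        # Step 5: shift emphasis toward techniques that contribute to the outcome.
        emphasis = {t: emphasis[t] + lr * delta * effectiveness[t]
                    for t in emphasis}
    return emphasis

# Hypothetical rope-climb techniques with made-up effectiveness scores.
emphasis = {"grip": 0.0, "footlock": 0.0}
effectiveness = {"grip": 1.0, "footlock": 2.0}
final = rebalance(emphasis, effectiveness, target=10.0)
```

The more effective technique ends up with proportionally more emphasis, which is exactly the re-balancing in step 5. Step 7 (adding new inputs) would just mean adding a new key to the dictionaries.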