Yingshaoxo's thinking about machine learning

Machine Learning is all about try_and_cache. (Try an action, then record whether that action was good or bad for reaching a goal.)
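
A minimal Python sketch of this try_and_cache loop, assuming a toy goal of reaching a target number; the cache only records whether each (state, action) pair moved us closer to the goal:

```python
action_cache = {}  # (state, action) -> True if it moved us toward the goal

def try_and_cache(state, goal, actions=(+1, -1)):
    # try every action once, and cache whether it was good or bad for the goal
    for action in actions:
        new_state = state + action
        action_cache[(state, action)] = abs(goal - new_state) < abs(goal - state)
    # then pick an action that the cache says is good
    for action in actions:
        if action_cache[(state, action)]:
            return action
    return actions[0]

state, goal = 0, 3
while state != goal:
    state += try_and_cache(state, goal)
print(state)         # 3
print(action_cache)  # the remembered good/bad judgements
```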


First, it has some basic input data and actions. It tries to memorize basic input-data-to-action pairs.
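
A sketch of that first step, using a plain dict as the memory; the input-data-to-action pairs here (lights and stop/go) are made up purely for illustration:

```python
memory = {}

def learn(input_data, action):
    # remember one input_data-to-action pair
    memory[input_data] = action

def act(input_data):
    # None means "never seen this input, no idea yet"
    return memory.get(input_data)

learn("red_light", "stop")
learn("green_light", "go")

print(act("red_light"))   # stop
print(act("blue_light"))  # None
```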


Then, if a single data element is not enough to overcome the difficulties, it will try to memorize input-data-sequence-to-action pairs.
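
A sketch of that sequence step, keying the memory on tuples of recent inputs instead of single elements; the example sequences are invented:

```python
sequence_memory = {}

def learn_sequence(input_sequence, action):
    # remember the whole recent input sequence as one key
    sequence_memory[tuple(input_sequence)] = action

def act_on_sequence(input_sequence):
    return sequence_memory.get(tuple(input_sequence))

# the single element "light_off" is ambiguous, but the sequence is not
learn_sequence(["light_on", "light_off"], "wait")
learn_sequence(["light_off", "light_off"], "check_power")

print(act_on_sequence(["light_on", "light_off"]))   # wait
print(act_on_sequence(["light_off", "light_off"]))  # check_power
```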


The input sequence (or list) length will get bigger and bigger until it overcomes the difficulties.
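
One way this growing could look, as a sketch: start from length 1 and extend the window until the remembered key points to exactly one action. The helper names and the length limit are assumptions, not a fixed algorithm:

```python
from collections import defaultdict

# memory_by_length[length][sequence_tuple] -> set of actions seen for that key
memory_by_length = defaultdict(lambda: defaultdict(set))

def learn(history, action, max_length=4):
    for length in range(1, min(max_length, len(history)) + 1):
        key = tuple(history[-length:])
        memory_by_length[length][key].add(action)

def act(history, max_length=4):
    for length in range(1, min(max_length, len(history)) + 1):
        key = tuple(history[-length:])
        actions = memory_by_length[length][key]
        if len(actions) == 1:   # unambiguous: this length is enough
            return next(iter(actions))
    return None                 # still ambiguous or unknown

learn(["a", "x"], "action_1")
learn(["b", "x"], "action_2")
print(act(["a", "x"]))  # action_1 (length 1 is ambiguous, length 2 is not)
```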


The interesting part about human thinking is that sometimes they'll do abstraction: they use a short_id to replace a long input sequence, so that they don't have to remember a lot of long sequences, only a few special examples.
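
A sketch of that abstraction, assuming we simply hand out incrementing ids; the table and the id format are made up for illustration:

```python
abstraction_table = {}   # long sequence -> short_id
action_memory = {}       # short_id -> action
next_id = 0

def abstract(long_sequence):
    # give a long sequence a short_id, reusing the id if we have seen it before
    global next_id
    key = tuple(long_sequence)
    if key not in abstraction_table:
        abstraction_table[key] = f"id_{next_id}"
        next_id += 1
    return abstraction_table[key]

long_greeting = ["hi", "how", "are", "you", "doing", "today"]
short_id = abstract(long_greeting)       # e.g. "id_0"
action_memory[short_id] = "reply_greeting"

print(action_memory[abstract(long_greeting)])  # reply_greeting
```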


And sometimes it will meet unknown long-sequence input data. In this case, it will try to use random small-length sub-sequence data to perform actions.
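
A sketch of that fallback, assuming the memory already knows a few small sub-sequences; the sampling strategy (uniform random start and length) is just one possible choice:

```python
import random

sequence_memory = {
    ("open", "door"): "walk_in",     # made-up known sub-sequences
    ("say", "hello"): "wave",
}

def act_on_unknown(long_sequence, tries=10, max_length=2):
    # sample random small sub-sequences of the unknown input and see if
    # the memory knows what to do with any of them
    for _ in range(tries):
        length = random.randint(1, max_length)
        start = random.randint(0, len(long_sequence) - length)
        sub_sequence = tuple(long_sequence[start:start + length])
        if sub_sequence in sequence_memory:
            return sub_sequence, sequence_memory[sub_sequence]
    return None, None

unknown_input = ["walk", "to", "the", "house", "open", "door", "slowly"]
print(act_on_unknown(unknown_input))  # often (('open', 'door'), 'walk_in')
```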


If it somehow wins a success by using random sub_sequence data, it will remember those sub_sequences as a new long-sequence memory.
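
A sketch of that remembering step; the structure stored under the long sequence (its useful parts plus the action) is my assumption about what "remember those sub_sequences" could mean:

```python
long_sequence_memory = {}

def remember_success(full_sequence, winning_sub_sequences, action):
    # store the sub-sequences that won, under the full long sequence,
    # so next time no random search is needed
    long_sequence_memory[tuple(full_sequence)] = {
        "useful_parts": [tuple(s) for s in winning_sub_sequences],
        "action": action,
    }

full_input = ["walk", "to", "the", "house", "open", "door", "slowly"]
remember_success(full_input, [["open", "door"]], "walk_in")
print(long_sequence_memory[tuple(full_input)]["action"])  # walk_in
```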


But if his/her memory is about to run out, he/she will try to use sub_sequences to reach the same result, so that it won't take too much memory and computing resources.
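
A sketch of that compression, under an assumed memory limit: replace each long sequence with a short tail sub-sequence when the shorter key still gives the same result, and keep the long form only when there is a conflict:

```python
MEMORY_LIMIT = 2

memory = {
    ("walk", "to", "the", "house", "open", "door"): "walk_in",
    ("run", "to", "the", "shop", "open", "door"): "walk_in",
    ("see", "a", "friend", "say", "hello"): "wave",
}

def compress(memory, sub_length=2):
    # keep only the last few elements of each key if that shorter key
    # still points to a single, unambiguous action
    compressed = {}
    for sequence, action in memory.items():
        short_key = sequence[-sub_length:]
        if compressed.get(short_key, action) == action:
            compressed[short_key] = action
        else:
            compressed[sequence] = action   # conflict: keep the long form
    return compressed

if len(memory) > MEMORY_LIMIT:
    memory = compress(memory)
print(memory)  # {('open', 'door'): 'walk_in', ('say', 'hello'): 'wave'}
```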


In this case, those small rules become common sense or basic rules.