Mar 19

Apophenia has come to represent the human bias and tendency to seek patterns in random information. Our brains crave patterns and strive to make sense of things. It’s looking at a random cloud and remarking how it resembles a duck with a bill. It’s the man in the moon, the Jesus toast, etc.

Luke’s “randomania”, on the other hand, is the flip side of the coin: the tendency to attribute chance or randomness to what is actually patterned data. It is the bias of thinking there is nothing to be seen or discovered when there really is. It’s rather rare to catch ourselves doing this, because once we decide something is just “noise” we tend to ignore it and walk on by, never to return.

anonymous writes: 

An interesting thing about markets is that at one level of focus one sees noise, yet over the same time period, at a higher level of granularity, there are regularities (a simulation sketched below illustrates this).

In an apparent anomaly, the physical laws may differ at subatomic levels from those at larger scales.
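
The commenter’s first point lends itself to a quick simulation. Below is a minimal sketch under assumptions entirely my own (a synthetic return series with a small constant drift buried in much larger noise; all numbers are illustrative): step by step the series looks like pure noise, but summed over longer windows the drift surfaces, because the signal-to-noise ratio of the aggregate grows roughly with the square root of the window length.

```python
# Illustrative only: a tiny drift hidden in large noise looks random
# at fine granularity but shows a regularity once aggregated.
import numpy as np

rng = np.random.default_rng(42)
drift, noise = 0.01, 1.0                # signal is 1% the size of the noise
steps = rng.normal(loc=drift, scale=noise, size=100_000)

for window in (1, 100, 10_000):
    # Sum returns over non-overlapping windows of the given length.
    agg = steps.reshape(-1, window).sum(axis=1)
    # The mean grows with the window, the standard deviation only with its
    # square root, so the signal-to-noise ratio improves as windows widen.
    snr = agg.mean() / agg.std()
    print(f"window={window:>6}  mean={agg.mean():8.3f}  SNR={snr:6.3f}")
```

At window length 1 the drift is invisible (SNR around 0.01); at 10,000 steps per window it should dominate (SNR on the order of 1). The same data, viewed at two granularities, reads as randomness in one and regularity in the other.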


Comments

1 Comment so far

  1. Sir John Law on April 2, 2018 12:45 pm

    Machine learning offers insight here, through what is called the bias-variance dilemma.

    In machine learning, a computer is fed training sets of data and, using a model, attempts to form generalizations that will hold true for other data sets. Ideally, a programmer wants a model that both accurately captures the specifics of the training set and generalizes well to unseen data. Unfortunately, it is impossible to do both perfectly at once, and thus any chosen model will suffer from either bias or variance error. Models that suffer from bias error make erroneous simplifying assumptions, which cause their generalizations to distort or miss important trends in the data (too broad or simplified). Models that suffer from variance error are too sensitive to the particularities of their training set and make generalizations that do not hold well for other data sets (too specific or parochial). In computer science, coders use a method called ensembling, in which a myriad of models, both bias- and variance-oriented, essentially “vote” on the training set in order to form a consensus generalization.

    Conclusion: Any process of generalization is going to be flawed; it will be either too broad or too specific. However, any generalization can benefit from ensembling. Having many models, both bias- and variance-oriented, form generalizations, and combining those generalizations in some sort of weighted voting mechanism, should produce results superior to using any single model (a sketch follows this comment).

    (https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff)
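
The commenter’s voting idea is easy to sketch. Below is a minimal, hedged illustration using scikit-learn; the dataset, the two model choices, and the plain averaging “vote” are my own assumptions, not anything from the comment. A linear fit stands in for a high-bias model, an unpruned decision tree for a high-variance one, and VotingRegressor averages their predictions:

```python
# Illustrative sketch: ensembling a high-bias and a high-variance model.
import numpy as np
from sklearn.linear_model import LinearRegression   # high bias: assumes a line
from sklearn.tree import DecisionTreeRegressor      # high variance: memorizes noise
from sklearn.ensemble import VotingRegressor        # averages the predictions
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Noisy nonlinear data: y = sin(x) + noise.
X = rng.uniform(0.0, 6.0, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=300)
X_train, y_train = X[:200], y[:200]
X_test, y_test = X[200:], y[200:]

biased = LinearRegression()                      # too simple: misses the curve
varied = DecisionTreeRegressor(random_state=0)   # unpruned: chases the noise
vote = VotingRegressor([("lin", biased), ("tree", varied)])

for name, model in [("linear (bias)", biased),
                    ("deep tree (variance)", varied),
                    ("ensemble (vote)", vote)]:
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name:22s} test MSE: {mse:.3f}")
```

On data like this the linear model misses the curve and the unpruned tree chases the noise; the averaged prediction usually lands between the two and often beats at least one of them, though ensembling is a heuristic, not a guarantee. The comment’s deeper point holds regardless: expected test error decomposes into bias squared, variance, and irreducible noise, and the first two trade off against each other.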
