As our knowledge of the world has developed, it has been established that objects and physical systems are made of smaller pieces that drive their behaviour. When a theory and its calculations do not take into account the smallest parts that compose the system, a considerable amount of information is left out, which implies a loss of prediction accuracy.
If "the last elements" are defined as the elements that contains the whole information of one physics system, then for one physics system that contains a certain amount of information, the lasts elements have to exist.
In a physical world there exist magnitudes such as velocity, charge, mass and so on. The amount of information obtained when we tag every one of the smallest particles of a system with its own magnitudes is greater than the amount obtained when we tag groups of particles with their aggregate magnitudes, because many different microscopic configurations share the same aggregates. That is why the "last elements" of a system are also the smallest ones.
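A minimal numeric illustration of this point (the values are invented, not from the text): two different microscopic configurations can share the same aggregate magnitude, so the aggregate alone carries less information than the individual tags.

```python
import numpy as np

# Two different sets of particle velocities that share the same aggregate (the mean).
velocities_a = np.array([1.0, 2.0, 3.0])
velocities_b = np.array([0.0, 2.0, 4.0])

# The aggregate cannot tell the two microscopic configurations apart,
# so tagging only the group loses information that the individual tags keep.
print(velocities_a.mean() == velocities_b.mean())   # True
```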
Uncertainty
In this text we define uncertainty as the degree of deviation between a calculation made within a theory and the real situation it intends to simulate. For example, a broker who uses charting methods for his investments works with a high uncertainty, because the predictive power that charting contributes to a real economic forecast is null.
So, we can proceed as follows:
- First, we calculate the results T that the theory gives for all n possible physical circumstances.
- Second, we access a register that contains the real results R for all possible physical circumstances.
- Third, using T and R we calculate an indicator that gives the level of uncertainty of the theory, for example one of the kind sketched below.
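The indicator itself is not reproduced in the text; as an assumption consistent with the relative deviation (T-R)/R used later on, one plausible form is the mean relative deviation over the n circumstances:

```latex
\[
  U \;=\; \frac{1}{n} \sum_{i=1}^{n} \left| \frac{T_i - R_i}{R_i} \right|
\]
```

Under this reading, a theory with strict causality (next section) would have U = 0.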
Strict causality
We understands "strict causality" when the theory has 0 uncertainty. A strict causality theory has to include the "last elements", in other case it would exist information out of model that is not being considered that affects the behaviour of the system. Today this last elements are considered the particles of the standard model, but the uncertainty of this model (whether Heisenberg uncertainty is fundamental or not) is not 0.
Its defines "partial causality" when the theory has uncertainty. In this group we have theories or models with low, medium or high uncertainties. We have theories that gives priceless knowledge with good approach to reality, theories with some predictable capacities, or theories as one call to astrologist.
Scale of randomness
First we are going to consider all the experiments that a theory can emulate, which have real results R. For every theory whose uncertainty is bigger than 0 and which intends to calculate R exactly, there exists a digit at which the calculation carries the same uncertainty as choosing that digit by tossing a coin. In other words, there are digits in a calculation that we can ignore because they give us no additional information. This is the scale of randomness of a model, and it is bigger or smaller depending on how uncertain the model is.
If we plot the magnitude (T-R)/R for every n, the scale of randomness is given by the two margins [+u,-u] that contain exactly half of the points. This scale can be understood as the volume of phase space that a model is able to delimit with a 50% probability of the result lying inside it. This is the scale at which a model stops giving us more information.
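A minimal sketch of this construction, assuming the margin u is the symmetric band containing half of the relative deviations (the function name and the use of the median are assumptions, not the author's code):

```python
import numpy as np

def scale_of_randomness(T, R):
    """Margin u such that the band [-u, +u] contains half of the relative
    deviations (T - R) / R, plus the first relative digit that is no better
    determined than a coin toss."""
    T, R = np.asarray(T, dtype=float), np.asarray(R, dtype=float)
    rel_dev = (T - R) / R
    u = np.median(np.abs(rel_dev))            # half of the points fall inside [-u, +u]
    # Relative digits finer than u add no information beyond chance.
    first_random_digit = int(np.ceil(-np.log10(u))) if u > 0 else None
    return u, first_random_digit

# Invented example: a model accurate to roughly three significant figures.
R = np.linspace(1.0, 2.0, 1000)
T = R * (1.0 + np.random.default_rng(0).normal(0.0, 1e-3, R.size))
print(scale_of_randomness(T, R))
```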
At any given moment, the capacity to obtain information from a system that we intend to emulate is limited, and so is the computational capacity to work on this information inside a model.
Theories by scales
The theory that emulates the behaviour of the last elements cannot do it when the number of particles becomes too large, because of technical limits. There is a moment when we need to propose a new model, with new definitions, simplifications and rules, to emulate a bigger system. We have to eliminate information per item in order to increase the number of items that we can consider in our computational model. Cutting this information brings an increase in the uncertainty of the new model, necessarily about the state of the last elements but also about the state of the new conjugated elements.
As we see in the previous figure, theory A is the one that works with the last elements and contains the whole information of the system, but it is limited to a small number of particles. At that point we can use a theory B that includes some simplifications and is practical for a bigger number of elements, because we have lowered the density of information per particle. We can imagine another theory C that follows B, and so on.
Changing to a bigger scale theory
Imagine that the information a theory works on has the form of a vector (a, b, c, ..., n) for each of the individual elements. We can arrange these vectors into a matrix, where each row is one vector. A theory then performs a transformation on this matrix that yields another one, corresponding to the future, the past, and so on.
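A minimal sketch of this picture (the state layout and the dynamics are assumptions chosen only for illustration): each row is one element's vector, and applying the theory is a transformation of the whole matrix into the matrix at the next instant.

```python
import numpy as np

def step(state: np.ndarray, dt: float = 0.01) -> np.ndarray:
    """One application of the 'theory': map the present matrix to the future one.
    Each row is an element with columns (position, velocity)."""
    positions, velocities = state[:, 0], state[:, 1]
    return np.column_stack([positions + velocities * dt, velocities])

state = np.array([[0.0, 1.0],
                  [2.0, -0.5]])     # two elements, one row each
print(step(state))                  # the matrix that corresponds to the future
```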
The matrix that contains all the information of the system is the last matrix. To create a new arrangement of information for a bigger-scale model, we combine its rows and columns to obtain the new matrix that the bigger-scale theory works on. For example, a new column called Temperature is a statistical combination of the velocities of the particles of the system. In this way we get a much smaller matrix to work on.
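A sketch of this coarse-graining, with invented numbers and with temperature taken (as an assumption) to be proportional to the variance of velocities within a group of particles:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, group_size = 1000, 100

# The "last matrix": one row per particle, columns (position, velocity, mass).
last_matrix = np.column_stack([
    rng.uniform(0.0, 1.0, n_particles),   # position
    rng.normal(0.0, 1.0, n_particles),    # velocity
    np.full(n_particles, 1.0),            # mass
])

# Combine rows (groups of particles) and columns (their velocities) into
# aggregate magnitudes for the bigger-scale theory.
groups = last_matrix.reshape(-1, group_size, 3)
mean_velocity = groups[:, :, 1].mean(axis=1)
temperature = groups[:, :, 1].var(axis=1)      # "Temperature" from the velocities
coarse_matrix = np.column_stack([mean_velocity, temperature])
print(last_matrix.shape, coarse_matrix.shape)  # (1000, 3) -> (10, 2)
```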
The result of iterating in this way is a handy matrix, with new parameters, ready to apply this knowledge to more complex and demanding subjects.
Every theory has its own matrix
We would be mistaken to say that, as we reach more and more general models, we need to abandon the combination of rows and columns to create elements like "mountains, animals, wages": today voice recognition software works very well and is based on combining basic physical information such as pressure.
A sufficient argument for this is the following. Every physical system capable of being understood has its own last matrix, and every imaginable concept only appears when the last matrix holds certain determinate data. We can build a collection of last matrices that contain a given concept and compose an equation that tells us whether a wide element like "dog" is present. In this way we can build a correspondence between last matrices and things of the macroscopic world.
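A toy sketch of such a correspondence (entirely illustrative: the concept, the threshold and the matrix layout are invented). A macroscopic concept is modelled as a predicate over the last matrix; a wide element like "dog" would need a far richer function of the same kind.

```python
import numpy as np

def concept_present(last_matrix: np.ndarray, threshold: float = 1.5) -> bool:
    """Decide whether the macroscopic concept 'hot region' appears in this
    microscopic configuration (column 1 is taken to be velocity)."""
    velocities = last_matrix[:, 1]
    return float(np.mean(velocities ** 2)) > threshold

example = np.column_stack([np.zeros(100),
                           np.random.default_rng(1).normal(0.0, 2.0, 100),
                           np.ones(100)])
print(concept_present(example))   # True for this invented configuration
```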