
2.9 Summary & Conclusions

Summary

Section 1: The Complexities of Data Accumulation

Where are we now? What have we said? In Section 1, we returned to the basics, looking into the complexities of data accumulation. We discovered data's relativity and its uncertainty. We then looked into the two-dimensionality of Data Accumulation as a way to deal with the ambiguities of measurement. The two dimensions of Real Data are Duration and Data Density. Data Density is the percentage of the time that the Source participated in the Activity. Data Density is linked with a Criterion that separates what is from what is not.
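As a rough illustration (the practice example and the numbers below are hypothetical, not drawn from the Notebook), Data Density can be sketched in a few lines of Python as the fraction of a chosen Duration during which the Source, by some Criterion, participated in the Activity:

    # Hypothetical sketch: Data Density as the fraction of a Duration
    # during which the Source participated in the Activity.
    # The Criterion decides which raw observations count as participation.

    def data_density(observations, criterion):
        """Fraction of observations that satisfy the participation Criterion."""
        counted = [x for x in observations if criterion(x)]
        return len(counted) / len(observations)

    # Example: minutes of practice per day over a ten-day Duration;
    # the Criterion counts a day only if some practice occurred.
    minutes_per_day = [30, 0, 45, 20, 0, 0, 60, 15, 0, 40]
    print(data_density(minutes_per_day, criterion=lambda m: m > 0))  # 0.6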

Section 2: Data Streams

In Section 2A, we looked a little closer at Data Streams. We discovered that there were Real and Random Data Streams. The Real DS had a Source, while the Random DS had no Source. Further, the Real DS broke down into Live and Dead Data Streams. The Live DS were connected to the Source. The Dead DS, while based upon a real Source, were unconnected to it, except through functions. We also found that Live DS were connected to their Source actively, through Will, and passively, through Death. In Section 2B, with two pieces of consecutive Data (and another on the way) we have a Data Stream. With a DS come measures of central tendency and variation. These measures are wholly dependent upon the Data Stream and are independent of the Source. In Section 2C, we see that Data Set measures are fixed while Data Stream measures are always changing. We derive two crucial contextual equations. One is the equation for the Nth Mean Average. The second is the equation for the difference of consecutive means. In Section 2D, we look at some Pie Graphs that illustrate the influence of the various elements on the New Average. In Section 2E, we discuss the concepts of Real & Potential Impact. We see in some graphs that the New Average is quite volatile when the number of samples is small, and quite sedentary when the number of samples is large. Because the Data Stream is always growing, the potential impact of the New Data upon the New Average is always shrinking.
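The two contextual equations of Section 2C, and the shrinking impact of Section 2E, can be sketched roughly as follows. This is a hypothetical illustration in Python, not the Notebook's own notation: the Nth Mean Average is written as the previous mean plus a weighted contribution from the new data point, so the difference of consecutive means is (new data - old mean) / N, which shrinks as N grows.

    # Hypothetical sketch of the contextual mean equations.
    # Nth mean:    new_mean = old_mean + (new_data - old_mean) / N
    # Difference:  new_mean - old_mean = (new_data - old_mean) / N

    def update_mean(old_mean, new_data, n):
        """The Nth Mean Average from the (N-1)th mean and the Nth data point."""
        return old_mean + (new_data - old_mean) / n

    stream = [10, 12, 8, 11, 9, 10, 13, 7, 10, 11]
    mean = 0.0
    for n, x in enumerate(stream, start=1):
        old_mean, mean = mean, update_mean(mean, x, n)
        print(n, round(mean, 3), round(mean - old_mean, 3))
    # The change in the mean (last column) grows smaller and smaller:
    # the potential impact of the New Data on the New Average keeps shrinking.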

Section 3: Data Stream Density

In Section 3A, we introduce the concept of Data Stream Density. It is defined as a ratio between the Probable and the Possible. We distinguish this from the Data Density introduced in the first section: Data Stream Density is, in effect, Data Density moving through Time. In Section 3B, with a series of Time Momenta Diagrams, we distinguish between Random, Dead, and Live Data Streams. To connect Spiral Time Theory with Data Density, we associate Random DS with pure free will, uninfluenced by the past, with a DS Density of zero. We associate Dead DS with a deterministic world ruled by functions or divine law, with a DS Density of one. We associate Live DS with a mixture of these two. Hence its DS Density lies between zero and one. In Section 3C, we look at the Standard Deviation as a measure of variation which allows us to define and predict the limits of the Realm of Probability. We discover that these measures are both descriptive and predictive. In Section 3D, we apply the Deviational limits to our three types of Data Streams and find that they behave as they are supposed to. We also find that, although these central tendencies are predictive, abrupt changes, which we call Quakes, are inherent to Live Data Streams.
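As a minimal sketch of the Deviational limits (assuming the Realm of Probability is bounded by the Average plus or minus some number of Standard Deviations, which is one way to read Section 3C; the stream and the factor of two below are made up):

    # Hypothetical sketch: deviational limits as the Realm of Probability.
    import statistics

    def realm_of_probability(stream, k=2):
        """Limits within which future data is expected to fall."""
        mean = statistics.mean(stream)
        sd = statistics.pstdev(stream)   # population standard deviation
        return mean - k * sd, mean + k * sd

    stream = [10, 12, 8, 11, 9, 10, 13, 7, 10, 11]
    low, high = realm_of_probability(stream)
    print(low, high)  # descriptive of the past, predictive of the future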

Section 4: Data Stream Momentum & Improbable Possibility

In Section 4A, we define Data Stream Density as the ratio of the area of Improbable Possibility to the Range of Possibility. This is achieved by focusing on the Realm of Probability and the Range of Possibility. In 4B, we are able to visualize these concepts geometrically. In 4C, we define Data Stream Momentum as the product of Data Stream Density and the Average. In looking at Spiral Time Theory, we find that the Data Stream Momentum influences the exercise of 'free will'. Because Live DS are spontaneous, they are not predetermined by functions. Because the Past has momentum, the will is not free but conditioned. In 4D, we see how this definition differentiates the three types of Data Streams. We find that Random DS have no momentum, while both Live and Dead Streams have momentum. Although Live and Dead Streams are indistinguishable in terms of Data, Life appears to be unpredictable. Hence Life seems to generate unpredictable Live Data Streams. Although we can never know for certain whether the Streams are truly Live, for practical purposes they act unpredictably alive, and so we assume for this study that they are. They will still obey all the rules of Live Data Streams whether they are Live or Dead.
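Reading 4A and 4C literally, a minimal sketch might take the Density as the Improbable share of the Range of Possibility and the Momentum as Density times Average. The bounds of possibility and the two-Deviation Realm below are assumptions for illustration, not the Notebook's exact geometric construction:

    # Hypothetical sketch of Data Stream Density and Data Stream Momentum.
    import statistics

    def density_and_momentum(stream, possible_min, possible_max, k=2):
        mean = statistics.mean(stream)
        sd = statistics.pstdev(stream)
        range_of_possibility = possible_max - possible_min
        realm_of_probability = min(2 * k * sd, range_of_possibility)
        improbable_possibility = range_of_possibility - realm_of_probability
        density = improbable_possibility / range_of_possibility
        return density, density * mean   # Momentum = Density * Average

    # A Dead Stream (constant data) yields a Density of one and full Momentum;
    # a Random Stream spread across the whole Range drives the Density toward zero.
    print(density_and_momentum([10] * 20, 0, 100))
    print(density_and_momentum([3, 97, 51, 12, 88, 45, 70, 29, 60, 15], 0, 100))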

Section 5: Data Stream Characteristics

In Section 5A, the differences between Live and Dead Data are examined, focusing upon the amount and quality of the information transferred. In 5B, we look at the differences in predictability between Random, Random Average, Dead, and Live Streams. In 5C, we see that choosing different Durations yields totally different characteristics from the same Data Stream. This is Data Stream Emergence. In 5D, we define the difference between a Stream which dies and one that is killed. In 5E, we find that the central tendencies of Data Streams are descriptive and predictive. In 5F, Data Stream Quakes are defined as abrupt changes in Data Stream Momentum. The limits of change are discussed. Also, some of the problems of too much stability are analyzed. As N, the number of samples, grows larger and larger, all the measures become more and more static and unresponsive to change. In 5G, we look at two underlying assumptions of this study. One is that choice exists. The second is that Data Streams have a life of their own.
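As a small illustration of the Data Stream Emergence of 5C (the daily practice stream and the weekly regrouping below are hypothetical), the same underlying data re-sampled at a different Duration shows quite different characteristics:

    # Hypothetical sketch of Data Stream Emergence: regrouping one stream
    # at a coarser Duration produces a stream with different characteristics.
    import statistics

    daily = [0, 0, 60, 0, 0, 0, 45, 0, 0, 50, 0, 0, 0, 55]  # minutes per day

    def regroup(stream, duration):
        """Collapse the stream into sums over consecutive windows of 'duration'."""
        return [sum(stream[i:i + duration]) for i in range(0, len(stream), duration)]

    weekly = regroup(daily, 7)
    print(statistics.pstdev(daily))   # highly erratic at the daily Duration
    print(statistics.pstdev(weekly))  # perfectly steady at the weekly Duration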

Section 6: Flow Density

In Section 6A, we examine the dimensional aspects of Data Accumulation. Then Data Stream Momentum is redefined as Data Stream Flow to avoid brain puns. The differences between DS Flow and physical Momentum are explored. In 6B, Flow Density is defined. We see that total Flow Density is a measure of the orderliness of a Data Stream. In 6C, the generality of the Average and the Deviation is stressed. Again, the disadvantage of our increasingly static measures as N grows larger is discussed. We look again at the change in flow or momentum as a significant descriptive measure.

Section 7: Data Stream Force & Proportional Impact

In Section 7A, Newton's concepts are applied to Data Streams, specifically the idea of Force. Data Stream Acceleration is defined as the change in the averages of a Data Stream. In 7B, we discuss the idea of proportional impact with an everyday example. In 7C, we define the proportional impact of the New Data upon the Stream as its Z Score relative to the Stream.
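A minimal sketch of these two ideas, using the standard Z Score formula (which is one way to read 7C) and a made-up stream:

    # Hypothetical sketch: Data Stream Acceleration and proportional impact.
    import statistics

    def acceleration(old_mean, new_mean):
        """Change in the averages of the Data Stream."""
        return new_mean - old_mean

    def proportional_impact(stream, new_data):
        """Z Score of the New Data relative to the Stream so far."""
        mean = statistics.mean(stream)
        sd = statistics.pstdev(stream)
        return (new_data - mean) / sd

    stream = [10, 12, 8, 11, 9, 10, 13, 7, 10, 11]
    print(proportional_impact(stream, 25))   # a large proportional impact
    print(proportional_impact(stream, 10))   # almost no impact
    print(acceleration(statistics.mean(stream), statistics.mean(stream + [25])))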

Section 8: Interactive Data Streams

In Section 8A, we see that Data Streams are changed by collective impacts, not by individual impacts. In 8B, we define the collective Force or impact as the Data Stream Current. In 8C, Internal Force or Will is discussed. It is discovered that there are four types of will that interact in distinctive ways. In 8D, correlation between Data Streams is discussed. A contextual equation for correlation is derived. In 8E, Newton's Equal and Opposite Law is applied to Data Streams. Finally, in 8F, the logic of the system is examined. It is found that, no matter how hard one looks, there is nothing inside a Data Stream. Its virtual forces act independently of the Source. Numbers rule supreme. Not really.
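The Notebook's contextual equation for correlation is not reproduced here; as a stand-in sketch, the familiar Pearson correlation conveys the idea of two Data Streams moving together (the practice and progress streams below are invented):

    # Stand-in sketch: Pearson correlation between two Data Streams.
    # The Notebook derives its own contextual equation; this is only the
    # standard formulation, used to illustrate the idea.
    import statistics

    def correlation(stream_a, stream_b):
        mean_a, mean_b = statistics.mean(stream_a), statistics.mean(stream_b)
        covariance = sum((a - mean_a) * (b - mean_b)
                         for a, b in zip(stream_a, stream_b)) / len(stream_a)
        return covariance / (statistics.pstdev(stream_a) * statistics.pstdev(stream_b))

    practice = [30, 45, 20, 60, 15, 40]
    progress = [3, 5, 2, 6, 1, 4]
    print(correlation(practice, progress))  # close to +1: the Streams move together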

Conclusions

This Notebook examines many different aspects of Data Streams. It argues that the measures of a Data Stream, specifically its measures of central tendency and change, i.e. the Average and the Deviation, are both descriptive and predictive. Many other derivative measures based upon the Average and the Deviation are defined and examined. All of the Data Stream measures defined so far are based upon a growing N, because the number of elements in any Data Stream is always growing. Once N grows too large, all the measures become very static. If one were looking for the absolutes of a Dead Data Stream, this would be good. But we are looking for sensitive and relevant measures to describe the changes in a Live Data Stream. Live Data Streams, by definition, experience Quakes. The existence of Quakes creates discontinuities, which render traditional functions useless in describing Data Streams. The next Notebook, Decaying Averages, introduces an Average and a Deviation that decay. This creates measures which are both sensitive and relevant to the changes of a Data Stream. The beauty of the Decaying Average system is that the measures and their derivations are identical to those introduced above, with one small but significant difference. Read on to discover the marvelous world of Decaying Averages.
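For a taste of what follows (a guess at the mechanism based only on the description above, not the next Notebook's actual derivation), the 'one small but significant difference' can be sketched as replacing the ever-growing N in the mean update with a fixed Decay factor D, so that each new data point always carries the same weight:

    # Hypothetical sketch of a Decaying Average: the growing N of the
    # ordinary mean update is replaced by a fixed decay factor D, so the
    # measure stays responsive no matter how long the Stream grows.

    def update_decaying_mean(old_mean, new_data, decay=10):
        return old_mean + (new_data - old_mean) / decay

    stream = [10] * 50 + [20] * 10   # a Quake: the Stream jumps from 10 to 20
    ordinary, decaying, n = 0.0, 0.0, 0
    for x in stream:
        n += 1
        ordinary += (x - ordinary) / n                 # growing-N mean: sluggish
        decaying = update_decaying_mean(decaying, x)   # fixed-D mean: responsive
    print(round(ordinary, 2), round(decaying, 2))      # roughly 11.67 vs 16.5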
