Wednesday, August 28, 2013


At the end of
is a discussion that is hard to unravel, because Schneider almost always gives the example of the conditional entropy of a decision point at the next base pair.
Berry paradox:
I think Elsberry references this but doesn't see its connection to his example function that searches program space for a Q-compressible string.
What happens if you start with this
    ``the first positive integer that cannot be specified in less than a billion words''
instead? Everything has a rather different flavor. Let's see why. 
The first problem we've got here is what does it mean to specify a number using words in English? This is very vague. So instead let's use a computer. Pick a standard general-purpose computer, in other words, pick a universal Turing machine (UTM). Now the way you specify a number is with a computer program. When you run this computer program on your UTM it prints out this number and halts. So a program is said to specify a number, a positive integer, if you start the program running on your standard UTM, and after a finite amount of time it prints out one and only one great big positive integer and it says ``I'm finished'' and halts.
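Here is a toy sketch of that formal version of the Berry paradox. The "UTM" below is my own stand-in (an assumption, not anything from the original): programs are short arithmetic expressions run through Python's eval, with program length measured in characters over a small alphabet. The searcher enumerates every program up to a length bound and returns the first positive integer none of them outputs.

```python
from itertools import product

# Toy "UTM" (an assumed simplification): a program is a short arithmetic
# expression; running it means evaluating it and taking an integer output.
def run(program):
    """Return the positive integer a 'program' specifies, or None."""
    try:
        result = eval(program, {"__builtins__": {}})
        return result if isinstance(result, int) and result > 0 else None
    except Exception:
        return None

def first_unspecifiable(max_len, alphabet="0123456789+*"):
    """First positive integer not output by any program of length <= max_len."""
    produced = set()
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            out = run("".join(chars))
            if out is not None:
                produced.add(out)
    n = 1
    while n in produced:
        n += 1
    return n

print(first_unspecifiable(3))  # → 1000: every integer up to 999 has a <=3-char program
```

The paradox is visible right here: `first_unspecifiable` is itself a short program that specifies the very number that is supposed to have no short specification, which is why program-size complexity must be uncomputable rather than merely hard.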
Bernoulli's Principle:
They appeal to searches with “links” in the optimization space and smoothness constraints that enable “hill-climbing” optimization [32]. Prior knowledge about the smoothness of a search landscape, required for gradient-based hill-climbing, is not only common but is also vital to the success of some search optimizations.
Such procedures, however, are of little use when searching to find a sequence of, say, 7 letters from a 26-letter alphabet to form a word that will pass successfully through a spell checker, or when choosing a sequence of commands from 26 available commands to generate a logic operation such as XNOR [46]. The ability of a search procedure to work better than average on a class of problems is not prohibited by COI.
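The contrast can be sketched in a few lines (my own illustration, not code from the paper): greedy hill-climbing homes in on the peak of a smooth landscape, but the 7-letter word search offers it no traction, because each of the 26**7 candidate strings is simply right or wrong with no gradient to follow.

```python
import random

random.seed(0)  # make the run repeatable

def hill_climb(f, x0, step=0.1, iters=2000):
    """Greedy hill-climbing: move to a random neighbor whenever it improves f."""
    x = x0
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

# Smooth landscape with a single peak at x = 3: smoothness is the prior
# knowledge that makes local moves informative.
peak = hill_climb(lambda x: -(x - 3.0) ** 2, x0=0.0)
print(round(peak, 1))  # converges near 3.0

# The word search has no such structure: just a flat space of candidates.
print(26 ** 7)  # 8031810176 possible 7-letter strings
```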
He shows that pure random chance cannot create information, and he shows how a simple smooth function (such as y = x^2) cannot gain information. (Information could be lost by a function that cannot be mapped back uniquely: y = sin(x).) He concludes that there must be a designer to obtain CSI. However, natural selection has a branching mapping from one to many (replication) followed by a pruning mapping of the many back down to a few (selection).
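The information-loss point is easy to make concrete (a minimal sketch of my own, using x mod 4 as the many-to-one map in place of sin(x)): a function that cannot be inverted uniquely collapses distinct states together, and the Shannon entropy of the output drops accordingly.

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of values."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * log2(c / n) for c in counts.values())

xs = list(range(8))        # 8 equally likely states: 3 bits of entropy
ys = [x % 4 for x in xs]   # many-to-one map: two x's land on each y

print(entropy(xs), entropy(ys))  # 3.0 2.0 — one bit lost, unrecoverable
```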
Treating Uncertainty (H) and Entropy (S) as identical, OR treating them as completely unrelated. The former philosophy is clearly incorrect, because uncertainty has units of bits per symbol while entropy has units of Joules per Kelvin. The latter philosophy is overcome by noting that the two can be related if one can correlate the probabilities of the microstates of the system under consideration with the probabilities of the symbols.
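Under that correlating assumption, the unit mismatch is bridged by a constant: S = k_B · ln(2) · H, with H in bits. A quick sketch of the conversion (my own illustration of the relation stated above):

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI)

def entropy_joules_per_kelvin(h_bits):
    """Thermodynamic entropy S = k_B * ln(2) * H, for uncertainty H in bits.

    Valid only under the assumption above: the symbol probabilities
    correspond to the probabilities of the system's microstates.
    """
    return K_B * log(2) * h_bits

print(entropy_joules_per_kelvin(1.0))  # ≈ 9.57e-24 J/K per bit
```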
Claim of identical: William Dembski, in the book No Free Lunch, stated that the two forms are mathematically identical (page 131).
The random number generator used in ev is a deterministic function, yet the ev program clearly shows an increase in the information (as defined by Shannon) in the binding sites. (In other words, all the complex discussion and mathematics that Dr. Popescru puts out is a smoke screen that covers the simple situation at hand.) There is also the point from thermodynamics that information can be gained in a system (i.e., the entropy can go down) so long as there is at least a minimum compensating increase in the entropy outside the system.
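A stripped-down toy (my own sketch, not Schneider's actual ev program) makes the first point: seed the PRNG so the whole run is a deterministic function, and mutation plus selection still drives the evolving sequence to match a fixed "binding site" pattern.

```python
import random

random.seed(42)  # deterministic: the identical run happens every time

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # arbitrary stand-in site pattern

def fitness(genome):
    """Number of positions matching the target site."""
    return sum(g == t for g, t in zip(genome, TARGET))

genome = [random.randint(0, 1) for _ in TARGET]
for generation in range(500):
    mutant = genome[:]
    mutant[random.randrange(len(mutant))] ^= 1  # flip one random bit
    if fitness(mutant) > fitness(genome):       # selection keeps improvements
        genome = mutant

print(fitness(genome), len(TARGET))  # full match despite a deterministic PRNG
```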
Information is measured as the decrease in uncertainty of a receiver or molecular machine.
…imagined flipping a coin 1000 times to get 1000 bits of information. . . . So a random sequence going into a receiver does not decrease the uncertainty of the receiver, and so no information is received. But a message does allow for the decrease. Even the same signal can be information to one receiver and noise to another, depending on the receiver!
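In this before-minus-after measure, R = H_before − H_after, the coin-flip objection dissolves. A small sketch of my own (the four-command receiver is an assumed example):

```python
from math import log2

def uncertainty(probs):
    """Shannon uncertainty (bits) over a receiver's possible states."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A receiver awaiting one of 4 equally likely commands: H_before = 2 bits.
h_before = uncertainty([0.25] * 4)

# A genuine message pins the command down: H_after = 0, so R = 2 bits.
h_after_message = uncertainty([1.0])

# 1000 random coin flips leave the receiver exactly as uncertain about the
# command as before: H_after = H_before, so R = 0 bits received.
h_after_noise = uncertainty([0.25] * 4)

print(h_before - h_after_message, h_before - h_after_noise)  # 2.0 0.0
```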
