__http://schneider.ncifcrf.gov/information.is.not.uncertainty.html__

is a discussion that is hard to unravel because Schneider almost always uses the example of the conditional entropy of a decision point in the next base pair.

Berry paradox:

I think Elsberry references this and doesn't see its connection to his example function that searches program space for a Q-compressible string.

__http://www.cs.auckland.ac.nz/CDMTCS/chaitin/unm2.html__

What if we instead ask for "the first positive integer that cannot be specified in less than a billion words"? Everything has a rather different flavor. Let's see why.

The first problem we've got here is what does it mean to specify a number using words in English? This is very vague. So instead let's use a computer. Pick a standard general-purpose computer, in other words, pick a universal Turing machine (UTM). Now the way you specify a number is with a computer program. When you run this computer program on your UTM it prints out this number and halts. So a program is said to specify a number, a positive integer, if you start the program running on your standard UTM, and after a finite amount of time it prints out one and only one great big positive integer and it says ``I'm finished'' and halts.
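This notion of specification can be made concrete with a toy sketch (hypothetical code, standing in for a real UTM): a "specification" of a number is a Python program whose source prints exactly one integer and halts, and the cost of the specification is the length of that source. An 18-character program can specify a 101-digit number.

```python
# Toy stand-in for "specifying a number with a program on a UTM":
# a specification is a program text that prints one integer and halts,
# and its cost is the length of that text.
import contextlib
import io


def specifies(program_text):
    """Run the program text; return the single integer it prints, or None."""
    out = io.StringIO()
    with contextlib.redirect_stdout(out):
        exec(program_text, {})
    tokens = out.getvalue().split()
    return int(tokens[0]) if len(tokens) == 1 else None


# A very short program specifying a very large number (10**100):
short_spec = "print(10**(10**2))"
n = specifies(short_spec)
print(len(short_spec))  # length of the specification: 18 characters
print(len(str(n)))      # digits in the number it specifies: 101
```

The gap between the two lengths is the point: the program is a far shorter specification of the number than the number's own decimal expansion.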


Bernoulli's Principle of Insufficient Reason:

They appeal to searches with “links” in the optimization space and smoothness constraints that enable “hill-climbing” optimization [32]. Prior knowledge about the smoothness of a search landscape, required for gradient-based hill-climbing, is not only common but is also vital to the success of some search optimizations.


Such procedures, however, are of little use when searching to find a sequence of, say, 7 letters from a 26-letter alphabet to form a word that will pass successfully through a spell checker, or when choosing a sequence of commands from 26 available commands to generate a logic operation such as XNOR [46]. The ability of a search procedure to work better than average on a class of problems is not prohibited by conservation of information (COI).
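The contrast can be made concrete with a toy sketch (hypothetical code, not from [46]): hill-climbing succeeds on a smooth landscape because neighboring fitness values leak information about where the optimum lies, while an all-or-nothing "spell checker" landscape gives it nothing to climb.

```python
import random
import string

random.seed(0)


def hill_climb(fitness, start, neighbors, steps=1000):
    # Greedy hill-climbing: move to the best neighbor whenever it improves.
    x = start
    for _ in range(steps):
        best = max(neighbors(x), key=fitness)
        if fitness(best) > fitness(x):
            x = best
    return x


# Smooth landscape: fitness degrades gradually away from the optimum at 37,
# so each local comparison points toward the target.
smooth = lambda x: -(x - 37) ** 2
found = hill_climb(smooth, start=0, neighbors=lambda x: [x - 1, x + 1])
print(found)  # 37: the gradient leads straight to the optimum

# All-or-nothing landscape, like "passes the spell checker": fitness is 1 on
# the target word and 0 everywhere else, so neighbors carry no signal.
target = "example"
flat = lambda w: 1 if w == target else 0


def mutate(w):
    i = random.randrange(len(w))
    return w[:i] + random.choice(string.ascii_lowercase) + w[i + 1:]


found_word = hill_climb(flat, "aaaaaaa", lambda w: [mutate(w) for _ in range(5)])
print(found_word)  # still "aaaaaaa": every neighbor scores 0, so no move ever improves
```

On the flat landscape the climber never leaves its starting point, which is the sense in which smoothness is prior knowledge the procedure depends on.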

__http://schneider.ncifcrf.gov/paper/ev/dembski/specified.complexity.html__

He shows that pure random chance cannot create information, and he shows how a simple smooth function (such as y = x^2) cannot gain information. (Information could be lost by a function that cannot be mapped back uniquely: y = sin(x).) He concludes that there must be a designer to obtain CSI. However, natural selection has a branching mapping from one to many (replication) followed by a pruning mapping of the many back down to a few (selection).
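That branching-then-pruning structure can be sketched with a toy replication/selection loop (a hypothetical illustration, not Schneider's ev program; the target string, copy count, and mutation rate are arbitrary):

```python
import random

random.seed(1)

TARGET = "binding"  # arbitrary target, standing in for a binding site
ALPHABET = "abcdefghijklmnopqrstuvwxyz"


def fitness(genome):
    # Selection criterion: number of positions matching the target.
    return sum(a == b for a, b in zip(genome, TARGET))


def replicate(genome, copies=100, mut_rate=0.1):
    # One-to-many branching: many imperfect copies of a single genome.
    return ["".join(random.choice(ALPHABET) if random.random() < mut_rate else c
                    for c in genome)
            for _ in range(copies)]


genome = "aaaaaaa"
for _ in range(300):
    # Many-to-few pruning: keep only the best of parent plus offspring.
    genome = max(replicate(genome) + [genome], key=fitness)

print(genome, fitness(genome))
```

Neither the replication step nor the pruning step is a one-to-one smooth function, which is why the one-to-one argument above does not cover it.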

__http://schneider.ncifcrf.gov/pitfalls.html__

**Treating Uncertainty (H) and Entropy (S) as identical OR treating them as completely unrelated.** The former philosophy is clearly incorrect because uncertainty has units of bits per symbol while entropy has units of joules per kelvin. The latter philosophy is overcome by noting that the two can be *related* if one can correlate the probabilities of the microstates of the system under consideration with the probabilities of the symbols.
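When the two sets of probabilities do coincide, the unit mismatch is bridged by the standard conversion S = k_B ln(2) · H. A minimal sketch (the four-symbol distribution is just an example):

```python
import math

# If microstate probabilities equal the symbol probabilities, the same
# distribution can be scored either in bits (uncertainty H) or in J/K
# (entropy S), related by S = k_B * ln(2) * H.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact since the 2019 SI)


def uncertainty_bits(probs):
    """Shannon uncertainty H = -sum p*log2(p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)


def entropy_joules_per_kelvin(probs):
    """Gibbs/Boltzmann entropy of the same distribution, in J/K."""
    return K_B * math.log(2) * uncertainty_bits(probs)


probs = [0.25] * 4  # e.g. four equiprobable DNA bases
print(uncertainty_bits(probs))           # 2.0 bits per symbol
print(entropy_joules_per_kelvin(probs))  # ~1.91e-23 J/K
```

The tiny size of k_B ln(2) is why per-symbol informational entropies look negligible on thermodynamic scales, even though the two quantities measure the same distribution.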

**Claim of identical**:

__William Dembski__, in the book __No Free Lunch__, stated that the two forms are mathematically identical (page 131).

-----------------------------------------------

__http://schneider.ncifcrf.gov/paper/ev/dembski/specified.complexity.html__

The random number generator used in ev *is a deterministic function*, yet the ev program clearly shows an increase in the information (**as defined by Shannon**) in the binding sites. (In other words, all the complex discussion and mathematics that Dr. Popescru puts out is a smoke screen that covers the simple situation at hand.) There is also the point from thermodynamics that information can be gained in a system (i.e., the entropy can go down) so long as there is at least a minimum compensating increase in the entropy outside the system.

__http://schneider.ncifcrf.gov/glossary.html#information__

Information is measured as the decrease in __uncertainty__ of a receiver or __molecular machine__.

----------------------------------------------

__http://schneider.ncifcrf.gov/pitfalls.html__

... imagined flipping a coin 1000 times to get 1000 bits of information. ... So a random sequence going into a receiver does not decrease the uncertainty of the receiver and so no information is received. But a message does allow for the decrease. Even the same signal can be information to one receiver and noise to another, depending on the receiver!
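Schneider's measure (information received = decrease in the receiver's uncertainty, R = H_before − H_after) is easy to compute; in this sketch the two receivers and their distributions are hypothetical:

```python
import math


def H(probs):
    """Shannon uncertainty in bits: H = -sum p*log2(p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)


# Receiver A is deciding among 4 equiprobable DNA bases; the signal
# resolves the choice completely, so A's uncertainty drops to zero.
before_A = [0.25, 0.25, 0.25, 0.25]
after_A = [1.0, 0.0, 0.0, 0.0]
print(H(before_A) - H(after_A))  # 2.0 bits received

# Receiver B cannot interpret the signal: its distribution is unchanged,
# so the very same signal carries zero information for B.
before_B = [0.25, 0.25, 0.25, 0.25]
after_B = before_B
print(H(before_B) - H(after_B))  # 0.0 bits: the signal is noise to B
```

The same signal yields 2 bits for one receiver and 0 for the other, which is exactly the receiver-dependence the passage describes.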
