Path: illuminati.io.com!uunet!europa.eng.gtefsd.com!howland.reston.ans.net!cs.utexas.edu!not-for-mail
From: ritter@indial1.io.com (Terry Ritter)
Newsgroups: sci.crypt

Subject: Re: Toward Axiomatic Fenced DES (long!)
Date: 6 Jun 1994 01:10:21 -0500
Organization: UTexas Mail-to-News Gateway
Lines: 319
Sender: nobody@cs.utexas.edu
Message-ID: <199406060606.BAA10833@indial1.io.com>
NNTP-Posting-Host: news.cs.utexas.edu

 In <2stndb$9ae@news.umbc.edu> olson@umbc.edu (Bryan G. Olson; CMSC (G))
 writes:


>Maybe I'm nit-picking again, but if you want to re-include stuff I
>edited out of your post, please use a different annotation mark from
>what you use to denote my writing.

 You say that if I want to include *my own* previously-posted stuff,
 in *my own* article, I should use *your* formatting convention?

 Most conversations on Usenet do not use different annotation marks
 for each individual author, even though that might appear to be
 optimal.

 Of course, I am sorry if I confused anyone.


>:   >When no
>:   >distribution is given, we generally assume the uniform distribution.
>
>That's only if you do say, as I did, that the selection is at random
>from some set.  If you don't use the language of statistics the
>default assumption is universal quantification, or in some contexts
>existential quantification.

 This will probably fall on deaf eyes, but I *do not wish* to assume
 a random selection, and therefore do not say that.  I wish instead
 to assume every possibility, which is, of course, the ultimate
 uniform distribution.  This is statistical in the sense that I can
 report statistical values, and I can do this independently of any
 random selection.  If someone later wishes to make some "random"
 selections, statistics would tell them to expect to sample the set
 of all possible results, which I quantify in terms of mean and
 distribution.

 It may be that, at some point, it will be necessary to introduce
 random selection.  I wish to postpone that as long as possible.


>: > 2.7  THEOREM:  (Position difference counts.)   The number of
>: > n-position code values which differ in exactly m positions is
>: > (n)          m
>: > (m) * (|S|-1) .
>: >
>
>I didn't pick on this one, but you better say "from any given code
>word", or it's not clear what they differ from.

 Good comment.
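 Theorem 2.7 is easy to check by brute force.  The sketch below (my
 illustration in Python, not part of the original article) enumerates
 every n-position code over an alphabet S, tallies codes by how many
 positions differ from a fixed reference word, and compares against
 C(n,m) * (|S|-1)**m:

```python
from itertools import product
from math import comb

def distance_counts(n, S):
    """Count code values by the number of positions in which they
    differ from a fixed reference word (by symmetry, any word works;
    we use the all-first-symbol word)."""
    ref = (S[0],) * n
    counts = {}
    for word in product(S, repeat=n):
        m = sum(1 for a, b in zip(word, ref) if a != b)
        counts[m] = counts.get(m, 0) + 1
    return counts

n, S = 4, (0, 1, 2)          # 4 positions over a 3-symbol alphabet
counts = distance_counts(n, S)
for m in range(n + 1):
    assert counts[m] == comb(n, m) * (len(S) - 1) ** m
```

 For binary (|S| = 2) the counts reduce to the plain binomial
 coefficients C(n,m), which is the table given later in example 2.8.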


>The first line of your proof asks us to assume the theorem is true.
>From this assumption you derive something known to be true.  So ?
>If you derive the result from something true, that's a proof; if
>you derive something false from the assumption the theorem is false,
>that's also a proof.  That the theorem implies something true is
>not a proof.  I tell my students to be clear as to whether each
>line is something you know, or something you want to prove.
>
>[Terry:]
>:  This proof (2.9) looks good to me as it stands.

 I'm not at all sure that I ask for that assumption.  I do ask one
 to assume that two expressions which are identical necessarily have
 identical properties.  If one is expressed as a binomial, the
 other necessarily has a binomial distribution.

 Alternatively, from the disputed proof it is obvious that simple
 algebraic manipulations can convert 2.7 into the binomial form.
 Since this would clearly define the distribution, I fail to see
 the distinction.  In any case, including the actual algebra is
 probably overkill, in which case the proof itself is similar to
 the one given, less the initial assumption comment.
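 The algebra in question amounts to dividing the 2.7 counts by the
 total |S|**n, which gives exactly the binomial distribution with
 success probability p = (|S|-1)/|S|.  A quick numeric check of that
 identity for the binary case (my sketch, not from the article):

```python
from math import comb

def count_27(n, s, m):
    """2.7: number of n-position codes at distance m, alphabet size s."""
    return comb(n, m) * (s - 1) ** m

def binom_pmf(n, p, m):
    """Binomial(n, p) probability mass at m."""
    return comb(n, m) * p ** m * (1 - p) ** (n - m)

n, s = 8, 2                   # 8-bit binary code
p = (s - 1) / s               # p = 1/2 for binary
for m in range(n + 1):
    assert count_27(n, s, m) / s ** n == binom_pmf(n, p, m)
```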


>[...Terry:]
>: >:  4.7  THEOREM:  (Probable change propagation.)  Any change whatsoever
>: >:  to the input value to a complete invertible substitution is likely
>: >:  to change about half the bits in the output value.
>: >
>: >:       (Proof:  Changing the input value selects among all remaining
>: >:       output code values.  If the output is considered to be binary
>: >:       bits, we expect about half those bits to change (2.9).)
>: >
>[Bryan:]
>: >Not true.  The identity function is a complete invertible permutation.
>: >I can give you many input changes which result in significantly less than
>: >half the output bits changing.
>: >
>: >You need some idea of probability here.
>
>[Terry:]
>:  I imply that in "expect."  I take the expectation to be the
>:  arithmetic average over all possibilities, each one taken once.
>
>You don't even say "expect" in the theorem, only the proof.  Without
>the idea of probability you can't assume all the remaining output
>codes are equally likely.

 So the point of these comments was that I should change "likely"--
 which may have an unintended meaning--to "expected"?  Fine.


>: >:  4.8  DEFINITION:  AVALANCHE is a statistical property of a discrete
>: >:  function in which any change whatsoever on the input is expected to
>: >:  produce a change in about half the bits in the output value.
>[Bryan:]
>: >
>: >This one is going to be hard to fix.
>
>[Terry:]
>:  It's a *definition*: there is nothing to fix.
>
>It's a bad definition.  What discrete functions have this property ?

 All block ciphers.


>[Bryan:]
>: >For any permutation on n-bit
>: >strings (n>0) there exists some input change which will flip just one
>: >output bit.  Proof: for permutation P use the change:
>: >            P ; flip-first-bit ; inverse-P.
>
>[Terry:]
>:  Yes, I would say this is known.  For a complete 8-bit substitution,
>:  there are in fact eight.  See 2.8:
>
>: > 2.8  EXAMPLE:  The number of 8-bit binary codes which differ in m
>: > bits is:
>: >
>: >      distance     count
>: >            0         1
>: >            1         8
>:                        ^--------- see!
>: >            2        28
>: >            3        56
>[...]
>
>Well, here's another:
>        P; if first bit is one, flip-second-bit,
>           otherwise flip-third-bit; inverse-P.
>
>Did you count that one among your 8 ?

 Are you trying to be cute?  The 256-element set of an 8-bit code
 is almost hand-enumerable.  It is easily checked exhaustively by
 the simplest computer.  Do *you* think there are more than 8 code
 values which differ in one bit?  If so, let's see one.  Do you
 dispute the truth of 2.7 and 2.9?  Or are you just trying to
 confuse the issue?
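 The exhaustive check is a few lines on any machine.  This sketch (my
 code, not from the post) confirms that for every one of the 256
 8-bit code values, exactly 8 other values differ from it in exactly
 one bit position, which is the m=1 row of example 2.8:

```python
def one_bit_neighbors(origin, bits=8):
    """All code values at Hamming distance exactly 1 from origin."""
    return [v for v in range(1 << bits)
            if bin(origin ^ v).count("1") == 1]

# Exhaustive over the whole 256-element code space:
for origin in range(256):
    assert len(one_bit_neighbors(origin)) == 8
```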


>(Hmm, given a permutation P(x) on (n-bit) codewords, how many
>derangements D(x) of the codewords have the property that for all
>codewords x, P(x) differs from P(D(x)) in exactly one of the n bit
>positions ? )
>
>Maybe you better tell us exactly what is in the set of any changes.

 The set of changes is the set of all possible code values less
 the current code value.


>More to the point, if there is always some change induces a one-bit change
>in the output of P, then no P on codewords of more than 2 bits can fulfill
>your definition of having the avalanche property.

 Oh, they fulfill *my* definition.  Perhaps I have not communicated
 that definition to you, or perhaps you would rather not see it.

 Avalanche is necessarily defined over all possible input changes,
 for any possible original value, and all possible results.


>You admit that 4.7 is talking about the average over the set of all
>permutations of n-bit codewords.  You can't apply it to an arbitrary
>one.

 Actually, the 2**n - 1 possible *changes*.  Which is why I use
 "about."

 Probably the way to formalize this is to prove the distribution
 for the changes instead of all possibilities, or show that the
 difference is on the order of 1/2**n.  This should not be too
 hard.  That is, the distribution of changes is all 2**n possible
 code values less a single code value which is the origin of those
 changes.  Every code value but one is a change from any
 particular code value.  Over all possible origin code values,
 each resulting code value occurs the same number of times.
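 That balance claim is also directly checkable.  In this sketch (my
 illustration) each origin admits 2**n - 1 changes, one to every
 other code value; tallying over all origins, every code value occurs
 exactly 2**n - 1 times as the result of some change:

```python
n = 4
total = 1 << n
tally = [0] * total
for origin in range(total):
    for result in range(total):
        if result != origin:        # every value but the origin is a change
            tally[result] += 1

# Each code value is reached the same number of times: 2**n - 1.
assert all(t == total - 1 for t in tally)
```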


>Well, if you want to know how much information you need to determine
>one permutation on the set of n-bit codewords, given that all such
>permutations are equally likely, it's approximately:
>
>                n * 2**n  bits.

 No.  I want to state that in order to define even one element of
 a substitution one must have some information which correlates
 input with output.  This is actually an interpretation of the
 definition, but is one basis for an implication of "strength."
 In the case of Fenced DES, if we assume that the internal block
 cipher is "strong," we can then conclude that there will never
 be information linking input and output values, so even small,
 simple permutations cannot be developed except by exhaustion.


>Don't take the large size of this number as good news.  It means that
>when we design ciphers, we have to deal with that very small subset of
>permutations which can be specified in a reasonable amount of space.
>Numbers we get from averaging over all permutations may not apply to
>these.

 I do in fact deal with the set of all 256-element permutations
 which can be developed by shuffling using a keyed cryptographic
 random number generator.  True, the size of that RNG is only 992
 bits, 2**992 instead of 256!**32 (for the 32 substitutions in
 4x Fenced DES).  If you were a cryptanalyst, would you take this
 as good news?  I can use a larger RNG . . . .

 It is necessary, of course, that substitutions not be related for
 the purpose of cryptanalysis.  Accordingly, the first action of
 the shuffling system is to shuffle a list of substitutions, and
 then shuffle the substitutions in that order.  This is, by itself,
 a 32! or 117-bit keyspace, before we get to an exploitation of any
 relation between actual substitutions.
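 The construction described above can be sketched as follows.  The
 function name and the use of Python's `random.Random` as a stand-in
 RNG are my own illustration; Ritter's design uses a keyed
 *cryptographic* RNG.  A key seeds the generator, which Fisher-Yates
 shuffles the identity table into one 256-element substitution; the
 sketch also checks the stated "32! or 117-bit" figure:

```python
import math
import random

def keyed_substitution(key, size=256):
    """Build one invertible substitution table by key-driven shuffle.
    (random.Random is a stand-in for a keyed cryptographic RNG.)"""
    rng = random.Random(key)
    table = list(range(size))
    for i in range(size - 1, 0, -1):   # Fisher-Yates shuffle
        j = rng.randrange(i + 1)
        table[i], table[j] = table[j], table[i]
    return table

sbox = keyed_substitution(key=12345)
assert sorted(sbox) == list(range(256))       # a permutation: invertible

# 32! orderings of the substitutions is about a 117-bit keyspace:
assert int(math.log2(math.factorial(32))) == 117
```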


>[...Terry]
>: >:  5.2  THEOREM:  An invertible substitution is a bit-mixer.
>[Bryan:]
>: >
>: >May or may not be.
>
>[Terry:]
>:   >5.1  DEFINITION:  A BIT-MIXER combines multiple input bits such
>:   >that each output value is defined by each and every input bit.
>
>:  OK, you might like "function of" or a more explicit rendering.
>:  Still, the point is clear.  The output value from a substitution
>:  is "dependent upon" or "a function of" each and every input bit.

< deleted by Bryan; re-introduced by Terry; notation by Terry >
>[:] However, upon reflection, I think the definition is insufficient
>[:] to capture my real intent, so this is something I will have to
>[:] work on.  Note that the entire purpose for this line of reasoning
>[:] is reflected in the comment of 12.2:
>[:]
>[:]>      (Comment:  DES with a known key is an example of a block
>[:]>      mixing transform with absolutely no strength at all by itself,
>[:]>      which nevertheless adds strength through bit mixing.)
>[:]
>[:] which is significant because of previous questions of whether a
>[:] mixer without any "strength" can have any cryptographic advantage.
>[:] This development also justifies the examination of Block Mixing
>[:] Transforms which have no "strength" but nevertheless mix.

 In other words, these particular results, while interesting or
 even significant, have very little effect on the Fenced DES
 conclusions.  They may affect how one sees the design of block
 ciphers, however.


>Are you saying that ALL invertable permutations are bit mixers ?

 Yes.


>That there exists some invertable permutation which is a bit mixer ?

 Yes.


>That most invertable permutations are bit mixers ?

 Yes.


 I think I need to work on the definition to make it clearer what
 bit-mixing does.  But, over all possible input values, the simple
 change of one input bit position will produce avalanche output.
 Moreover, over all possible input values, for any possible
 invertible substitution, each input bit position is balanced,
 producing the same number of changes in each result bit.

 It is reasonable to compare this property to exclusive-OR, or half
 of a Block Mixing Transform.  Maybe there is a generalization of
 the mixing property.
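 The averaging claim behind 4.7 holds for *any* invertible
 substitution, not just well-mixed ones.  This sketch (my
 illustration) averages the output Hamming distance over every input
 value and every possible nonzero input change; because the changed
 outputs run over every code value except the current one, the mean
 is exactly n * 2**(n-1) / (2**n - 1), i.e. "about half" of n, for
 an arbitrary permutation:

```python
import random

n = 6
size = 1 << n
perm = list(range(size))
random.shuffle(perm)              # an arbitrary invertible substitution

total_flips = 0
pairs = 0
for x in range(size):
    for dx in range(1, size):     # every possible nonzero input change
        total_flips += bin(perm[x] ^ perm[x ^ dx]).count("1")
        pairs += 1

mean = total_flips / pairs
assert abs(mean - n * (size // 2) / (size - 1)) < 1e-9
```

 Note that for any single change (the identity permutation's one-bit
 flips, say) the distance can be far from n/2; the n/2 figure is the
 average over the full change distribution, which is Terry's point.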


>[...Terry:]
>:  My principle assumption is that an invertible substitution is the
>:  ideal block cipher, and this is consistent with the usual analysis.
>:  If an ideal block cipher is an invertible substitution, it must
>:  behave like an invertible substitution, and have the same bit-change
>:  distribution.  And, to the extent that the real block cipher
>:  approaches the ideal, it must behave in the same way.
>
>Perhaps most of the invertable substitutions on n-bit codewords for a
>given blocksize n (say, greater than 64) are good block ciphers in the
>sense that they are secure, but most are terrible because you can't
>write the program that computes them in any reasonable amount of
>space.

 Which is exactly the point.  I claim that Fenced DES is a design
 for producing efficient mechanisms (e.g., writing programs) which
 model a very large, invertible, key-selected substitution.  I claim
 that such a substitution is the ideal model of any possible block
 cipher.  I also claim that a mechanism based on the Fenced DES
 design can compute enough substitutions to have a large-enough
 keyspace to be cryptographically effective.

 ---
 Terry Ritter   ritter@io.com