Newsgroups: sci.crypt
Path: cactus.org!cs.utexas.edu!uunet!mnemosyne.cs.du.edu!nyx10!colin
From: colin@nyx10.cs.du.edu (Colin Plumb)

Subject: Re: Block Mixing Transformations
Message-ID: <1994Mar15.124011.19463@mnemosyne.cs.du.edu>
X-Disclaimer: Nyx is a public access Unix system run by the University
              of Denver for the Denver community.  The University has neither
              control over nor responsibility for the opinions of users.
Keywords: DES replacement, Large blocks
Sender: usenet@mnemosyne.cs.du.edu (netnews admin account)
Organization: Nyx, Public Access Unix at U. of Denver Math/CS dept.
References: <1994Mar13.051515.27175@cactus.org> <1994Mar14.223702.6452@mnemosyne.cs.du.edu>
            <1994Mar15.065615.3895@cactus.org>
Date: Tue, 15 Mar 94 12:40:11 GMT
Lines: 123

In article <1994Mar15.065615.3895@cactus.org>,
Terry Ritter  wrote:
>
>
> In: <1994Mar14.223702.6452@mnemosyne.cs.du.edu> colin@nyx10.cs.du.edu
> (Colin Plumb) comments on block mixing transforms:

>>From A+B, it is easy to obtain most of T, save only the term of p which
>>may be added in the modular reduction.
>>
>>Because this depends on the high
>>bit of A+B, it is also easy to see when this has been added.
>
> Well, the original polynomial reduction (when p is added) is
> conditional upon the shifted-out-and-lost upper bit of T.
> It is not there (not in the output) to see.

Yes it is.  T is A^B, which is the same as X^Y.  The high bit is
highly visible.
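
To make this concrete, here is a scaled-down sketch in C of the mixer
as I understand it from the earlier articles, using 8-bit halves and
p(x) = x^8 + x^4 + x^3 + x + 1 purely for illustration (the real thing
runs on full block halves with its own polynomial):

/* Toy mixer: X = 3A + 2B, Y = 2A + 3B in GF(2)[x], reduced mod p.
 * The 8-bit width and this particular p are mine, for illustration.
 */
#include <stdio.h>

typedef unsigned char byte;

void mix(byte a, byte b, byte *x, byte *y)
{
    byte t  = a ^ b;              /* T = A + B                */
    byte t2 = (byte)(t << 1);     /* 2T, before reduction     */
    if (t & 0x80)                 /* high bit of T...         */
        t2 ^= 0x1b;               /* ...is what drags in p    */
    *x = a ^ t2;                  /* X = A + 2T               */
    *y = b ^ t2;                  /* Y = B + 2T               */
}

int main(void)
{
    byte a = 0x53, b = 0xca, x, y;
    mix(a, b, &x, &y);
    /* The point at issue: X^Y == A^B == T, so the bit that decided
     * whether p got added is sitting right there in the output.   */
    printf("A^B = %02x  X^Y = %02x\n", a ^ b, x ^ y);
    return 0;
}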

> Certainly we can expect to form the inverse T (say, T') given
> both X and Y.  Then, if we had access to A and B, we could see
> what p was when it was added.  Hopefully, we will not have that
> access.  I believe I made that requirement clear:
>
>>> It is crucial to remember that these simple, high-speed, but linear
>>> mixing transforms can be said to have "strength" only if the input
>>> and output values are never both available.  That is, these
>>> structures do not by themselves handle "known-plaintext" attack.
>>> (Of course, the same could be said for many other simple internal
>>> mechanisms used in block cipher construction.)

And I observed that half the bits of the input are trivially derivable
from the output (A^B = X^Y), that the other half are also trivially
derivable half the time (and you know which half!), and that they are
almost as easily derived the other half of the time.
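
In the same toy setting (same illustrative 8-bit polynomial as in the
sketch above), undoing the mix from the outputs alone looks like this;
when the top bit of X^Y is clear -- half the time -- p never enters
into it at all:

#include <stdio.h>

typedef unsigned char byte;

/* Recover A and B from X and Y alone.  T = X^Y = A^B, and the top
 * bit of T says whether p was folded in during the forward mix.   */
void unmix(byte x, byte y, byte *a, byte *b)
{
    byte t  = x ^ y;              /* equals A ^ B as well       */
    byte t2 = (byte)(t << 1);
    if (t & 0x80)                 /* only here do you need p    */
        t2 ^= 0x1b;
    *a = x ^ t2;
    *b = y ^ t2;
}

int main(void)
{
    byte a, b;
    unmix(0x7a, 0xe3, &a, &b);    /* the outputs from the sketch above */
    printf("A = %02x  B = %02x\n", a, b);     /* prints 53 and ca */
    return 0;
}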
>
> In many (most?) cases A and B should already be randomized by the
> time they get to the mixer being attacked.  Without "exploitable
> patterns," finding p will get a whole lot trickier.

If that's the case, almost any mixing pattern will suffice.

> It certainly is true that the structure is weak.  Since it is weak,
> it must be protected.  But when the structure is protected, finding
> p becomes difficult.
>
> Anyway, I claim the real issue is: "Does it mix well?"  The answer
> may be: "Not all that well, but we can afford to do a lot of it."

> One of the examples in the original article showed 61 mixing
> operations for a single 256-bit block operation.  Given that
> the mixing structure is so weak, perhaps a better approach
> would be:
>
>    S S S S S S S S S S S S S S S S S S S S S S S S S S S S S S S S
>    mix mix mix mix mix mix mix mix mix mix mix mix mix mix mix mix
>    --mix-- --mix-- --mix-- --mix-- --mix-- --mix-- --mix-- --mix--
>    ------mix------ ------mix------ ------mix------ ------mix------
>    --------------mix-------------- --------------mix--------------
>    ------------------------------mix------------------------------
>    --------------mix-------------- --------------mix--------------
>    ------mix------ ------mix------ ------mix------ ------mix------
>    --mix-- --mix-- --mix-- --mix-- --mix-- --mix-- --mix-- --mix--
>    mix mix mix mix mix mix mix mix mix mix mix mix mix mix mix mix
>    S S S S S S S S S S S S S S S S S S S S S S S S S S S S S S S S
>    mix mix mix mix mix mix mix mix mix mix mix mix mix mix mix mix
>    --mix-- --mix-- --mix-- --mix-- --mix-- --mix-- --mix-- --mix--
>    ------mix------ ------mix------ ------mix------ ------mix------
>    --------------mix-------------- --------------mix--------------
>    ------------------------------mix------------------------------
>    --------------mix-------------- --------------mix--------------
>    ------mix------ ------mix------ ------mix------ ------mix------
>    --mix-- --mix-- --mix-- --mix-- --mix-- --mix-- --mix-- --mix--
>    mix mix mix mix mix mix mix mix mix mix mix mix mix mix mix mix
>    S S S S S S S S S S S S S S S S S S S S S S S S S S S S S S S S
>
> This is 122 total mixing operations, but they are O(n) so the
> cost is something like 4 * 18 * 128 = 9216 (and probably less).

My observation about the extremely high degree of linearity in the
operation was that anything made up only of these mixing operations
is weak.  I'm not sure the above is more trustworthy than the
substitutions.
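
The linearity is easy to check numerically in the toy setting from the
sketches above: XOR two input pairs together and the outputs XOR the
same way, and that stays true no matter how many layers of mixing you
pile on top:

#include <stdio.h>

typedef unsigned char byte;

/* Same toy mixer as before (8-bit halves, illustrative p). */
void mix(byte a, byte b, byte *x, byte *y)
{
    byte t = a ^ b, t2 = (byte)(t << 1);
    if (t & 0x80)
        t2 ^= 0x1b;
    *x = a ^ t2;
    *y = b ^ t2;
}

int main(void)
{
    byte x1, y1, x2, y2, x3, y3;
    mix(0x12, 0x34, &x1, &y1);
    mix(0x56, 0x78, &x2, &y2);
    mix(0x12 ^ 0x56, 0x34 ^ 0x78, &x3, &y3);
    /* Linearity over GF(2): the mix of the XORs equals the XOR of
     * the mixes, so XOR differences walk straight through any
     * network built from nothing but these mixers.               */
    printf("%s\n", (x3 == (x1 ^ x2) && y3 == (y1 ^ y2)) ? "linear" : "not linear");
    return 0;
}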
>
>   levels * mixings-per-level * cost-per-mixer * size
>
>    2 *  1 * 4 * 128 +
>    4 *  2 * 4 *  64 +
>    4 *  4 * 4 *  32 +
>    4 *  8 * 4 *  16 +
>    4 * 16 * 4 *   8 =  9216

Oh!  You were using super-wide mixers.  I was assuming you were using
a fixed-width mixer, in FFT butterfly style:

_        _
 \ /\   /
  *  \ /
_/ \  *  _
    \/ \/
_   /\ /\_
 \ /  *  
  *  / \
_/ \/   \_
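
In other words, something along these lines -- a sketch only, with the
toy 8-bit mixer and an 8-byte block standing in for the real widths;
the butterfly arrangement is the point:

#include <stdio.h>

#define N 8                       /* block of N bytes, N a power of 2 */

typedef unsigned char byte;

/* Fixed-width two-input mixer (same toy one as before). */
static void mix(byte *a, byte *b)
{
    byte t = *a ^ *b, t2 = (byte)(t << 1);
    if (t & 0x80)
        t2 ^= 0x1b;
    *a ^= t2;
    *b ^= t2;
}

/* lg(N) passes, stride doubling each pass, so every byte ends up
 * feeding every other -- the FFT butterfly pattern.              */
void butterfly_mix(byte blk[N])
{
    int stride, i, j;
    for (stride = 1; stride < N; stride <<= 1)
        for (i = 0; i < N; i += 2 * stride)
            for (j = i; j < i + stride; j++)
                mix(&blk[j], &blk[j + stride]);
}

int main(void)
{
    byte blk[N] = { 1, 0, 0, 0, 0, 0, 0, 0 };
    int i;
    butterfly_mix(blk);           /* one nonzero byte spreads to all */
    for (i = 0; i < N; i++)
        printf("%02x ", blk[i]);
    printf("\n");
    return 0;
}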

> If we use the IDEA-derived mixer we have:
>
>   levels * mixings-per-level * cost-per-mixer * size^2
>
>    2 *  1 * 2 * 128^2 +
>    4 *  2 * 2 *  64^2 +
>    4 *  4 * 2 *  32^2 +
>    4 *  8 * 2 *  16^2 +
>    4 * 16 * 2 *   8^2 =  188,416
>
> or over 20 times the computation of the simple approach.
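
(Checking the arithmetic, those terms come to 65,536 + 65,536 + 32,768
+ 16,384 + 8,192 = 188,416, against 1,024 + 2,048 + 2,048 + 2,048 +
2,048 = 9,216 for the simple mixers -- a factor of about 20.4.)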

I see your idea now.  Changing the widths like that will require a bit
more thinking.  The preservation of A^B = X^Y still applies, but the
substitutions make it rather more complex.
-- 
	-Colin