Measured Distant Avalanche in Large Block Ciphers


A Ciphers By Ritter Page


Terry Ritter


Some block ciphers handle data blocks of dynamically variable size. These are not stream ciphers, but are instead true block ciphers, in which each and every input bit affects each and every output bit in a balanced way. But in a large block, bits which are far apart must somehow produce similar effects, which exposes the possibility that mixing could be more effective for near bit positions than for far ones. We can explore possible locality in mixing by injecting data changes at different points across the plaintext data block, and counting the resulting bit-changes at multiple points across the ciphertext. Accumulated bit-change counts can be organized in a "contingency table" for a test of independence. Experiments conducted on Mixing and Variable Size Block Cipher constructions with 16-byte, 64-byte and even 512-byte (that is, 4096-bit) blocks show no evidence of mixing locality.



From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Measured Distant Avalanche in Large Block Ciphers (LONG!)
Date: Thu, 05 Mar 1998 01:24:44 GMT
Lines: 403
Message-ID: <34fdfea7.1999205@news.io.com>


MEASURED DISTANT AVALANCHE IN LARGE BLOCK CIPHERS


Terry Ritter   ritter@io.com
Ritter Software Engineering
http://www.io.com/~ritter/

1998-03-04  (Army Day)



ABSTRACT

Some block ciphers handle data blocks of dynamically variable size.
These are not stream ciphers, but are instead true block ciphers,
in which each and every input bit affects each and every output bit
in a balanced way.  But in a large block, bits which are far apart
must somehow produce similar effects, which exposes the possibility
that mixing could be more effective for near bit positions than for
far ones.  We can explore possible locality in mixing by injecting
data changes at different points across the plaintext data block,
and counting the resulting bit-changes at multiple points across
the ciphertext.  Accumulated bit-change counts can be organized
in a "contingency table" for a test of independence.  Experiments
conducted on Mixing and Variable Size Block Cipher constructions
with 16-byte, 64-byte and even 512-byte (that is, 4096-bit) blocks
show no evidence of mixing locality.


INTRODUCTION

The ideal block cipher is a keyed simple substitution table of
sufficient size.  Unfortunately, with 128-bit blocks, there would
be 2**128 entries in that table, which is completely out of the
question.  So the modern block cipher is a *construction* intended
to *simulate* a keyed substitution of the desired size.  At issue
is the effectiveness of the construction technology; in particular,
the ability of the mixing to handle distant bits equally well.

A more fundamental question might be *why* one would *want* a
large block cipher.  The short answer is that there can be real
advantages to large blocks:

   1.  Large blocks support an unsearchable amount of "uniqueness"
       or "entropy" in each plaintext block, which allows us to
       avoid randomization by block chaining.  This means we can
       use electronic code book (ECB) mode, and avoid the need to
       generate and transport (or save) an initial value (IV).
       ECB mode also supports the ciphering of independent blocks,
       both in-parallel for high speed, and out-of-order as occurs
       in packet-oriented applications.

   2.  A large block has room for information other than data.
       For example, an authentication field can avoid a separate
       authentication pass across the data at a higher level, and
       thus speed overall operation.  And a dynamic keying field
       can provide ultra-high-speed keying.  That same field also
       can be seen as a strength-producing homophonic "mode," a
       useful block cipher mode which is simply impractical in
       ciphers which only have small blocks.

Note that a cipher which supports blocks of dynamically *variable*
size can handle "legacy" 64-bit blocks, "modern" 128-bit blocks,
and even "independent" 64-byte blocks in a single unchanged program.
Thus, when large blocks are inappropriate, legacy blocks are always
available.  And when large blocks are used, we can always "step down"
the block size at the end of a message, and so limit data expansion
to that of legacy ciphers.  This means that there is little or no
"down side" to using ciphers which have a dynamically variable block
size.

Here we seek to investigate the quality of the mixing in some large
block constructions by experiment.


AVALANCHE

The term "avalanche" comes from Feistel [1], and refers to the
observed property of a block cipher constructed in "rounds" with
respect to a tiny change in the input.  The change of a single
input bit generally produces multiple bit-changes after one round,
many more bit-changes after another round, and so on, until about
half of the block will change.  An analogy is drawn to an avalanche
in snow, where a small initial effect can lead to a dramatic result.
Feistel writes:

   "As the input moves through successive layers the pattern of
   1's generated is amplified and results in an unpredictable
   avalanche.  In the end the final output will have, on average,
   half 0's and half 1's . . . ." [p.22]

In any block cipher we expect every input bit to affect every output
bit:  If we change an input bit, we expect about half of the output
bits to change, and we expect this to be independent of input or
output bit position.  This is in fact inherent in the concept of a
large invertible substitution, but when we *simulate* such a table,
it becomes necessary to somehow force the "mixing" of every input
bit into every output bit in a balanced way.  We might use the term
"distant avalanche" to refer to bit changes in a block position far
removed from the input change.
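
A minimal sketch of such a measurement, assuming the cipher under
test is available as a black-box encrypt() function over byte blocks
(the name and interface are placeholders, not any particular cipher
discussed here):

   # Count how many ciphertext bits change when one plaintext bit is flipped.
   # 'encrypt' is assumed to be a keyed block cipher over byte strings; it is
   # a placeholder for whatever cipher is under test.
   import os

   def hamming(a, b):
       """Number of differing bits between two equal-length byte strings."""
       return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

   def avalanche(encrypt, block_bytes, bit_index):
       """Flip one plaintext bit; return the count of ciphertext bits changed."""
       pt = bytearray(os.urandom(block_bytes))
       ct0 = encrypt(bytes(pt))
       pt[bit_index // 8] ^= 1 << (bit_index % 8)     # flip the chosen bit
       ct1 = encrypt(bytes(pt))
       return hamming(ct0, ct1)

   # For a well-mixed 512-byte (4096-bit) block we expect counts near 2048,
   # regardless of which input bit was flipped.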


NEW MIXING TECHNOLOGIES

Over the past decade of my full-time work in this field, I have had
the rare privilege to introduce to cryptography two new and original
forms of mixing for block cipher construction:  Balanced Block Mixing
and Variable Size Block Ciphers.  These technologies mix more data,
better, and with less effort than earlier techniques.  For example,
Mixing ciphers

   http://www.io.com/~ritter/CRYPHTML.HTM#BBMTech

have a known mixing effort of n log n for n-bit blocks of any
power-of-2 size.  Further, each input byte is known to participate
in each mixed result byte exactly once, and with the same
significance.
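
To make the n log n figure concrete, here is a small sketch of an
FFT-style pairing pattern over a power-of-2 block.  The pair_mix()
combiner is only a toy stand-in, *not* an actual Balanced Block
Mixer; the point is the pass structure:  log2(n) passes, each
pairing every byte exactly once.

   # Sketch of an n log n, FFT-style mixing pattern over a 2**k-byte block.
   # pair_mix() is a toy, invertible stand-in for a real pair mixer; only the
   # indexing pattern and the operation count are illustrated here.

   def pair_mix(a, b):
       # Placeholder two-byte combiner (a real Balanced Block Mixer would be
       # a balanced, invertible pair transform).
       return (a + b) & 0xFF, (a + 2 * b) & 0xFF

   def fft_style_mix(block):
       n = len(block)                  # must be a power of 2
       out = list(block)
       span = 1
       while span < n:                 # log2(n) passes over the block
           for base in range(0, n, 2 * span):
               for i in range(base, base + span):
                   out[i], out[i + span] = pair_mix(out[i], out[i + span])
           span *= 2
       return out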

On the other hand, VSBC's

   http://www.io.com/~ritter/CRYPHTML.HTM#VSBCTech

have a mixing effort which is *linear* in n for blocks of any size
(to the byte), so these are often faster than Mixing designs.  And
the dynamically random block sizing available in VSBC designs can
add a level of strength which is simply unavailable to *any*
fixed-width cipher.

The better quality mixing in these technologies allows diffusion and
confusion operations to be optimized and applied separately.  The
result is cipher designs which are relatively simple, thus easier
to understand and model, and which also scale down to experimental
size.  The worth of cipher *scalability* can hardly be overstated,
since only with a tiny version can we hope to approach a block cipher
experimentally in comprehensive ways.  But the sampling of mixing
locality does not require scalability and so can be applied to
even conventional designs.


PROBING LOCALITY

We wish to capture any locality of mixing present in large block
constructions.  Presumably we could try to capture the effect of
every input bit or byte on every output bit or byte, although we
would then be overwhelmed with data.  But it seems that we might
capture most of the significance of mixing locality by injecting
changes in just a few widely separated block positions, and also
probing for changes at a few selected block positions.  Surely,
if there is a problem mixing in either direction, we should find
it when we change a value at one end of the block and sense the
result at the other.

In particular, we construct a random key, and use it to initialize
the cipher tables; this gives us a particular large substitution
as an emulation.  Then we construct a random data block, save that,
encipher it, and also save the result.  We produce a random byte as
the change to be used at each change position.  Then we copy the
saved data block, XOR the random byte at the selected block position,
and encipher.  In general, we will see about half the bits change
across the block, but we only probe a few output byte positions.
For each input change position, we accumulate the number of
bit-changes seen at each output probe position over many random
data blocks.  We place the results in a contingency table, and
compute a chi-square statistic which is small when the results are
independent.  Each different random key will produce a different
analysis and a different chi-square value.
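
A sketch of that procedure, assuming the cipher under test is
exposed as an encrypt(key, block) black box (the names and interface
are placeholders):

   # Locality probe: change one plaintext byte at a time, count ciphertext bit
   # changes at a few probe byte positions, and accumulate a 4 x 4 contingency
   # table of bit-change counts.  encrypt(key, block) is a placeholder.
   import os

   CHANGE = (0, 169, 340, 511)        # plaintext byte positions to disturb
   PROBE  = (0, 169, 340, 511)        # ciphertext byte positions to watch

   def locality_table(encrypt, key, block_bytes=512, trials=1000):
       table = [[0] * len(PROBE) for _ in CHANGE]
       for _ in range(trials):
           pt  = bytearray(os.urandom(block_bytes))
           ct0 = encrypt(key, bytes(pt))
           delta = os.urandom(1)[0] or 1           # nonzero change byte
           for r, cpos in enumerate(CHANGE):
               pt2 = bytearray(pt)
               pt2[cpos] ^= delta                  # inject the change
               ct1 = encrypt(key, bytes(pt2))
               for c, ppos in enumerate(PROBE):
                   table[r][c] += bin(ct0[ppos] ^ ct1[ppos]).count("1")
       return table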


RESULTS

If there is some locality weakness in mixing, it should express
itself most clearly in measurements on large blocks.  Accordingly,
we start with 512-byte (4096-bit) blocks, noting that these are
fully 64 times as large as "legacy" blocks.  The Mixing cipher
results are given in Figure 1.


 Bit Changes in 1000 Plaintexts of 512-byte Mixing Cipher   Fig. 1

            [  0]         [169]         [340]         [511]
 [  0]   3986   0.00   3932   0.72   4087   0.32   4050   0.04
 [169]   3941   0.36   4013   0.24   4039   0.02   4049   0.06
 [340]   3984   0.01   4008   0.18   4016   0.23   4029   0.00
 [511]   3981   0.20   3952   0.00   4024   0.00   3981   0.17


In figure 1 we have a single trial of 1000 different plaintext
blocks, each processed 5 times:  first without change, then once
each with bytes 0, 169, 340, or 511 changed; these are the rows.
In each case, the number of ciphertext bits which change in byte
positions 0, 169, 340 and 511 are accumulated in separate column
elements.  For example, the value 4050 in the top row and rightmost
column means that over the 1,000 times when input byte 0 was changed,
output byte 511 had a total of 4050 bit changes.  The value 0.04 is
the chi-square contribution for that particular element.

In statistics, the structure of figure 1 is called a "contingency
table" (e.g. [2: 240]).  In many ways this is similar to the common
one-dimensional chi-square test for comparing distributions.  Here
we are asking whether the columns (probe positions across the
ciphertext block) are independent of the rows (change positions
across the plaintext block).  That is, if there is locality in
mixing, it should affect close bit positions more strongly than
distant bit positions, so we should see more changes for close bit
positions than for distant ones.  Here we are measuring the number
of output bit changes for each possible case, to see whether output
probe position is independent of input injection position.  In a
contingency table, independence is the null hypothesis.

The expected value for a contingency table element is the sum of
the values in that row, times the sum of the values in that column,
divided by the grand total of all values, so each element may have
a slightly different expected value.  The sum of the individual
chi-square contributions is then referred to the chi-square
distribution with degrees of freedom equal to the number of rows
minus one, times the number of columns minus one; here that is
3 * 3 = 9.  This is a "one-tail" statistical test, in that no value
can be too small; we are concerned only with values which repeatedly
show a tendency to be unexpectedly large.
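
In code, the expected values and chi-square total for such a table
can be computed directly (a sketch of the standard contingency-table
formula, independent of any particular cipher):

   # Chi-square test of independence for a contingency table of counts.
   # expected[r][c] = row_sum[r] * col_sum[c] / grand_total, and the statistic
   # is the sum of (observed - expected)**2 / expected, with
   # (rows - 1) * (cols - 1) degrees of freedom -- here (4-1)*(4-1) = 9.

   def chi_square_independence(table):
       row_sums = [sum(row) for row in table]
       col_sums = [sum(col) for col in zip(*table)]
       total = sum(row_sums)
       chi2 = 0.0
       for r, row in enumerate(table):
           for c, observed in enumerate(row):
               expected = row_sums[r] * col_sums[c] / total
               chi2 += (observed - expected) ** 2 / expected
       df = (len(table) - 1) * (len(table[0]) - 1)
       return chi2, df

   # Applied to the counts of figure 1, this reproduces the per-element
   # contributions shown there and a total near 2.56 with 9 degrees of freedom.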

If we sum all the individual chi-square values in figure 1, we get
a chi-square total of 2.56, which can be interpreted with figure 2
as being a very low and thus a very good value.


 Chi-Square Critical Values, for DF = 9   Fig. 2

   1%        5%       25%       50%       75%       95%       99%
2.0879    3.3251    5.8988    8.3428   11.3888   16.9190   21.6660
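
For reference, the critical values of figure 2 (and the probabilities
quoted below) can be reproduced with a standard statistics library;
a small sketch assuming SciPy is available:

   # Reproduce the DF = 9 chi-square critical values of figure 2 with SciPy.
   from scipy.stats import chi2

   for p in (0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99):
       print("%3.0f%%  %8.4f" % (100 * p, chi2.ppf(p, df=9)))

   # A measured chi-square total converts to a cumulative probability the same
   # way, e.g. chi2.cdf(2.56, df=9) is about 0.02.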


In the same way as figure 1, we can collect a set of 20 such trials,
as shown in figure 3.


 Chi-Square Totals for 512-byte Mixing Cipher   Fig. 3

    2.56    5.54    3.11    6.86    1.52
    4.64    7.01    2.24    2.34    1.77
    3.75    6.61    8.10    2.77    2.23
    7.24    3.08    5.86    3.77    4.38


Figure 3 was the first set in the sequence recorded for this
article, but it does seem rather tight, so we might take another look.
A somewhat wider range was recorded next, as shown in figure 4.


 Chi-Square Totals for 512-byte Mixing Cipher   Fig. 4

    9.00    3.87    5.41    5.31    1.22
    2.03    8.47   10.31    8.14    1.58
    3.34   11.61    3.24    4.70    7.87
    5.70    5.70    4.77    4.50    6.13


The chi-square value of 11.61 in figure 4 corresponds to a
probability of just over 0.75, and so might be said to reject the
hypothesis that rows and columns are independent at the 0.25 level
of significance.  Of course, scientific results normally require a
0.05 or 0.01 level of significance, which is never approached here.
And in cryptography we can generally afford to do a great number
of experiments and so place the level of significance even higher.
As usual, we expect high chi-square values to occur occasionally
by chance, and so concern ourselves only with repeatable and
significant indications of a problem.

VSBC results are generally similar, as shown in figure 5.


 Bit Changes in 1000 Plaintexts of 512-byte VSBC   Fig. 5

            [  0]         [169]         [340]         [511]
 [  0]   4008   0.06   3997   0.01   4006   0.03   3993   0.03
 [169]   3953   0.61   4027   0.19   4003   0.14   4059   0.52
 [340]   3959   0.00   3911   0.46   4028   0.57   3960   0.01
 [511]   4043   0.24   4018   0.02   4023   0.04   3996   0.18


In figure 5, the chi-square total is 3.11, which corresponds to
a probability of about 0.04, a very good value.  (Actually, we should
say only that the value is insufficient to cause us to reject
the hypothesis of independence.)  The full 20 trials in the first
VSBC set are shown in figure 6.


 Chi-Square Totals for 512-byte VSBC   Fig. 6

    3.11    2.18    3.15    4.50    3.67
    6.23    3.96    6.23    5.40    4.26
    4.10    1.97    3.63    2.34    4.31
    4.14    1.92    4.04    5.15    5.06


Subsequent tests on smaller blocks are remarkably similar.  The
test results on 64-byte block ciphers (still fully 8 times as large
as "legacy" blocks) with 1,000-plaintexts are shown in figures 7
and 8.  In these tests, block elements are both changed and probed
at positions 0, 20, 41, and 63.  (Probing the changed element always
provides a contrast to more distant elements, and should make it
easier to detect mixing locality effects if they exist.)


 Chi-Square Totals for 64-byte Mixing Cipher   Fig. 7

    2.53    6.24    3.09    2.15    3.86
    3.26    2.76    4.91    5.59    5.52
    5.51    5.65    3.17    0.98    3.23
    2.20    5.79    2.86   10.26    7.92


 Chi-Square Totals for 64-byte VSBC   Fig. 8

    6.31    3.55    3.32    3.70    6.65
    3.47    4.17    5.81    3.24    5.20
    2.31    3.31    5.85    3.25    3.77
    2.48    7.82    3.21    6.16    2.98


The test results on 16-byte block ciphers with 1,000 plaintexts
are shown in figures 9 and 10.  In these tests, block elements are
changed and probed at positions 0, 4, 9 and 15.


 Chi-Square Totals for 16-byte Mixing Cipher   Fig. 9

    3.63    5.27    4.76    5.45    4.97
    3.35    6.50    2.82    4.60    2.70
    4.92    2.61    2.21    7.89    4.39
   12.60    3.58    5.28    2.73    6.27


 Chi-Square Totals for 16-byte VSBC   Fig. 10

    1.84    9.69    5.30    4.13    2.02
    5.27    5.01    6.19    1.12    4.13
    4.69    7.13    6.39    4.98    5.50
    0.87    5.30    1.79    6.02    7.31


When tests are going well, the outcomes can seem pretty boring.
But make no mistake:  If these tests had *not* gone well, they
would have become very interesting very fast.  About the only
thing remaining is to see whether the distribution gets noticeably
more severe in larger trials, so we run some trials with
10,000 plaintexts in figures 11 and 12.


 Bit Changes in 10,000 Plaintexts of 512-byte Mixing Cipher   Fig. 11

            [  0]         [169]         [340]         [511]
 [  0]  39613   0.29  40010   0.16  40027   0.83  39666   0.60
 [169]  39702   0.01  40029   0.25  39780   0.10  39802   0.01
 [340]  39882   0.36  39925   0.05  39847   0.04  39829   0.03
 [511]  39734   0.00  39802   0.46  39776   0.15  40036   1.08


In figure 11 we have a chi-square total of 4.43 for a probability
of 0.119, which is fine.


 Bit Changes in 10,000 Plaintexts of 512-byte VSBC   Fig. 12

            [  0]         [169]         [340]         [511]
 [  0]  40165   0.76  39948   0.00  39876   0.34  39913   0.05
 [169]  39988   0.12  39878   0.00  39880   0.04  39871   0.01
 [340]  39836   0.77  40101   0.37  40094   0.16  39954   0.02
 [511]  40074   0.12  40011   0.25  40225   0.16  40202   0.20


In figure 12 we have a chi-square total of 3.38 for a probability
of 0.052, which is also fine.

All of these results seem very representative of the endless data
which can be collected in repeated tests.


CONCLUSIONS

These results resoundingly fail to reject the null hypothesis.
While such tests cannot prove the null hypothesis, they give us no
reason to doubt that output position is independent of input
position, with respect to bit changes, for both Mixing cipher and
Variable Size Block Cipher designs of 512-byte (4096-bit) width.

By supporting the idea that some effective large block designs can
exist, the results also support the viability of large block cipher
design in general.


REFERENCES

[1]  Feistel, H.  1973.  Cryptography and Computer Privacy.
Scientific American.  228(5): 15-23.

[2]  Huntsberger, D.  1967.  Elements of Statistical Inference.


---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary:  http://www.io.com/~ritter/GLOSSARY.HTM



From: ardell@DESPAMpipeline.com (Gary Ardell)
Newsgroups: sci.crypt
Subject: Re: Measured Distant Avalanche in Large Block Ciphers (LONG!)
Date: Fri, 06 Mar 1998 21:44:26 GMT
Lines: 17
Message-ID: <350066aa.9625060@news.pipeline.com>
References: <34fdfea7.1999205@news.io.com>

Terry,

I enjoyed your post.  Thanks.

I may be missing something obvious here but I don't see the
reasoning behind focusing on changes in individual bytes rather
than on individual bits.  Why not flip all single input bits and
check all single output bits (using your same basic sampling
approach)?

Regards,

Gary



From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Measured Distant Avalanche in Large Block Ciphers (LONG!)
Date: Fri, 06 Mar 1998 23:58:35 GMT
Lines: 70
Message-ID: <350087b5.5283891@news.io.com>
References: <34fdfea7.1999205@news.io.com> <350066aa.9625060@news.pipeline.com>


On Fri, 06 Mar 1998 21:44:26 GMT, in
<350066aa.9625060@news.pipeline.com> in sci.crypt
ardell@DESPAMpipeline.com (Gary Ardell) wrote:


>I enjoyed your post.  Thanks.

And thank *you*.


>I may be missing something obvious here but I don't see the
>reasoning behind focusing on changes in individual bytes
>rather than on individual bits.  Why not flip all single
>input bits and check all single output bits (using your same
>basic sampling approach)?

There seem to be two parts to this question:  bits vs. bytes, and all
vs. some.

With respect to "bits vs. bytes," the systems under test both have a
fundamental mixing size of 8-bits.  If we want to detect problems in
8-bit mixing, 8-bit values seem the appropriate size to address.  In
this way, we can at least hope that a problem will be easier to see in
the context of byte results, rather than a necessarily smaller
difference in 8 contiguous but otherwise unremarkable bits.

With respect to "all vs. some," the point is to achieve insight,
rather than data.  If we have already covered the worst case (the
nearest possible locality compared to the most distant locality in
both directions), it is hard to see how more data would help.  And the
amount of data could be tremendous:  Instead of having 4 * 4 = 16
counters, we would have, for the 512-byte case, 4096 * 4096 =
16,777,216 counters.  We could confine ourselves to small blocks that
we *could* test this way, but the whole point is to investigate mixing
across *very* *large* blocks.  We are necessarily limited to devising
tests that are not only revealing, but also practical.

When I began measuring large block constructions a number of years
ago, one of the first things I did was to investigate what one should
expect in terms of random bit-change totals (which everyone will now
agree should be binomial, although there was some discussion about it
at the time).  I like the binomial reference distribution because it
makes a very strong statement about the probability of a rare
occurrence.  In the context of a large block we get a relatively small
range which is reasonable, and values out of that range have a
computable probability which we can understand directly: if a
1-in-a-million event occurs twice in a few tests, we pretty well know
the story *without* a great deal of deliberation about chi-square
statistics, confidence levels, and so on.  In that sense, the test has
the potential to give a clearer and more definitive statement than a
single average value or even chi-square results of a nominally flat
distribution.  Of course we do want a wide range of tests in any
event.
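
That binomial reference is easy to make concrete; a small sketch for
a 4096-bit block, again assuming SciPy (or any similar statistics
library) is available:

   # Under the ideal model, the bit-change total for a 4096-bit block is
   # Binomial(4096, 0.5): mean 2048, standard deviation sqrt(4096 * 0.25) = 32.
   from scipy.stats import binom

   dist = binom(4096, 0.5)
   print(dist.mean(), dist.std())                 # 2048.0  32.0

   # Probability of a count at least 160 bits (5 standard deviations) away
   # from 2048 -- about 6e-7, so seeing it twice in a few tests says a lot.
   p_tail = dist.cdf(2048 - 160) + dist.sf(2048 + 160 - 1)
   print(p_tail)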

In actual large block tests using the binomial reference, I have
constructed random keys, random data, and changed each input bit, or
changed random numbers of random bit positions.  And my experience is
that, while this can give one a general feel for the cipher (and
certainly make a loud statement of significant fault), it is pretty
general.  In the case of the distant avalanche tests, I was looking to
specifically address the criticism that "large blocks are bad because
they are obviously hard to mix."  Hopefully the results are food for
thought, which is the best I could hope for anyway.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary:  http://www.io.com/~ritter/GLOSSARY.HTM



From: ardell@DESPAMpipeline.com (Gary Ardell)
Newsgroups: sci.crypt
Subject: Re: Measured Distant Avalanche in Large Block Ciphers (LONG!)
Date: Mon, 09 Mar 1998 16:10:55 GMT
Lines: 42
Message-ID: <3504131a.6688407@news.pipeline.com>
References: <34fdfea7.1999205@news.io.com> <350066aa.9625060@news.pipeline.com> <350087b5.5283891@news.io.com>

Terry,

Clearly, passing the chi-squared test on the number of flips gets at the most
fundamental question about the degree of avalanche:  "Are changes in
each byte fully propagating to all the output bytes?"  This is all you intended,
and it seems that your test accomplishes this mission, given some assumptions
about the nature of possible problems.  (For example, if cipher failures are rare
but spectacular, then the sample size might have to be huge.)

However, having passed the test, numerous pathologies still seem possible,
e.g. that output flips are not independent.  (I don't say likely.)  As you know
better than I, numerous very poor random number generators can easily pass
the chi-squared test but fail more stringent tests.  Therefore, it would seem
useful to me to exploit the rich battery of tests that have been designed to
test for weaknesses in random number generators.  For example, here is
one procedure.

Randomly draw 20 MB of plaintext (better yet, 3-DES a 20 MB file).  Call
that file PLAIN.  Encrypt that file without chaining with your test cipher and
store the results to another file, CYPERBASE.  Let's focus on input bit one
for the moment.  Make a copy of PLAIN and flip bit one in every input block.
Now encrypt this new file without chaining and XOR it against CYPERBASE.
This file should look to all the world like 20 MB of random numbers.  So,
run it through Diehard to see what, if any, failures show up.  Doing this once
for every input bit comprehensively tests the notion that there is a statistically
obvious, exploitable weakness in the cipher.  Doing this with truncated
rounds should help identify the pathologies that crop up and the tests that
best flag these problems.

(Note: Diehard may have a weakness in detecting patterns in really long blocks.
I don’t know.  If this is a worry, we can make PLAIN big enough to allow us to
pull 32-bit chunks out of each output block and still get the required 20 MB file.)
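
In code, the file-generation step might look something like this
(encrypt() and the file names are placeholders; Diehard would then
be run on the output file):

   # One pass over PLAIN: encrypt each block as-is and with one bit flipped,
   # and write the XOR of the two ciphertexts.  This folds the CYPERBASE step
   # into the same loop; encrypt() and the file names are placeholders.

   def write_difference_file(encrypt, block_bytes, bit_index,
                             plain_path="PLAIN", out_path="DIFF"):
       with open(plain_path, "rb") as fin, open(out_path, "wb") as fout:
           while True:
               block = fin.read(block_bytes)
               if len(block) < block_bytes:
                   break
               ct0 = encrypt(block)
               flipped = bytearray(block)
               flipped[bit_index // 8] ^= 1 << (bit_index % 8)
               ct1 = encrypt(bytes(flipped))
               fout.write(bytes(a ^ b for a, b in zip(ct0, ct1)))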

This seems to me like a manageable statistical protocol with the potential to
produce even more comfort in the power of the test cipher.

Yes?  No? Maybe?

Regards,

Gary



From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Measured Distant Avalanche in Large Block Ciphers (LONG!)
Date: Tue, 10 Mar 1998 20:05:03 GMT
Lines: 70
Message-ID: <35059c24.9942152@news.io.com>
References: <34fdfea7.1999205@news.io.com> <350066aa.9625060@news.pipeline.com> <350087b5.5283891@news.io.com> <3504131a.6688407@news.pipeline.com>


On Mon, 09 Mar 1998 16:10:55 GMT, in
<3504131a.6688407@news.pipeline.com> in sci.crypt
ardell@DESPAMpipeline.com (Gary Ardell) wrote:

>Terry,
>
>Clearly, passing the chi-squared test on the number of flips gets at the most
>fundamental question about the degree of avalanche:
>[...]
>
>However, having passed the test, numerous pathologies still seem possible,
>e.g. that output flips are not independent.  (I don't say likely.)  As you know
>better than I, numerous very poor random number generators can easily pass
>the chi-squared test but fail more stringent tests.  Therefore, it would seem
>useful to me to exploit the rich battery of tests that have been designed to
>test for weaknesses in random number generators.
>[...]

First, let me just state the obvious: RNG's are inherently different
than block ciphers.  When we ask whether each works right, we worry
about completely different things.  For example, RNG's generally
iterate some fixed and often small amount of internal state; block
ciphers do not.  With RNG's we worry about correlations between
adjacent or delayed values; there is nothing like this in block
ciphers.  And I expect that most RNG tests have been (explicitly or
implicitly) designed to detect RNG-style problems, rather than block
cipher style problems.


>[...]
>This file should look to all the world like 20 MB of random numbers.  So,
>run it through Diehard to see what if any failures show up.  Doing this once

Let me point out that in each of my 10,000 plaintext tests of one
512-byte cipher, we are looking at 4 * 10,000 * 512 = 20.48 MB of
output.  So we are seeing a substantial sample.

But it would be wrong to simply throw ciphertext into a file and
expect general tests to somehow derive information from the result.
The Diehard tests, for example, expect 32-bit integer values, not
512-byte blocks.  If we put in wider values, the tests lose their
meaning, and if we only take 32 bits of the ciphertext, we are not
testing what we want to test.  These tests simply are not oriented
toward block cipher testing.


>[...]
>This seems to me like a manageable statistical protocol with the potential to
>produce even more comfort in the power of the test cipher.

On the contrary, I think it is usually far more important to test for
particular qualities, based on an analysis of the system under test.
Aiming at a specific quality gives us the basis for tests which
selectively expose that quality while filtering out much of the
inevitable noise.  This seems by far the best way to find tiny
problems.

And unless we first define what we are testing for, and then test for
that, our results will not deliver much of a conclusion.  If we use a
general approach to testing, the best we could say is "we didn't find
anything," without being able to describe either what we did not find
*or* what was tested, in any real sense that relates to the mechanism
itself.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary:  http://www.io.com/~ritter/GLOSSARY.HTM



From: jsavard@teneerf.edmonton.ab.ca (John Savard)
Newsgroups: sci.crypt
Subject: Re: Measured Distant Avalanche in Large Block Ciphers (LONG!)
Date: Fri, 06 Mar 1998 22:13:34 GMT
Lines: 10
Message-ID: <35007452.8557955@news.prosurfr.com>
References: <34fdfea7.1999205@news.io.com>

ritter@io.com (Terry Ritter) wrote, in part:

>   2.  A large block has room for information other than data.

Seeing that comment, I can't resist a historical note. LUCIFER,
grandaddy of block ciphers, with its 128-bit blocks, was intended to
handle blocks containing 64 bits of data, with identifying information
and a serial counter (for randomization) in the other 64 bits.

John Savard



From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Measured Distant Avalanche in Large Block Ciphers (LONG!)
Date: Sat, 07 Mar 1998 07:12:08 GMT
Lines: 36
Message-ID: <3500f316.32703452@news.io.com>
References: <34fdfea7.1999205@news.io.com> <35007452.8557955@news.prosurfr.com>


On Fri, 06 Mar 1998 22:13:34 GMT, in
<35007452.8557955@news.prosurfr.com> in sci.crypt
jsavard@teneerf.edmonton.ab.ca (John Savard) wrote:

>ritter@io.com (Terry Ritter) wrote, in part:
>
>>   2.  A large block has room for information other than data.
>
>Seeing that comment, I can't resist a historical note. LUCIFER,
>grandaddy of block ciphers, with its 128-bit blocks, was intended to
>handle blocks containing 64 bits of data, with identifying information
>and a serial counter (for randomization) in the other 64 bits.

Thank you for reading the article.

What I call a dynamic keying field is indeed described in one of the
Feistel patents (now expired).  But it is not particularly useful when
there are only 64 bits in a block; this is a feature which *depends*
upon having something better than DES and IDEA.  Many tears are shed
when constructing standard designs, but the inability to use such a
field in DES may have been an especially bitter loss.

Surely dynamic keying will be at least *possible* with 128-bit blocks,
but a 64-bit key would be fully half of such a block.  That would
*double* ciphering overhead, which may not be acceptable in many
commercial applications.

A 64-bit dynamic key is only 1/64th of a 512-byte block, however,
which seems a far more acceptable overhead.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
The Crypto Glossary:  http://www.io.com/~ritter/GLOSSARY.HTM


Last updated: 1998-05-03