Path: msuinfo!caen!uwm.edu!cs.utexas.edu!not-for-mail
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Algorithms
Date: 11 Nov 1994 22:21:39 -0600
Organization: UTexas Mail-to-News Gateway
Lines: 189
Sender: nobody@cs.utexas.edu
Message-ID: <199411120422.WAA04049@pentagon.io.com>
NNTP-Posting-Host: news.cs.utexas.edu
In <39n23d$loq@net.auckland.ac.nz> egendorf@mail1.sas.upenn.edu
(Robert Egendorf) writes:
> What algorithms besides IDEA and 3xDES are extremely
>difficult to break?
First of all, we do *not* know that either IDEA or Triple-DES
*is* difficult to break (that is, for everybody).
In particular, IDEA has an internal structure which I would term
"almost linear." While not strictly linear, it may be linear
enough to work on, and so almost begs to be broken. Alas, this
is not likely to be something one does in the odd afternoon. Any
serious attack on any serious cipher is likely to be an arduous
process involving a great deal of time and lots of trial and error.
Success, while fun, is not very profitable. I wish I had the time
to throw at it.
All block ciphers are relatively simple mechanisms which attempt
to simulate a huge substitution table; this is no more than Simple
Substitution on a large data value. I expect that no simple machine
can do this perfectly well. Therefore, we might well have a
situation where massive statistical exploration (from a new
generation of computers) could lead to key- or data-bit
correlations which have not yet been discovered.
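The "huge substitution table" view can be made concrete at a toy
scale. The sketch below uses a 4-bit block so the whole table fits in
memory; it is only an illustration of the abstraction, not a usable
cipher, and the key-to-table derivation is a placeholder:

```python
import random

# Toy illustration: a block cipher on an n-bit block is, abstractly,
# a keyed Simple Substitution on 2**n values.  A real cipher uses a
# 64-bit (or wider) block, whose 2**64-entry table is far too large
# to store -- hence the cipher must *simulate* it with a mechanism.
BLOCK_BITS = 4

def keyed_table(key):
    """Derive a substitution table (a permutation of 0..15) from a key."""
    rng = random.Random(key)          # placeholder key schedule
    table = list(range(2 ** BLOCK_BITS))
    rng.shuffle(table)
    return table

enc = keyed_table(key=12345)
dec = [0] * len(enc)
for plain, cipher in enumerate(enc):
    dec[cipher] = plain               # invert the permutation

assert all(dec[enc[p]] == p for p in range(2 ** BLOCK_BITS))
```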
We commonly assume that the keysize of Triple DES is at least twice
as long (has 2**56 times as many keys) as ordinary DES, and that may
be. But consider the simple newspaper amusement ciphers, which are
also Simple Substitution: Clearly, even three sequential cipherings
through different alphabet permutations will be no harder to solve
than one. (In this case we do not solve for each of the keys
independently, but this probably does not matter.)
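The point about three sequential cipherings can be checked directly:
the composition of three alphabet permutations is itself a single
permutation of the alphabet, so the ciphertext statistics are exactly
those of one Simple Substitution. A small sketch:

```python
import random
import string

rng = random.Random(1)

def random_substitution():
    """One Simple Substitution alphabet, keyed by the shared RNG."""
    shuffled = list(string.ascii_lowercase)
    rng.shuffle(shuffled)
    return dict(zip(string.ascii_lowercase, shuffled))

p1, p2, p3 = (random_substitution() for _ in range(3))

# Three sequential cipherings...
composed = {c: p3[p2[p1[c]]] for c in string.ascii_lowercase}

# ...collapse into one Simple Substitution: the composition is again
# a permutation of the alphabet, no harder to solve by letter
# frequencies than a single ciphering.
assert sorted(composed.values()) == list(string.ascii_lowercase)
```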
Now, in a real block cipher, CBC mode will use the data space
better, so the character-frequency attack on the newspaper ciphers
will not work. Nevertheless, this is one example of a real attack
which is *not* complicated by additional levels of ciphering. I
have no reason to believe that it is the only such attack. Thus we
do *not* know that Triple DES is any stronger than DES itself.
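The CBC point can be seen in miniature: each ciphertext block is fed
into the enciphering of the next, so identical plaintext blocks yield
different ciphertext blocks, which is what defeats simple frequency
counting. The block "cipher" E below is a toy stand-in, not any real
cipher:

```python
# CBC chaining: C_i = E(P_i XOR C_{i-1}).

def E(block):
    # Toy stand-in for a keyed block cipher: an affine bijection on
    # one byte (7 is odd, hence invertible mod 256).  Illustrative only.
    return (block * 7 + 3) % 256

def cbc_encrypt(blocks, iv):
    out, prev = [], iv
    for p in blocks:
        c = E(p ^ prev)     # chain the previous ciphertext block in
        out.append(c)
        prev = c
    return out

# Identical plaintext blocks no longer produce identical ciphertext
# blocks, so a character-frequency attack has nothing to count:
ct = cbc_encrypt([65, 65, 65, 65], iv=0)
assert len(set(ct)) > 1
```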
>[...]
>Also, Terry
>Ritter posted something about an algorithm he developed. Has anyone
>evaluated this?
Mr. Ritter has posted the design of at least three encryption
mechanisms in detail:
1) Penknife -- a file cipher for email, with a 63-bit key plus a
random 32-bit line key on each ciphertext line.
The mechanism is basically a two-level stream cipher, using
both nonlinear Dynamic Substitution and conventional additive
combiners, with a small nonlinearized RNG. Penknife produces
only network-transportable ciphertext in ASCII lines. It can
automatically skip email headers and .sigs when deciphering, or
keep them with the deciphered message. Penknife is an example
of an error-resilient re-originating stream cipher.
2) Cloak2 -- a file cipher with a 992-bit key, plus a 992-bit
message key.
The mechanism is a two-level nonlinear Dynamic Substitution
combiner stream cipher, with 16 second-level combiners, and huge
nonlinearized RNGs. The ciphertext is binary.
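Both ciphers rest on the Dynamic Substitution combiner. The sketch
below follows the idea as described in Ritter's published articles --
combine each byte through a keyed table, then swap the just-used entry
with one selected by the RNG, so the substitution changes as it is
used. The RNG, table size, and key handling here are placeholders,
not the actual Penknife or Cloak2 internals:

```python
import random

def make_state(key):
    """Keyed table plus RNG; a stand-in for the real key schedule."""
    rng = random.Random(key)
    table = list(range(256))
    rng.shuffle(table)
    return rng, table

def ds_encrypt(data, key):
    rng, table = make_state(key)
    out = bytearray()
    for p in data:
        out.append(table[p])                  # substitute...
        r = rng.randrange(256)                # ...then the RNG picks a
        table[p], table[r] = table[r], table[p]   # partner, and we swap
    return bytes(out)

def ds_decrypt(data, key):
    rng, table = make_state(key)
    inverse = [0] * 256
    for i, v in enumerate(table):
        inverse[v] = i
    out = bytearray()
    for c in data:
        p = inverse[c]
        out.append(p)
        r = rng.randrange(256)                # same RNG sequence as
        table[p], table[r] = table[r], table[p]   # encryption
        inverse[table[p]], inverse[table[r]] = p, r
    return bytes(out)

assert ds_decrypt(ds_encrypt(b"attack at dawn", 42), 42) == b"attack at dawn"
```

Because the table is permuted after every use, equal plaintext bytes
generally produce unequal ciphertext bytes, unlike a fixed substitution.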
Both of the above are commercial secret-key ciphers currently
implemented for MSDOS (and which now function well under Microsoft
Windows). They are not exportable. But while the ciphers proper
are important, probably the major practical value of these systems
lies in extensive key-management by open alias, with the ability
to cleanly update keys without disturbing most users. Central key
management can be important to allow a business to retain access
to information for which it originally paid. And archived
ciphertext under old, replaced keys can be accessed easily.
3) Fenced DES -- currently in experimental implementation.
Intended to expand the width of DES (or other block cipher)
and thus increase overall strength, with minimal increase in
computation.
For example, quad-width Fenced DES (256-bit block) ciphers
faster (per byte) than Double-DES. This was done by innovating
a general concept of a Block Mixing Transform which reversibly
mixes data and can be efficiently scaled to virtually arbitrary
size. The fast transforms are weak, thus requiring isolation;
fencing arrays of thirty-two 8-bit keyed substitution tables, on
both the data input and the data output, provide the needed isolation.
The mathematical structure of the transforms and the
simple substitution layers means that we can make some fairly
specific statements about avalanche, a required property of
block ciphers which is normally investigated only by experiment.
Of course, the keyed nature of the fencing tables does imply a
substantial set-up delay, but also substantial added strength.
My original proposals for the wide-block-DES project (which led to
Fenced DES) were weak, ranging from not very useful, to not strong
enough. Not unexpectedly, the only real analyses which I saw were
directed against these weak approaches.
Unfortunately, there is not much to say when you don't have an
attack to discuss, so the open literature tends to accumulate
ciphers of unknown strength. Of course, since there *is* *no*
practical cipher *with* a proven strength, this is just part of
the whole cipher game.
Personally, I think if you want a block cipher with "strength," it
is crucial to have a wider block than 64 bits, especially if the
cipher will handle the massive real-time graphics of the near
future. One low-controversy approach is Richard Outerbridge's
scheme (Applied Cryptography, Fig. 8.10), although it has only a
double-width block and a computation requirement (per byte) of
three times DES.
But let's return to:
> What algorithms besides IDEA and 3xDES are extremely
>difficult to break?
The issue I want to bring out is: "How difficult do we need?"
In the first place, ciphers do not exist in a vacuum: Ciphers can
protect information, but they can only protect information which is
otherwise secret. And if the secret information can be "obtained"
(coerced, stolen, bribed, etc.) cheaper than "breaking" the cipher,
there is probably very little point in having a stronger design.
Having spent some time attacking ciphertext, I can assure you that
*I* would take virtually any other alternative.
However, it has recently occurred to me that some might believe in
the existence of "the aliens among us." OK, suppose we assume The
Grays have molecular-sized supercomputers by the liter, and so can
afford to attack ciphertext itself by key-exhaustion: this would
make most of our common ciphers useless. But we can make a system
which should be strong enough to evade even such an attack, simply
by doing what we know how to do, in grand scale:
For example, I used a 992-bit key (over 10 times as *long* as
an 80-bit "large enough" secret key) in my Cloak2 stream cipher,
because it was a reasonably clean design without too much overhead.
Since that design uses an RNG with about 37.8K bytes of state,
I could instead "easily" equip it to use honest 310,048-bit keys,
provided we are willing to generate and keep such keys. (The user
keys would be kept inside enciphered alias files, of course, but a
40K message key on a 2K message might seem a little much.)
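The arithmetic behind those figures is just the RNG state converted
to bits:

```python
# 310,048 bits of RNG state, expressed in bytes:
state_bytes = 310_048 // 8
assert state_bytes == 38_756                  # about 37.8K bytes
assert round(state_bytes / 1024, 1) == 37.8

# And the 992-bit key really is over 10 times the length of an
# 80-bit "large enough" secret key:
assert 992 / 80 > 10
```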
Note how much trouble we have doubling or quadrupling the block
size and key space in block ciphers, and the ease with which we
can expand the keyspace of a stream cipher. Generating any of
2**310,000 ciphertexts at will (instead of, say, 2**80), should
make things somewhat harder for The Grays :-).
Ciphering speed would be generally unchanged in the resulting
cipher, which is generally not the case when we try to expand a
block cipher. Of course we could use even more complex combining
systems which *would* be slower (and would also isolate the RNG
better), and we would also look longingly toward fast "really
random" hardware. But is all this really necessary?
The issue seems to be clearer for business applications: There,
a cipher need be only strong enough to prevent significant losses.
Even the loss of confidential information can be evaluated in terms
of lost contracts or lawsuit costs. There are always losses and
suits and business costs, and as long as the losses remain small,
security would seem sufficient. (Of course, one does seek to avoid
the proof that one's security has been weak.) Thus, in business,
there may be room for a range of what we would consider to be
relatively "weak" ciphers, especially if they can be made very
fast and efficient.
On the other hand, international banking transfers really huge
amounts of money around as data. While there are checks and
balances on this, it is possible to imagine some sort of terrorist
attack on international funds transfers. Presumably, the NSA has
acted to provide or certify acceptable systems to protect at least
the US banks.
We do not make every system "ultimately strong," because there *is*
no "ultimate," and, eventually, we have to decide how much strength
we want to pay for. Since we cannot *measure* the "strength" of a
cipher, we are necessarily forced into arguing what someone else
can and cannot do when they attack it. While this is not an
especially strong logical position, it is all we have.
---
Terry Ritter ritter@io.com