Nowadays, almost *anyone* can produce a practically
unbreakable "cipher," simply by repeatedly enciphering under various
different well-known ciphers. Similarly, almost *anyone* can
"cut down" or weaken a well-known design to run faster in software.
So the modern practice of cryptography is not about strength or
speed in the abstract, but instead about *tradeoffs* between
strength, speed, and other cipher features. In particular,
scalability supports experimental analysis of cipher strength which
is simply *impossible* on full-size ciphers. And flexibility can
improve system performance in ways not anticipated in typical
cipher-speed tests.

Everyone wants a practical cipher which is **proven** "absolutely
secure," but such a cipher does not exist, and probably never will.
Even the famed "unbreakable" one-time pad (OTP) is mainly a
theoretical concept: the OTP which is proven "unconditionally
secure" is **not** the realized OTP used in practice, but instead
the *theoretical* OTP which has an ideal theoretical-class random
keying source. We cannot build such a source, but even if we could,
we could not *prove* it, and **absolute proof** *is required* for
mathematical-level guarantees. None of this prevents a cipher from
being *effectively* unbreakable, of course; it just means that we
cannot expect to attain theoretical ideals in the real world.

There is **NO** theory of cipher strength such that, if we only
follow the rules, we are guaranteed a strong cipher. Nobody who
will talk can even *measure* the strength of an arbitrary cipher.
This means that cipher construction is fundamentally *art* instead
of *science,* despite the fact that strength is *argued* in
excruciating technical detail. Unfortunately, these arguments will
be quite unintelligible to the average customer or cipher user.

The real truth about strength is that even *two decades* of
mathematical analysis have **NOT** produced a particular strength
for the U.S. Data Encryption Standard (DES). Indeed, the analysis
continues to find somewhat weaker strengths associated with new and
more effective attacks. And this analysis may *never* be complete,
mainly because DES does not scale to a useful experimental size.
Since there is no constructive theory of practical strength,
experimental analysis is just about the only general tool we have
to anticipate unknown attacks. Few cipher designs scale down to
experimental size; ours do.

Our ciphers generally use substitution tables which are keyed or
constructed from among all possible tables. While random tables do
have some chance of being "weak," in tables of reasonable size that
chance is extraordinarily small: For tiny 4-bit tables like those
used in DES, about 1 percent are actually *linear* (which is weak).
But for the 8-bit tables used here, only about 1 in 10^72 (that is,
1 in 2^239) are linear, and this is about like finding a random
cipher key by chance, *twice in a row.* Our designs make each
output bit depend upon *multiple* tables, which should cover any
similar risk. A minimal sketch of keying a table appears below.
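
To make "keyed or constructed from among all possible tables"
concrete, here is a minimal sketch in Python. The key-seeded
generator (Python's random.Random) and the name keyed_table are
illustrative assumptions; the actual ciphers use their own keyed
generator and shuffling machinery.

```python
import random

def keyed_table(key, size=256):
    """Build one keyed substitution table: a permutation of 0..size-1."""
    rng = random.Random(key)   # stand-in for a cryptographic keyed generator
    table = list(range(size))
    rng.shuffle(table)         # Fisher-Yates shuffle of the identity permutation
    return table

t1 = keyed_table(b"key one")
t2 = keyed_table(b"key two")
assert sorted(t1) == list(range(256))   # still a permutation, hence invertible
print(t1[:8], t2[:8])                   # the two keys (almost certainly) differ
```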

Some other block cipher designs, like DES, use pre-defined tables,
which turn out to be the basis for modern Differential Cryptanalysis
and Linear Cryptanalysis attacks. But our ciphers *have* no fixed
tables. We expect that the unique internal structure of one or more
tables *must* somehow be exposed before our ciphers can be broken.
Our designs *hide* our table structures, while designs with
pre-defined tables simply *give* this information away.

In the VSBC design, the particular table used at each position is also selected by the data being ciphered. This means that few, if any, blocks of a message will use the same ciphering structure, which makes it hard for an Opponent to find blocks with common structure so that internal features can be attacked.

In file ciphering and similar applications, the VSBC design can add
strength **beyond** what we normally consider the cipher proper. By
ciphering a file in blocks which each have a random size, we can
hide the extent of each block. This means that The Opponent cannot
know where any particular block starts or ends, which is knowledge
*assumed* by every known block cipher attack. So by ciphering a
file as a sequence of different-size blocks, we gain an entirely new
and separate level of strength, beyond what we normally call "the
cipher," and also beyond what we would normally think of as a
strength analysis. For some details of this sort of design, see:
**Variable Size Block Cipher Designs (Realized Prototypes) (13K)**.
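
As a rough sketch of the idea (not the published VSBC construction),
both ends could derive the same schedule of pseudo-random block
sizes from keyed material; the name block_sizes and the 16..64-byte
range are illustrative assumptions.

```python
import random

def block_sizes(key, total, lo=16, hi=64):
    """Yield pseudo-random block sizes (in bytes) covering `total` bytes."""
    rng = random.Random(key)   # both ends derive the same size schedule
    remaining = total
    while remaining > 0:
        n = min(remaining, rng.randint(lo, hi))
        yield n
        remaining -= n

sizes = list(block_sizes(b"file key", 1000))
assert sum(sizes) == 1000      # the whole file is covered
print(sizes[:10])              # block boundaries are unknown without the key
```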

Yet another strength opportunity, applicable to both Mixing and VSBC
designs, is the use of a dynamic keying or homophonic field. This
is just a random value which is placed in the block along with
plaintext data. When enciphering, the homophonic value selects
among a huge number of different ciphertexts *for the exact same
plaintext.* Beyond providing additional keying, this field also
assures that The Opponent can only have *part* of the plaintext
block needed to perform known-plaintext or defined-plaintext
attacks. Again, this is strength **beyond** what we normally call
"the cipher," and also beyond what we would normally think of as a
strength analysis. This is a strength opportunity which favors
large blocks and block size flexibility.
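
A minimal sketch of the block layout, with the cipher itself
omitted; the 64-bit field width and the field's position at the
front of the block are assumptions for illustration.

```python
import os

FIELD_BYTES = 8   # assumed width of the homophonic field (64 bits)

def build_block(plaintext):
    """Prefix the plaintext with a fresh random homophonic field."""
    return os.urandom(FIELD_BYTES) + plaintext

def strip_field(deciphered_block):
    """After deciphering, the receiver simply discards the field."""
    return deciphered_block[FIELD_BYTES:]

a = build_block(b"the exact same plaintext")
b = build_block(b"the exact same plaintext")
assert a != b                             # different blocks enter the cipher
assert strip_field(a) == strip_field(b)   # yet the recovered data is identical
```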

It is easy to make a fast weak cipher, but making a fast *strong*
cipher is something else again. Most simple (and thus *fast*)
ciphers have already been tried and found wanting, at least as
general solutions. Because a cipher designer cannot *measure*
strength, it is all too easy to make a wrong design decision and
end up with a surprisingly weak cipher. As attack technology
improves, we need *more* strength -- and thus generally *more*
computation and comparatively less speed -- just to stay ahead.

On a 100 MHz processor, with 64-byte blocks, current software implementations of the Mixing cipher process about 600,000 bytes per second, while the VSBC ciphers about 1,000,000 bytes per second.
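
Expressed as cycles per byte, those figures work out roughly as
follows:

```python
clock_hz = 100_000_000          # 100 MHz processor
print(clock_hz / 600_000)       # Mixing cipher: about 167 cycles per byte
print(clock_hz / 1_000_000)     # VSBC: about 100 cycles per byte
```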

Sometimes, our ciphering computations can do double duty. For
example, a VSBC design can produce an automatic hash of up to 40
bits across all ciphered data in no extra time. If we send this
value with the ciphertext, the deciphering process should produce
exactly the same result, *provided* the message was unchanged.
This is a form of authentication which is essentially free.

Alternatively, both the Mixing *and* VSBC designs have the large
blocks which support an authentication field. This is just a random
value, known at both ends, which shows that the block has been
properly deciphered. This does reduce the amount of data in a
block, and so the effective data rate; if we use 40 bits of a
64-byte (512-bit) block, the cost is about 8 percent of throughput.
But *either* authentication approach can avoid the need for a
separate and expensive authentication scan across the data, and so
can deliver higher system throughput than that implied by cipher
timing results.
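
A minimal sketch of the in-block authentication check, assuming the
field sits at the front of the block; the placement and the function
names are illustrative.

```python
AUTH_BYTES = 5   # a 40-bit authentication field

def add_auth_field(plaintext, auth_value):
    """Place the agreed authentication value in the block before enciphering."""
    assert len(auth_value) == AUTH_BYTES
    return auth_value + plaintext

def check_auth_field(deciphered_block, auth_value):
    """After deciphering, a wrong field means damage or a wrong key."""
    if deciphered_block[:AUTH_BYTES] != auth_value:
        raise ValueError("authentication failed")
    return deciphered_block[AUTH_BYTES:]

print(40 / 512)   # 0.078125, i.e. about 8 percent of a 512-bit block
```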

Similarly, the flexibility of both the Mixing and VSBC designs may avoid the need for re-blocking the data or other processing needed with other ciphers. This also may deliver higher system throughput which will not show up in cipher timing results.

The Mixing design has a unique speed advantage *in hardware.* Even
though we *expect* hardware implementations to be fast, Mixing
ciphers are special in that most of the computation can be performed
*in parallel* in hardware. This has the potential to support
gigabyte processing rates in present technology; see: **Extreme
Hardware Speed in Large Block Mixing Ciphers** (8K).

Both Mixing and VSBC designs move as much of the computation as possible out of ciphering-time block operations and into the start-up initialization of keyed substitution tables. The amount of table storage is selected at initialization time, but 16K bytes (64 tables) might be typical.
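
The storage arithmetic behind the 16K figure:

```python
tables = 64              # keyed substitution tables
bytes_per_table = 256    # one output byte for each 8-bit input value
print(tables * bytes_per_table)   # 16384 bytes, i.e. 16K of table storage
```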

Both the Mixing and VSBC designs can directly cipher large data
blocks, and this is the opportunity to use *Electronic Codebook*
(ECB) mode instead of (say) *Cipher Block Chain* (CBC) mode: When
the block size is large enough (say, 64 bytes) that the amount of
uniqueness in a typical block exceeds a searchable amount (say, 64
bits), there may be no need to randomize the plaintext, and so no
need for chaining, and so no need for the random-like IV which
starts it off and causes the ciphertext to be larger than the
plaintext.
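
The size argument in numbers, assuming the IV must be stored or
transmitted alongside the ciphertext:

```python
sector = 512               # plaintext bytes, e.g. one disk sector
cbc_iv = 8                 # IV for a legacy 64-bit block cipher in CBC mode
print(sector + cbc_iv)     # 520 bytes to keep or send with CBC
print(sector)              # 512 bytes with one large-block ECB block, no IV
```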

Once we get rid of the need to change the IV, we can start to talk about ciphering wholly within existing data structures. Secure ciphertext can take exactly the same amount of space as the original plaintext. There is no reason to store or transmit a changed IV.

Both the Mixing and VSBC designs can directly cipher entire 512-byte disk sectors, for example, without needing to also save or somehow construct an IV for each sector.

By avoiding chaining, we also avoid problems applying the cipher to environments where blocks need to be ciphered independently. These might include low-level network ciphering (since packets may appear out of sequence), and database access.

Large blocks also provide room for authentication and dynamic keying fields. An authentication field can avoid a separate, time-consuming authentication pass across the data at a higher level. And, when needed, a dynamic keying field can reduce keying latency to zero.

Both the Mixing and VSBC designs support any type of key (text, binary, picture data, whatever), of arbitrary length. In these ciphers, there is no need for each applications designer to develop appropriate key processing for every application.
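
One way such key processing might look is to reduce arbitrary key
material to a fixed-size seed for table initialization. This is a
sketch only; the hash step below is an assumption for illustration,
not the designs' actual key processing.

```python
import hashlib

def key_to_seed(key_material):
    """Reduce arbitrary key material (text, binary, image bytes) to one seed."""
    return int.from_bytes(hashlib.sha256(key_material).digest(), "big")

print(key_to_seed("any length of text key".encode()))
print(key_to_seed(bytes(range(256)) * 100))   # arbitrary binary data works too
```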

Both designs are block ciphers which directly support modern 128-bit
blocks, legacy 64-bit blocks, and independent 64-BYTE blocks, in
*the exact same unchanged cipher.* Both of these ciphers support
blocks of *dynamically* variable size at ciphering time. The Mixing
design supports blocks of dynamically-variable power-of-2 size in
bytes (e.g., 8 bytes, 16, 32, 64, ..., 512 bytes and larger). The
VSBC design supports blocks of dynamically-variable size to the
byte, which could be ideal for variable-size database fields (when
necessary, accumulated to at least 8 bytes).

Both Mixing and VSBC designs can directly replace plaintext with ciphertext, and so cipher wholly within existing data structures. When a cipher does not expand ciphertext, ciphering can be introduced between existing hardware or software layers. In this way, ciphering can be added without changing the design of the existing system. One interesting approach might be to cipher at the data-transport level; this could avoid the need to separately re-engineer every communications application for cryptography.

Both the Mixing and VSBC designs are *scalable* ciphers. That is,
these cipher descriptions are size-independent. Not only do our
ciphers support blocks of dynamically-variable size, the designs
*also* support tiny versions with smaller tables. These tiny
versions are *models* of ciphers, rather than ciphers themselves.
While full-size ciphers can *never* be exhaustively tested, the
models *can* be approached experimentally, and any flaws we see in
them probably will be present in the full-scale versions we propose
to use.

A scalable cipher design can produce a large or small cipher
from the exact same construction rules. Just as we believe that
mathematics works the same for numbers large or small, a backdoor
cipher built from fixed construction rules must have the same sort
of backdoor, whether built large or small.
With a scalable cipher we can construct small versions which
*can* be exhaustively tested. See, for example:
**Measured Boolean Function
Nonlinearity in Mixing Cipher Constructions** (24K), and
**Measured Boolean Function
Nonlinearity in Variable Size Block Ciphers** (29K).
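
As a minimal sketch of the kind of exhaustive measurement that
scalability permits (a generic nonlinearity calculation, not the
measurement code behind the reports above), a tiny 4-bit table can
be checked against every input and every linear approximation:

```python
def nonlinearity(table, n=4):
    """Exact nonlinearity of an n-bit table, minimized over nonzero output masks."""
    size = 1 << n
    best = None
    for out_mask in range(1, size):
        # The single-output Boolean function selected by out_mask.
        f = [bin(table[x] & out_mask).count("1") & 1 for x in range(size)]
        worst = 0
        for in_mask in range(size):
            # Agreement with the linear function parity(x & in_mask).
            agree = sum(1 for x in range(size)
                        if f[x] == (bin(x & in_mask).count("1") & 1))
            worst = max(worst, abs(2 * agree - size))
        nl = (size // 2) - worst // 2
        best = nl if best is None else min(best, nl)
    return best

identity = list(range(16))
print(nonlinearity(identity))   # 0: a linear table has no nonlinearity at all
```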

Scalability does far more than just *simplify* testing:
Scalability is an enabling technology that supports experimental
analysis which is otherwise *impossible.* Because there is
no cookbook of cipher strength, the ability to actually
*measure* small versions of big ciphers can hardly be
overemphasized. We deserve *more* than an incomplete analysis
that takes two decades, and the way to *get* more is to use
scalable cipher designs.

*Last updated:* 2001 May 28