My Contributions

A Ciphers By Ritter Page

Terry Ritter

2006 Jan 1


After more than a decade of full-time work in cryptography, I am sometimes challenged to summarize my work in a form "for mere mortals." While I can give a general overview of what I have done, most of that consists of fairly detailed improvements to existing technology. Seeing each improvement requires at least a general understanding of the technical problem which that improvement solves. Even academic cryptographers may have difficulty perceiving these issues, since the improvements do not address common academic problems.



No Ciphers for Sale, Just Technology and Consulting

I offer no current cipher programs, mainly because I am not happy with current computers as a security platform. Nor is the issue just Windows, but any system which is hidden or too complex for individuals to analyze and validate. It is unlikely that any cipher system could operate securely on an insecure computer (at least not to commonly expected cryptographic levels of security). There is also no point in ciphering when the plaintext itself is fundamentally at risk. Moreover, no machine connected to The Net can be considered secure, and it seems doubtful that any firewall could be certified to cryptographic levels of assurance.

Personally, I use one computer on The Net, and a different computer for my work. The work computer is not, and never has been, connected to The Net or even any LAN. Only the work computer can support secure information, and only very limited information is passed back and forth on floppy or CDR. Having two isolated computers is one way to build substantial security, but it is awkward.

My current crypto business consists of promoting my novel technology for license, and, of course, crypto consulting.


What is Cryptography?

Cryptography is the art and science of hiding information while in storage or transit. The basic idea is to transform or encipher data into a different form (ciphertext), so that it seems to lack meaning to those who do not know how to transform it back. Normally, a cipher will support a plethora of different transformations, far too many to try them all, each selected by a different key value. The goal is to create a needle in a haystack, where the correct deciphering is just one of trillions of possible decipherings, all but one wrong.

Understanding cryptography can be difficult because it is fundamentally unlike anything else we design and build. Almost universally, we build something for an effect we can measure, and we can see how well we do. But cryptography "works" only to the extent that it keeps our information secret from opponents we do not know and who will not tell us when we fail. In fact, we can expect opponents to moan about how difficult it would be to break a cipher which they actually can break easily, to get us to use a cipher they can expose. In practice, cryptography is a contest, and the environment is deception and misdirection. That is completely different from what we normally find in science and technology, and calculated deception is something for which few are prepared.

After a decade and more in cryptography, I find myself increasingly distressed by the state of the field and many of those who supposedly represent it. Various examples exist:

  • One example is the continued and overwhelming emphasis on mathematical proof which unfortunately cannot be applied in practice by any user.
  • Another example is the continued refusal of the crypto texts (and, presumably, crypto instructors) to properly model the proven vulnerability in practice of the one time pad.
  • Yet another issue is the unmentioned reality that most ciphers in no sense have proven security, and yet are commonly referred to and seen as secure, merely on the basis of frequent use and casual academic vetting. No rational examination can show any valid basis whatsoever for a belief in the strength of our ciphers. But very few users are made aware of their risk.
  • Even a casual reading of cryptographic history should present whole sequences of examples where one side was convinced of the strength of their ciphers, and was wrong, to their great cost. Only the arrogant could possibly imagine that the same thing could not happen to us as well. These issues are deliberately hidden from both buyer and user by a conspiratorial refusal to accept and expose the actual uncertainty present in the field.
I assume that contracts often go to those willing to hide uncertainty under mathematically complex but ultimately unwarranted claims.


Cryptography in the 90's

By far the major issue in cryptography during the past couple of decades has been public key cryptography. With a conventional or secret key cipher, we use some key value to select an arbitrary transformation into ciphertext, and then use exactly the same key value to reverse the transformation. Anyone who enciphers a message also can decipher it.

In a public key cipher, there are two (related) key values: One can be made public so anyone can use that transformation, while the other key is kept secret. So anyone can encipher a message, but only the party holding the hidden private key can expose the enciphered messages.

A public key transformation is very, very slow, so most "public key" ciphers actually are hybrid ciphers. These ciphers use public key technology just to transfer one arbitrary value, which is then used as the key for a much faster secret key cipher.

One problem with public key technology is that, if someone can intercept messages in transit, and also get the far party to use a new key (quite plausible, since public keys supposedly can be freely distributed), the person "in the middle" can expose messages without breaking any cipher at all. Thus, even a public key cipher which was mathematically proven unbreakable (which is probably impossible anyway) could expose information to a man-in-the-middle attack. Only public key ciphers have that weakness, and that is not a weakness I am willing to accept.

Basically I work on automated secret-key ciphers, which, of course, are major components in most public key ciphers anyway. But the well-known state of the art has come a long, long way from its origin in classic or hand ciphering.


Background: Modern Secret-Key Ciphering

Modern automatic ciphering benefits greatly both from the availability of computing machinery, and the resulting growth of information-processing knowledge. Thus:

  • We have a machine to handle huge numbers of operations quickly and precisely, and
  • We have new understandings and new computer techniques.

Consequently, we can generalize the older concepts and eliminate simplifications and their related weaknesses, for example by:

  • Using a 256-element character set (instead of 26),
  • Selecting one of many tables at pseudorandom (instead of in rotating sequence), and
  • Transposing bits (as opposed to characters or bytes).

These things are unavailable to classic ciphers, and the result can seem daunting compared to classic cryptograms. But this is the well-known state of the art.


Old Crypto Problems / New Crypto Mechanisms

I believe I have made a few fundamental advances beyond the state of the art in cryptographic mechanisms. Since these mechanisms were intended to address fundamental problems in practical cryptography, they probably make no sense unless we first understand the problems.

  1. Stream Cipher Weakness. A conventional stream cipher is basically a keyed random number generator (RNG) whose output is mixed with plaintext data, typically using exclusive-OR, thus producing ciphertext. But cryptanalysis assumes that a large amount of plaintext and the related ciphertext (known-plaintext) is available to the opponent. Under these conditions exclusive-OR has no strength at all, and immediately exposes the RNG sequence. Then, knowing both the RNG design and RNG output, the opponents have a good chance at reconstructing the internal state of the RNG, and that will break the cipher.
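
    To see how little exclusive-OR resists known plaintext, here is a minimal Python sketch (toy values only): XORing the known plaintext with the matching ciphertext returns the confusion sequence directly.

      # Known-plaintext exposure of an exclusive-OR combiner: plaintext XOR
      # ciphertext yields the keystream, byte for byte.
      def xor_combine(keystream, data):
          return bytes(k ^ d for k, d in zip(keystream, data))

      keystream  = bytes([0x3A, 0x91, 0x5C, 0xE7, 0x08])  # secret RNG output
      plaintext  = b"HELLO"                               # known to the opponent
      ciphertext = xor_combine(keystream, plaintext)

      recovered = xor_combine(plaintext, ciphertext)      # opponent's computation
      assert recovered == keystream                       # RNG sequence exposed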

    To address the problem of stream cipher weakness, academic cryptography created ever more complex RNG designs. Since each RNG is just some computer state which is combined or mixed in some way to produce a new state value, the obvious idea was to have a multiplicity of RNG's, and mix them together in some way to gain exponential strength. Surprisingly, that almost never works (see: The Story of Combiner Correlation: A Literature Survey).

    My take on the problem was different: I saw the negative outcome being the result of both exclusive-OR weakness and RNG weakness. Since RNG strength was being addressed, my response was to look at the combining process, to see if that could be improved. At the time (around 1988), it was unclear whether unbiased, reversible, nonlinear combining could even exist. Ultimately, I called my answer Dynamic Substitution.

    Dynamic Substitution can be seen as a modified form of Simple Substitution to replace the conventional exclusive-OR stream cipher combiner. In Dynamic Substitution, each current plaintext character is substituted through a table and becomes ciphertext as usual, but then the just-used substitution element is exchanged with some other element, as selected by the RNG sequence; that exchange is the novel step. In this way, the more often a particular transformation is used, the more often it changes.

    Dynamic Substitution is a keyed, nonlinear combiner with state. In addition to simply replacing exclusive-OR, it can be used in a sequence of combinings, or in a selection among different combiners, neither of which makes sense with exclusive-OR. Dynamic Substitution thus also enables various architectures which previously would have been pointless.

    Also see the Dynamic Substitution section on my home page.
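
    As a minimal sketch of the Dynamic Substitution idea (a toy table, with Python's random module standing in for a keyed cryptographic RNG; an illustration, not the published design):

      # Dynamic Substitution sketch: substitute each plaintext byte through a
      # keyed table, then exchange the just-used table entry with another
      # entry chosen by the confusion RNG, so the table changes as it is used.
      import random

      rng = random.Random(12345)        # stand-in for a keyed cryptographic RNG
      table = list(range(256))
      rng.shuffle(table)                # keyed initial table arrangement

      def dynsub_encipher(plaintext):
          out = bytearray()
          for p in plaintext:
              out.append(table[p])                       # normal substitution
              j = rng.randrange(256)                     # RNG picks another entry
              table[p], table[j] = table[j], table[p]    # exchange just-used entry
          return bytes(out)

      ciphertext = dynsub_encipher(b"ATTACK AT DAWN")

    The decipherer, holding the same key, generates the same RNG sequence and makes the same exchanges (while tracking the inverse table), so the combining reverses exactly.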

  2. Dynamic Transposition Possibilities. Once we have a Dynamic Substitution, that raises the question of whether a Dynamic Transposition can exist as well. The answer is: "Yes."

    Dynamic Transposition has two parts:

    1. The construction of a data block with an equal number of 1's and 0's (this can be done fairly easily, for arbitrary data, with relatively low expansion).
    2. The use of keyed bit-shuffling to select any one of the possible bit-permutations.
    Each block of a message is shuffled independently.
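
    A minimal sketch of the shuffling step, assuming the block has already been bit-balanced (again, Python's random module stands in for a keyed cryptographic RNG):

      # Dynamic Transposition sketch: permute the bits of a balanced block
      # with a keyed Fisher-Yates shuffle; each block gets its own permutation.
      import random

      def shuffle_bits(bits, rng):
          bits = list(bits)
          for i in range(len(bits) - 1, 0, -1):   # Fisher-Yates / Durstenfeld
              j = rng.randrange(i + 1)            # position chosen by keyed RNG
              bits[i], bits[j] = bits[j], bits[i]
          return bits

      rng = random.Random(99)                     # stand-in for the keyed RNG
      block = [1, 0, 1, 0, 1, 0, 1, 0]            # balanced: four 1's, four 0's
      ciphertext_bits = shuffle_bits(block, rng)

    Deciphering applies the inverse permutation, generated from the same keyed RNG sequence.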

    The strength of the technique is in the fact that the ciphertext result could have been produced by a plethora of different shuffling permutations, even given the exact same plaintext. Known plaintext thus does not disclose the keyed shuffling sequence, which is what we are hiding. Dynamic Transposition is thus largely immune to conventional known plaintext attack in a way that exclusive-OR simply cannot be.

    Dynamic Transposition also offers a theoretical clarity which cannot exist in conventional block ciphers: Conventional block ciphers cannot begin to key-select from among every possible huge emulated simple substitution table because there are far, far too many potential tables, making a key which could select any of them far too large to use. In contrast, Dynamic Transposition can produce any possible bit-permutation, given only an RNG with large enough state. There is thus no concern that some particular pre-selected set of transformations might have statistical properties useful to opponents.

    I see Dynamic Transposition as a useful basis for provable strength in practice (although I have no such proof). See the 1991 Cryptologia article Transposition Cipher with Pseudo-Random Shuffling: The Dynamic Transposition Combiner, and the 2001 sci.crypt article clarifying an earlier discussion: Dynamic Transposition Revisited Again.

  3. Block Cipher Construction. In my view, most block ciphers have a remarkably ad hoc construction, given their academic origins. In particular, I see the use of rounds as testimony that the desired confusion and diffusion did not occur the first time the operation was run. But simply doing something again and again until the result is somehow good enough is no substitute for doing it right in the first place.

    A conventional block cipher is nothing more than an emulated huge substitution table. The substitution must be computed as needed because such a table must be far too large to construct and store completely. In my view, what was needed was a construction technique which could connect keyed and randomly-constructed modest-size tables to simulate a larger-size table. If we could do that efficiently, we could repeat it using the emulated tables, to get even larger tables, and thus eventually support huge block sizes. (Huge blocks reduce the probability that the same plaintext block will re-occur, thus perhaps avoiding the need for an initialization vector and thus allowing out-of-sequence ciphering as can occur in network packet processing.)

    The elemental Balanced Block Mixer (BBM) is a two-input, two-output mixing function which is reversible, in which the mixing can be keyed. The implementation consists of two orthogonal Latin squares; the mixing may consist of keyed, explicit, nonlinear tables or simple linear computations, when that is acceptable (for example, see: Fenced DES).

    If we put two small blocks or values of data into a BBM, we get two values out, where each output value provably depends in a balanced way upon both of the input values. Moreover, the elemental oLs transformations can be substituted for the arithmetic computations in an FFT or Walsh transformation. By using FFT-like patterns, we can quickly mix the results from many small substitution tables such that each result provably depends upon every table, which is exactly what we need to build an emulated larger table. And, in marked contrast to conventional FFT or Walsh transformations, BBM computations do not expand the data range (see: ciphertext expansion.)

    The BBM structure thus allows us to build a large block cipher out of small substitution tables or S-boxes which we then mix together. And, independent of whether or not we key-select the mixing, we can key-select the tables from among all possible tables. (Also see the Balanced Block Mixing area of my home page.)
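
    A toy illustration of the elemental mixing, using one pair of orthogonal Latin squares of order 4 (far smaller than anything practical, and unkeyed, but enough to show reversible two-input, two-output mixing in which each output depends on both inputs):

      # Two orthogonal Latin squares of order 4. Mixing maps (a, b) to
      # (L1[a][b], L2[a][b]); orthogonality means every output pair occurs
      # for exactly one input pair, so the mixing is reversible.
      L1 = [[0, 1, 2, 3],
            [1, 0, 3, 2],
            [2, 3, 0, 1],
            [3, 2, 1, 0]]
      L2 = [[0, 1, 2, 3],
            [2, 3, 0, 1],
            [3, 2, 1, 0],
            [1, 0, 3, 2]]

      def bbm_mix(a, b):
          return L1[a][b], L2[a][b]

      def bbm_unmix(x, y):
          # Invert by search; a real implementation would use inverse tables.
          for a in range(4):
              for b in range(4):
                  if (L1[a][b], L2[a][b]) == (x, y):
                      return a, b

      assert all(bbm_unmix(*bbm_mix(a, b)) == (a, b)
                 for a in range(4) for b in range(4))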

  4. Any-Size Message / Fixed-Size Blocks. One problem with conventional block ciphering is that, inevitably, most messages will consist of some number of whole blocks plus some extra which is not a full block. Since block ciphering can only cipher full blocks, there often will be some ciphertext expansion. There are various ways of handling the end part (including stream ciphering the end data), but these techniques often have consequences which can be problematic in particular common situations.

    One way to avoid the last block problem entirely is to dynamically adjust block size so that the last block is never very short and can be enciphered as a full block. The ability to adjust block size is novel, and I personally originated the term "Variable Size Block Cipher" (VSBC) in 1995 (see: VSBC Newsgroup Discussion); the term was then picked up without attribution for more popular academic use.

    VSBC's have the usual conventional block cipher characteristics, but can cipher blocks of virtually arbitrary size (beyond some minimum) to the byte, as dynamically selected at ciphering time. This occurs in the context of a "constant depth" process, which needs no additional computation per byte, even with huge blocks, so there is no motive to cipher data in small blocks.

    Even when plenty of message remains, VSBC blocks can vary in size to help resist cryptanalysis; when random amounts of random data (nulls) are added, simply knowing the original plaintext is not enough to match it against the resulting ciphertext. Thus, VSBC's can avoid known plaintext attacks in ways that go beyond conventional block ciphering strength, so that some of the conventional assumptions of cryptanalysis do not apply. In summary:

    • VSBC's can support huge blocks and so allow a return to ECB-mode (Electronic Code Book) operation, which is frowned-upon with DES.

    • VSBC's can support ciphering already-defined fields in existing systems without the need for added Initial Value (IV) storage. While an IV is not necessarily difficult to make or use, it does expand the amount of storage needed. This makes it difficult to cipher wholly within existing data structures.

    • VSBC's also support the ability to hide both the start and end positions of each block. (We send some random data at the start, and again at the end, and use blocks of keyed pseudorandom size in between.) This ability to hide the extent of a block directly confronts one of the things we normally assume an opponent will have in any attack on a block cipher.

    • VSBC's make it easy to expand the block to contain control fields, for dynamic keying, authentication, or both.

    Also see the Variable Size Block Ciphers section on my home page.
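
    As a rough picture of the last-block idea only (my illustration here, not the published VSBC construction): choose pseudorandom block sizes at or above some minimum, and let the final block absorb whatever remains, so it is never short.

      # Partition a message length into pseudorandom block sizes, each at or
      # above min_block, with the tail absorbed into the last block so the
      # final block is never short. (Illustration only.)
      import random

      def plan_block_sizes(message_len, min_block, max_block, rng):
          sizes, remaining = [], message_len       # assumes message_len >= min_block
          while remaining >= min_block + max_block:
              size = rng.randrange(min_block, max_block + 1)  # keyed size choice
              sizes.append(size)
              remaining -= size
          sizes.append(remaining)                  # last block is >= min_block
          return sizes

      rng = random.Random(7)                       # stand-in for a keyed RNG
      print(plan_block_sizes(1000, 16, 64, rng))   # sizes sum to 1000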


Analysis by Project

Analysis is central to what I do, and has been for all of my professional life.

Analysis is understanding in depth, and is far more than just particular choreographed cryptographic attacks. Current attacks are the result of a quarter-century of research, mostly on conventional block ciphers. For my novel cipher designs, when conventional attacks are not appropriate, we simply do not have a similar depth of understanding, and we cannot present what we do not have.

Before Windows, in the days of DOS (the late 80's and early 90's), I did personally design, implement, and offer several Dynamic Substitution cipher systems for sale, including CLOAK2 and Penknife.

  1. CLOAK2 Stream Cipher

    One can see the design of my 1993-1995 DOS-based CLOAK2 secret key stream cipher in The Cloak2 Design.

    The CLOAK2 cipher uses novel Dynamic Substitution combiner technology which I invented, patented, own and described in Cryptologia in 1991. It also uses the huge Additive RNG's which were part of my article: The Efficient Generation of Cryptographic Confusion Sequences which also appeared in Cryptologia in 1991.

    In my view, the CLOAK2 security arguments are clearer and more believable than those for modern block ciphers. CLOAK2 does not use, and thus does not rely upon, public key technology, and so is far closer to our normal experience with metal keys and locks and doors.

    CLOAK2 has substantial secret key management methodology often overlooked by new cipher designers. See, for example: CLOAK2 Features: Key Management.

    CLOAK2 strength arguments are addressed in: The Cloak2 Design: Strength Arguments. The attacks discussed include: Brute Force on RNG, Brute Force on User Key, Chosen Key, Differential Cryptanalysis, Ciphertext Only, Known Plaintext, Key-Stream Repetition, Birthday Attack on Message Keys, and Chosen Plaintext. This list is more than we usually see in cipher presentations.

    The discussions themselves are simplified by the fact that CLOAK2 is a stream cipher, not a block cipher, and by the use of Dynamic Substitution technology which is widely discussed elsewhere on my pages, and which itself directly opposes the usual stream-cipher attacks.

  2. Balanced Block Mixing Technology

    The development of BBM technology was a classic multi-year R&D project with multiple open reports -- all available on line -- describing incremental results. This work started in 1994.

    BBM technology is not just one idea, and certainly not just yet another "cipher." The original approach used the idea of counted balance in the mixing functions. Subsequent articles give other constructions and analysis from different directions. One of those is an actual proof that even a single input bit-change would produce a bit-change on all output ports. That proof thus guarantees that all next-level S-Boxes would be "active." The proof avoids the need for any such measurement or optimization.

    A particularly innovative analytic approach was my 1997 experimental statistical analysis of Boolean function nonlinearity distributions in Mixing constructions. In 1998 I found how to create keyed orthogonal Latin squares and extended the results to nonlinear mixing. Those experiments support the idea that a nonlinearity distribution similar to that of ideal S-Boxes can be created by mixing boxes of half the bit size (and then half again and so on). In this way we can create huge emulated boxes -- conventional block ciphers -- by mixing small boxes in sufficient levels. In practice, the exact same code can handle legacy 64-bit blocks, modern 256-bit blocks, and 4K byte large blocks.

  3. Dynamic Transposition

    My latest cipher is a description, not an implementation: Dynamic Transposition Revisited Again, an article which is now almost two years old. It is actually a detailed elaboration of the Dynamic Transposition combiner I also discussed in Cryptologia in 1991. About the last fifth of the article discusses "ATTACKS," and it starts off:

    "Complex and hard-to-refute conventional block-cipher attacks, such as Differential and Linear Cryptanalysis, would seem to be completely irrelevant to ciphers of the Dynamic Transposition type. Where there are no S-boxes, there are no linear or differential relations in S-boxes. Where there is no emulated table, there is no linear or differential relationship in that table. And where there is no Feistel structure, there is no ability to peel off Feistel layers. To have a block cipher where we can make these statements is really quite remarkable."

    That analysis then continues for page after page, in something like 19 more paragraphs.

    Personally, I think Dynamic Transposition is one of the most important ciphers around, for several reasons:

    1. First, it is one of the few open examples of a serious transposition cipher.
    2. Next, it is an example of a block cipher which is not a conventional block cipher; that is, it does not attempt to emulate a huge substitution. It does not have and does not need diffusion, or avalanche, or S-boxes, or rounds.
    3. Third, the design demonstrates that it is possible to have a cipher function which is "complete." That is, the cipher has sufficient internal state to select among every possible permutation, instead of just a virtually infinitesimal pre-selected subset, as in conventional block ciphers.
    4. Fourth, the design is a clear attempt to finesse Shannon perfect secrecy into practical implementation. It thus attempts to leverage actual theoretical impossibilities, instead of simply using the ad hoc constructs of current block ciphers.
    5. Last but not least, it demonstrates a ciphering function which is a permutation, but in which the particular permutation which occurs apparently cannot be distinguished by the opponent, even if the opponent has full known plaintext. This is because a vast number of different permutations will each produce the exact same ciphertext. This is a particularly interesting strength issue.


Relationships Discovered

I have also found a couple of interesting combinatoric relationships:

  1. Augmented Repetitions. One relationship is related to the birthday paradox. If we are willing to admit a new definition for what I call "augmented" repetitions, very simple and reversible formulas predict the number of augmented doubles, triples, etc. for random sampling from a given population. (See: Estimating Population from Repetitions in Accumulated Random Samples.) The point of this, of course, is to be able to estimate a huge population size fairly accurately from trials with on the order of square-root-of-the-population samples. Population estimation can help to certify hardware really random number generators (RNG's); the values appear to correspond well to entropy computations, and thus confirm those results. Population estimation is extremely general, and can be applied to bit-sequences as well as value-sequences. Population estimates perhaps also can be used to estimate the number of unique keys in small versions of scalable block ciphers.
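
    As a rough sketch of the underlying birthday reasoning (using the ordinary approximation rather than the exact "augmented repetition" formulas of the article): with n samples from a population of N, about n(n-1)/(2N) repeated pairs are expected, so N can be estimated as n(n-1) divided by twice the number of pairs found.

      # Estimate population size from repeats in a random sample, using the
      # ordinary birthday approximation N ~ n*(n-1) / (2*pairs). The exact
      # "augmented repetition" formulas refine this idea.
      from collections import Counter
      import random

      def estimate_population(samples):
          counts = Counter(samples)
          pairs = sum(c * (c - 1) // 2 for c in counts.values())  # repeated pairs
          n = len(samples)
          return n * (n - 1) / (2 * pairs) if pairs else float("inf")

      population = 100_000
      samples = [random.randrange(population) for _ in range(1_000)]
      print(estimate_population(samples))   # roughly 100,000; varies run to run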

  2. Combinatoric Model for Runs-Up/Runs-Down Tests. I have identified the combinatoric model for runs-up and runs-down RNG tests, which eliminates a statistical bias one gets using the approximation in Knuth II on sequences of byte-range values. See: Chi-Square Bias in Runs-Up/Down RNG Tests.
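
    For orientation, here is just the raw counting step for runs up (the chi-square evaluation of these counts is where the combinatoric model matters; an equal neighbor ends a run here, and the treatment of such ties is part of what makes byte-range sequences tricky):

      # Count the lengths of strictly increasing runs ("runs up") in a
      # sequence; an equal or smaller neighbor ends the current run.
      from collections import Counter

      def runs_up_lengths(seq):
          lengths, run = Counter(), 1
          for prev, cur in zip(seq, seq[1:]):
              if cur > prev:
                  run += 1
              else:
                  lengths[run] += 1
                  run = 1
          lengths[run] += 1
          return lengths

      print(runs_up_lengths([3, 7, 9, 2, 5, 5, 8, 1]))   # Counter({2: 2, 3: 1, 1: 1})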


Supporting Work

  1. Practical Latin Square Combiners
  2. Orthogonal Latin Squares, Nonlinear Balanced Block Mixers, and Mixing Ciphers
  3. Measured Boolean Function Nonlinearity in Mixing Cipher Constructions
  4. Active Boolean Function Nonlinearity Measurement in JavaScript.
  5. Random Noise Sources
  6. Experimental Characterization of Recorded Noise.


General-Use and Reference Pages

  1. Learning About Cryptography
  2. Crypto Glossary
  3. Normal, Chi-Square and Kolmogorov-Smirnov Statistics Functions in JavaScript
  4. Base Conversion, Logs, Powers, Factorials, Permutations and Combinations in JavaScript


Literature Surveys

Basically a reviewed index to the literature on various topics. Now a bit old, but still useful for those who do not know the older literature:

  1. S-Box Design: A Literature Survey
  2. The Story of Combiner Correlation: A Literature Survey (the story of stream ciphers)
  3. Differential Cryptanalysis: A Literature Survey
  4. Linear Cryptanalysis: A Literature Survey
  5. Walsh-Hadamard Transforms: A Literature Survey
  6. Linear Complexity: A Literature Survey
  7. RNG Surveys: A Literature Survey
  8. RNG Implementations: A Literature Survey
  9. Random Electrical Noise: A Literature Survey
  10. Random Number Machines: A Literature Survey
  11. Randomness Tests: A Literature Survey
  12. Latin Squares: A Literature Survey


Peer-Reviewed and Other Journal and Magazine Articles

From my crypto era:

  1. The Great CRC Mystery
  2. Substitution Cipher with Pseudo-Random Shuffling: The Dynamic Substitution Combiner
  3. Transposition Cipher with Pseudo-Random Shuffling: The Dynamic Transposition Combiner
  4. The Politics of Software Patents
  5. The Efficient Generation of Cryptographic Confusion Sequences
  6. Voice and Video Cryptography in a DSP Environment
  7. Estimating Population from Repetitions in Accumulated Random Samples
  8. Cryptography: Is Staying with the Herd Really Best? (Copyright 1999, IEEE.)


Usenet sci.crypt Articles

A few of the better articles from a much larger body of work:

  1. Fenced DES
  2. Chi-Square Bias in Runs-Up/Down RNG Tests
  3. Measured Boolean Function Nonlinearity in Mixing Cipher Constructions
  4. Measuring Boolean Function Nonlinearity by Walsh Transform
  5. Measured Boolean Function Nonlinearity in Variable Size Block Ciphers
  6. Measured Boolean Function Nonlinearity in Feistel Cipher Constructions
  7. Measured Distant Avalanche in Large Block Ciphers
  8. Break This 4-Bit Block Mixing Cipher
  9. Break This 8-Bit Block Mixing Cipher
  10. Practical Latin Square Combiners
  11. Orthogonal Latin Squares, Nonlinear Balanced Block Mixers, and Mixing Ciphers
  12. Random Noise Sources
  13. Experimental Characterization of Recorded Noise
  14. Dynamic Transposition Revisited Again


On Ciphers and Trust

For some time now, I have been involved in trying first to expose, and then to address, some aspects of cryptography which are fundamentally unscientific:

Ciphers appear to differ from all other constructed devices in our society in that we cannot know whether or not they "work." We demand that a cipher not expose our information to an opponent, but we have no way to know whether our cipher actually succeeds. It is unreasonable to hope that our opponents will inform us of their successes. So we have no way to know when we must improve our cipher, or use a completely different one. This fundamental inability to measure "success" has massive implications with respect to cryptographic engineering.

We cannot know our opponents, and so also cannot know their capabilities. Since strength in practice is inherently contextual (the weakness of a cipher being dependent upon the knowledge of particular opponents), the inability to know the context is a fundamental problem in the discussion of practical cryptographic strength. Ideally we would have a strength proof that would apply to even the most capable opponent; but only rarely are current proofs useful in practice.

These issues go to the heart of the scientific investigation of practical cryptography, as opposed to academic cryptography. Although many things about cryptography are measurable and so can be scientific, the overall success of a cipher is not one of those things. (It seems especially ironic that this is the only thing we really need.) Academic work which fails to clearly make the distinction between theory and practice further confuses an already widely-misunderstood situation.

Once we accept that practical cryptography is a uniquely unscientific situation, we can seek to address it. For example, if we assume that some "strong enough" cipher designs exist (although we may not be able to identify them), we may gain an advantage by enciphering multiple times with different ciphers and keys. (For example, see Shannon's proposal: algebra of secrecy systems). In addition, using a "stack" of ciphers means that known plaintext information for any particular cipher simply would not be available to an opponent. Since known-plaintext is the basis for many if not most attacks, preventing known-plaintext exposure for the individual ciphers is a big advantage.
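
As a structural sketch of such a stack (the component ciphers are supplied by the caller; the XOR stand-in below is only a placeholder to exercise the plumbing, not a real cipher):

    # Multi-ciphering sketch: encipher with several different ciphers in
    # sequence, each under its own independent key, and decipher in reverse.
    def stack_encipher(plaintext, ciphers, keys):
        text = plaintext
        for (encipher, _decipher), key in zip(ciphers, keys):
            text = encipher(key, text)
        return text

    def stack_decipher(ciphertext, ciphers, keys):
        text = ciphertext
        for (_encipher, decipher), key in reversed(list(zip(ciphers, keys))):
            text = decipher(key, text)
        return text

    # Toy stand-in (NOT secure): repeating-key XOR, its own inverse.
    def toy_xor(key, data):
        return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

    ciphers = [(toy_xor, toy_xor)] * 3
    keys = [b"one", b"two", b"three"]
    msg = b"meet at noon"
    assert stack_decipher(stack_encipher(msg, ciphers, keys), ciphers, keys) == msg

With distinct real ciphers in the stack, an opponent attacking any one layer sees neither that layer's true plaintext nor its true ciphertext, which is the known-plaintext denial described above.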

We can carry the war to our opponents by using a continuously expanding set of ciphers, thus requiring opponents to identify, obtain, analyze, and break each cipher we might use. Currently, we all tend to use "standard" ciphers, which thus protect a huge amount of potentially profitable data. The common use of a standard cipher sets up a profitable and fixed target for the opponent and thus increases the risk for the user. Unlike most fields concerned with danger, cryptography seems remarkably sanguine about deliberately creating a single point of failure situation, where the failure of just one cipher can have catastrophic results.

I have discussed these issues extensively, including 1996 discussions on email cipher weakness (see: Cryptography is War!), extensive 1999 Usenet discussions (see: Fixing Strength Problems in Cipher Use), and a 1999 article published in IEEE Computer (Cryptography: Is Staying with the Herd Really Best?).



Terry Ritter, his current address, and his top page.