Cipher Review Service

We Look for Flaws

A Ciphers By Ritter Page

Terry Ritter, P.E.
Ritter Software Engineering


The Cipher Review Service: Cipher Review, Expected Results, The Cipher Description, Assumptions Made

Ciphering Background: About Ciphers, Example Attacks, Cipher Keying

Software Ciphering Background: Software Architecture, The Classic File Cipher, Streams versus Blocks, (Block Ciphering, Stream Ciphering, Consequences Imply Features), Block-Oriented File Ciphers, Stream-Oriented File Ciphers

Cipher Review

We offer a preliminary analysis of new ciphers in which we attempt to find and describe any weaknesses present. This is not a certification: not finding a weakness does not mean no weakness is there.

This is intended to be just what can be found in a total of a week or so of work, first getting familiar with the cipher and then analyzing that design. That is not enough time to run experiments and collect statistical results: that could take weeks or months.

We expect to deliver:

To do this, we need:

Expected Results

Based on years of experience in cipher design, we can look for and hope to find some kinds of problems in new secret key cipher designs. Unfortunately, the academic cryptographic literature cannot hope to cover all possible ciphering approaches. And it is unlikely that we could build and demonstrate even a simulated successful attack on a new structure in a few days of work. So the most likely result will be a suspicion of weakness, based on our extrapolation from some known weakness, but without actual proof.

The customer who has put considerable investment into their cipher should consider what it would mean for them to receive our opinion that their cipher has a weakness. Sometimes the weak aspects can be patched, in which case our analysis just becomes part of the development effort, leading to some re-design.

But the customer also should consider the possibility that, in our opinion, their cipher might have fundamental problems, perhaps even to the extent of requiring some completely different cipher design to achieve security. If a particular design is found to be weak, various respected designs are available in the open literature and can be at least considered as replacements. Such a change need not be difficult if the software is modular and allows the ciphering module to be easily replaced.

The Cipher Description

Basically, we need several different descriptions:

  1. Actual tested and running computer source code (the ultimate reference, with C or Delphi Pascal preferred);
  2. A general written description of how the cipher functions and why (this is not what the user does to operate the cipher; it is what the cipher does to enforce security);
  3. A written list of each specific "feature" or aspect of the system which was deliberately added to the core cipher (e.g., key management features, message key randomness, authentication, etc.), where that feature occurs in the source code, and what each was intended to accomplish; plus,
  4. A very specific written argument as to why the core ciphering function itself should be considered secure.

Since the ultimate reference for the cipher is computer source code, massive documentation may not be helpful.

A written description of design sub-goals gives us both a better way to see the design, and a way to check that each sub-structure does what it is intended to do.

Assumptions Made

In cipher analysis it is conventional to assume that the opponents have lots of information. They have:

What the opponents do not have is the key: essentially, they have everything but the key. This is the well-known Kerckhoffs requirement.

We assume the opponents can get the cipher, because if the cipher is widely-used and widely-distributed, there should be various opportunities to acquire the cipher, no matter how closely it is supposed to be held. It is also convenient to have a cipher which is not secret, and thus can be distributed without extraordinary care.

About Ciphers

Modern cryptography is based on the "needle in a haystack" concept: Each possible key creates a different enciphering or ciphertext for the plaintext message. If the only way to find the right key is to try keys one-by-one (a brute force attack), and if we have a multitude of keys, we would seem to be statistically safe (but not absolutely safe) from anyone trying to find the right key.
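The shape of a brute force search can be sketched in C. The toy cipher below (an invented 8-bit-key XOR cipher, for illustration only) has a keyspace of just 256, so the loop finishes instantly; a real keyspace is astronomically larger, which is the whole point of the "haystack."

```c
#include <stdint.h>
#include <stddef.h>

/* Toy cipher for illustration only: XOR every byte with one 8-bit key.
   A real keyspace is far too large for this loop to finish; the point
   is the shape of the search, not its feasibility. */
void toy_encipher(uint8_t key, const uint8_t *in, uint8_t *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] ^ key;
}

/* Brute force: try keys one-by-one until one maps the known plaintext
   to the observed ciphertext.  Returns the key, or -1 if none matches. */
int brute_force(const uint8_t *plain, const uint8_t *cipher, size_t n) {
    for (int k = 0; k < 256; k++) {
        size_t i;
        for (i = 0; i < n && (plain[i] ^ (uint8_t)k) == cipher[i]; i++)
            ;
        if (i == n)
            return k;    /* this key explains every known pair */
    }
    return -1;
}
```

With 256 possible keys the search is trivial; with 2^128 it is statistically hopeless, which is exactly the protection a large keyspace is meant to provide.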

But for each key to count as a different "straw," each key must create a completely distinct ciphertext. For example, if we have a stream cipher in which changing one bit of the key changes only every nth character, closely-related keys are not really "different." A cipher which allowed an opponent to relate deciphering error patterns to key bits would be particularly disturbing.

Unfortunately, ciphers are rarely attacked by brute force (with the recent exception of special parallel hardware to attack DES). Instead, most real attacks work by finding clever ways to discard huge numbers of keys at once, thus eventually producing a final result, or perhaps just a keyspace of searchable size.

Example Attacks

Many stream ciphers are proposed as practical implementations of a one time pad (OTP), which is theoretically secure. However, an implemented OTP is not the same as the theoretical OTP described in cryptography texts, and so does not carry the same security guarantee. In the theoretical OTP, each number in the pad or confusion sequence is random and independent of every other number. But, in practice, the confusion sequence is often produced by some form of random number generator (RNG), and the numbers thus produced are related to the values in the internal state. Many sequences are impossible: an RNG simply cannot produce most sequences longer than its internal state. This is a lever opponents can use in an attack. Many sequence possibilities can be discarded quickly, and those which remain can help reconstruct the internal RNG state, which is usually the keyed unknown.
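The state limitation is easy to see in a small sketch. The generator below is a 32-bit linear congruential generator (the constants are the common Numerical Recipes values; this is an illustration, not a secure design, and not any particular cipher under review):

```c
#include <stdint.h>

/* Toy confusion-sequence generator: a 32-bit linear congruential
   generator (illustration only; not cryptographically secure).
   Every byte it will ever emit is fully determined by the 32-bit
   state, so at most 2^32 distinct keystreams exist, no matter how
   long each keystream runs. */
typedef struct { uint32_t state; } lcg;

uint8_t lcg_next(lcg *g) {
    g->state = g->state * 1664525u + 1013904223u;
    return (uint8_t)(g->state >> 24);    /* emit the top byte */
}
```

An opponent who reconstructs `state` at any point can simply run `lcg_next` forward and reproduce the rest of the confusion sequence exactly; that determinism is the lever described above.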

We normally assume the opponents have substantial amounts of known plaintext, and that is a serious problem for conventional (additive) stream ciphers. When a confusion value is added to a plaintext value, the result is a ciphertext value, but when both the plaintext and ciphertext are known, the confusion value is exposed. That means opponents can start to reconstruct the RNG state.
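For an XOR (additive mod 2) combiner, the exposure is immediate; a minimal sketch:

```c
#include <stdint.h>
#include <stddef.h>

/* With an additive combiner, cipher = plain XOR confusion, so
   known plaintext exposes the confusion sequence directly:
   confusion = plain XOR cipher.  Each recovered confusion value
   is raw material for reconstructing the RNG state behind it. */
void recover_confusion(const uint8_t *plain, const uint8_t *cipher,
                       uint8_t *confusion, size_t n) {
    for (size_t i = 0; i < n; i++)
        confusion[i] = plain[i] ^ cipher[i];
}
```

This is why known plaintext is such a serious condition for conventional stream ciphers: the combiner offers no resistance at all, and the attack moves straight to the RNG.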

Even when the confusion sequence is isolated in some way, the RNG sequence will have internal structure or correlation. From the view of the opponent, this is essentially another equation which the cipher system executes in operation. Since the equations are never violated, premises about RNG state and the isolation subsystem may be made and checked, and massive sets of keys discarded when the premises do not fit. This approach can be vastly more efficient than brute force attacks.

Cipher Keying

Secure ciphers require keys, and those keys must actually be used.

Secure ciphers must have many possible keys, but once beyond the number which could be brute-force searched, having more keys does not help very much. But neither does it cost very much.

To enjoy the full security benefits of a keyspace, it must be possible to select and actually use any of the keys with equal probability. Ideally, keys should be selected "at random" from among all possible keys. And where that is not possible (such as user-chosen keyphrases) the keyspace must be larger, so the opponents cannot capitalize on any non-randomness of selection.

Most cipher systems will have some sort of message key, changed at random for each message. That makes the collection of randomness for that selection an issue as well.

Keys must be changed periodically, because using the same key forever means that any single exposure of that key will expose all messages, past and future. Keys also must be changed when employees leave. So the cipher system must support selecting new keys and removing old ones.

It is unrealistic to expect humans to simply remember multiple long random keys. In practice, users will write down long keys (or key phrases), which means the opponents need only find the written keys to defeat the entire cipher system. A secure cipher system thus must address the issue of secure key storage, whether in encrypted files, or external hardware, or some other solution.

If a simple human mistake can compromise all messages past and future, the cipher probably cannot be considered secure.

Software Architecture

The normal purpose of a cipher review is not to improve the software implementation; indeed, that might change the cipher, thus making the review obsolete. However, if the software construction acts to hide cryptographic weakness, that is itself a cryptographic weakness.

If the code is not written clearly, analysis certainly will be much harder, will take longer, and the outcome will be more problematic. The difficulty of analyzing unclear ciphers is one reason cryptography prefers conceptually simple ciphers. And, from the software viewpoint, it is difficult to control the development of a system where changing one part affects other parts.

Ideally, source code would:

  1. be written in a modular, structured manner, with major sections collected in testable (and tested) stand-alone modules (e.g., subroutines or functions or procedures);
  2. minimize the number of variables, especially globals; and
  3. use variable names which imply what the data in each variable means or how it is used; abstract or arbitrary names are not helpful in understanding a cipher.

The Classic File Cipher

The classic organization of a file cipher might be casually described as:

  1. open files (handle possible errors),
  2. call ciphering (the encrypt / decrypt subroutine), and
  3. close files.
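That organization might be sketched in C as follows (the names here are invented for illustration). Making the ciphering step a replaceable function pointer is what allows a plain copy to stand in for the cipher during file-handling tests:

```c
#include <stdio.h>

/* The ciphering step is a replaceable function: any routine that
   reads from one stream and writes to the other. */
typedef int (*cipher_fn)(FILE *in, FILE *out);

/* Stand-in "cipher": a plain copy.  Substituting this lets the file
   and error handling be tested independent of any real cipher. */
int plain_copy(FILE *in, FILE *out) {
    int ch;
    while ((ch = fgetc(in)) != EOF)
        if (fputc(ch, out) == EOF)
            return -1;
    return 0;
}

int process_file(const char *src, const char *dst, cipher_fn cipher) {
    FILE *in = fopen(src, "rb");            /* 1. open files,          */
    if (!in) return -1;                     /*    handle errors        */
    FILE *out = fopen(dst, "wb");
    if (!out) { fclose(in); return -1; }
    int rc = cipher(in, out);               /* 2. call ciphering       */
    if (fclose(out) == EOF) rc = -1;        /* 3. close files          */
    fclose(in);
    return rc;
}
```

Swapping `plain_copy` for a real encrypt or decrypt routine changes nothing else in the file handling, which is exactly the modularity the review process benefits from.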

A major advantage of this organization is the ability to replace the ciphering with a simple copy; this allows the file and block operations to be tested and developed independent of the cipher. This is important to support handling and testing of file error conditions. It also allows the cipher to be developed and changed independent of the file operations.

Ordinarily there will be both a header and trailer section, perhaps to carry file name or size or message key, and to check against malicious modification.

Streams versus Blocks

In my study of different ciphering technologies, I have found it useful to distinguish between stream and block concepts. Fundamentally, a "stream" is a repetition: one element, then another, then another. In contrast, a "block" is an accumulation of multiple elements treated as one unit. The distinction between "one element" and "more than one element" has various practical consequences.

Block Ciphering

Accumulating multiple bytes into a block produces a huge alphabet which requires a huge transformation. Since only full blocks can be transformed in a block cipher, padding is often added when there is not enough data to fill another block, which means there must be some way to distinguish the original data from padding which must be removed. A huge transformation also supports using the same transformation repeatedly, since only a tiny part of it is exposed with each use.
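One common way to make padding unambiguous to remove is the count-byte scheme (the PKCS#7 style), shown here as an illustration of the idea, not necessarily what any particular design uses:

```c
#include <stdint.h>
#include <stddef.h>

#define BLOCK 8   /* illustrative block size in bytes */

/* Count-byte (PKCS#7 style) padding: fill the final block with N
   copies of the byte value N.  When the data already ends on a block
   boundary, a whole pad block is added, so the last byte always says
   exactly how much padding to strip. */
size_t pad(uint8_t *buf, size_t len) {         /* buf holds len+BLOCK */
    size_t padlen = BLOCK - (len % BLOCK);
    for (size_t i = 0; i < padlen; i++)
        buf[len + i] = (uint8_t)padlen;
    return len + padlen;
}

size_t unpad(const uint8_t *buf, size_t len) { /* returns original length */
    uint8_t padlen = buf[len - 1];
    return len - padlen;     /* real code must validate padlen first */
}
```

Note the cost: even data that fills its last block exactly grows by a full extra block, one of the practical consequences of the block discipline.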

Stream Ciphering

In contrast, if we transform bytes one-by-one, it will not be long before every part of that transformation is exposed. In practice, that means stream ciphers must change their transformation in operation, which leads to the concept of confusion sequences and combiners.

One consequence of using simple additive combiners is that knowing both plaintext and ciphertext values reveals the confusion value. Since knowing both plaintext and ciphertext is exactly the known-plaintext condition we assume anyway, the obvious attack is to attempt to reconstruct the RNG state.

Stream ciphers become insecure when they re-use a previous confusion sequence, and that adds a requirement for a message key or some similar technique.

Consequences Imply Features

In practice, these highly-distinctive consequences allow us to clearly distinguish between stream ciphering and block ciphering, and then identify the unique security features which must be present in each.

Block-Oriented File Ciphers

In a little more detail, a block-oriented file cipher might be casually described as:

  1. open files (and handle possible errors),
  2. write header,
  3. While not eof(input file) do:
    1. read block,
    2. if not full, pad to end,
    3. cipher block,
    4. write block,
  4. write trailer, and
  5. close files.

Part of the design problem with block ciphers is that only complete blocks can be ciphered. So if a file ends with less than a complete block, that block seemingly must be padded to a full block, and there must be some way to remove padding when deciphering. Often this involves placing correct length information in the header, or adding information to the last block, which then may overflow and create a new "last block."

Various clever schemes may avoid this approach, for example:

Stream-Oriented File Ciphers

In contrast, a stream-oriented file cipher might be casually described as:

  1. open files (and handle possible errors),
  2. write header,
  3. While not eof(input file) do:
    1. read up to a full buffer,
    2. cipher buffer (however much is there), and
    3. write buffer
  4. write trailer, and
  5. close files.

Since a stream cipher does not need to collect a full block to function, no padding is needed, so no padding need be removed when deciphering. In general the data size remains unchanged, and so the length may not need to be explicitly specified in the header. Even if some sort of padding is added to hide message length, this value generally can be computed on both ends and need not be sent.
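The ciphering loop itself might be sketched in C as below. The keystream generator here is a toy (a linear congruential generator, illustrative and not secure), and the names are invented; because the XOR combiner is its own inverse, running the same routine with the same key deciphers:

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Toy additive stream cipher step: XOR each byte with the next byte
   of an LCG keystream.  Illustration only; not a secure generator. */
void stream_cipher(uint8_t *buf, size_t n, uint32_t *state) {
    for (size_t i = 0; i < n; i++) {
        *state = *state * 1664525u + 1013904223u;
        buf[i] ^= (uint8_t)(*state >> 24);
    }
}

/* The stream-oriented loop: read up to a full buffer, cipher however
   much arrived, write it out.  Partial buffers need no padding. */
int cipher_stream_file(FILE *in, FILE *out, uint32_t key) {
    uint8_t buf[4096];
    uint32_t state = key;
    size_t got;
    while ((got = fread(buf, 1, sizeof buf, in)) > 0) {
        stream_cipher(buf, got, &state);
        if (fwrite(buf, 1, got, out) != got)
            return -1;
    }
    return ferror(in) ? -1 : 0;
}
```

Seeding `state` directly from the key is exactly the sequence re-use hazard noted below: a message key or similar feature must vary the starting state per message.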

But stream ciphers absolutely do require some form of message key or other similar feature to prevent re-use of the ciphering sequence.


Last updated: 2002 Jun 02