PUBLISHED: Ritter, T. 1990. Substitution Cipher with Pseudo-Random Shuffling: The Dynamic Substitution Combiner.

Cryptologia. 14(4): 289-303.


ADDRESS: Blue Jean Software, 2609 Choctaw Trail, Austin, Texas 78745.

ABSTRACT: A cipher mechanism or process is presented which can be viewed as a modified substitution cipher. A translation table is used to replace plaintext symbols with ciphertext symbols; the modification consists of changing the contents of the translation table after each substitution. The dynamic translation table acts to confuse symbol frequency statistics and so frustrate the usual cryptanalytic attacks.

The same mechanism can also be viewed as a cryptographic combiner, and can replace the exclusive-OR combining function used in Vernam stream ciphers. The dynamic translation table acts as a one-way function to protect the pseudo-random sequence, and consequently helps to prevent cryptanalysis.

KEYWORD LIST: cryptography, computer cryptography, cipher, stream cipher, product cipher, substitution, dynamic substitution, combiner, cryptographic combiner, mixer, shuffle

This paper discloses an apparently new cryptographic
mechanism which can be described as dynamic substitution.
Although structurally similar to simple substitution, dynamic
substitution has a second data input which acts to re-arrange the
contents of the substitution table. The mechanism *combines*
two data sources into a complex result; under appropriate
conditions, a related inverse mechanism can then *extract*
one of the data sources from the result.

A dynamic substitution combiner can directly replace the exclusive-OR combiner used in Vernam stream ciphers. The various techniques used in Vernam ciphers can also be applied to dynamic substitution; any cryptographic advantage is thus due to the additional strength of the new combiner.

This paper develops a particular form of dynamic
*substitution*; a related paper develops a form of dynamic
*transposition*. The substitution version is generally easier
to understand, more efficient, and more easily applied as a stream
cipher. However, the slower transposition version may have
greater cryptographic strength.

For a general background in cryptology see Kahn
[15], and
for details on the classical systems and their analysis see Gaines
[12].
More modern statistical approaches are given by Sinkov
[45]
and Deavours
[6].
A good partly-technical anthology is Deavours *et al.*
[5].
There is a nice but older survey by Mellen
[22],
a major effort by Diffie and Hellman
[8],
and a newer one by Massey
[20]
(also see the other papers in that issue). A rigorous but not
always applicable theoretical base starts with Shannon
[41]
and is extended by Hellman
[14].
Beker and Piper
[1]
is a good technical reference. Denning
[7]
and Pfleeger
[32] present
cryptography in the broader context of computer security issues.

The Vernam cipher [47], although originally implemented with electromechanical relays, may well mark the start of modern cryptography. A Vernam cipher directly combines a stream of plaintext data with a pseudo-random confusion stream using what we now know as mod 2 addition. This same combining function is also known as the Boolean logic exclusive-OR, and is widely available in digital integrated circuits and as an instruction on most computers and microcomputers.
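In modern terms, the Vernam combining step can be sketched in a few lines of Python (an illustrative sketch; the function name and the sample key bytes are assumptions for the example, not taken from any original system):

```python
def vernam_combine(data: bytes, confusion: bytes) -> bytes:
    """Combine each data element with one confusion element by
    exclusive-OR (bitwise mod 2 addition)."""
    return bytes(d ^ c for d, c in zip(data, confusion))

# Exclusive-OR is its own inverse, so the same combiner deciphers.
key = b"\x13\x07\x2a\x01\x55\x0e"          # illustrative confusion bytes
ct = vernam_combine(b"ATTACK", key)
pt = vernam_combine(ct, key)
assert pt == b"ATTACK"
```

Note that deciphering reuses the combiner unchanged; this self-inverse property is particular to exclusive-OR and does not carry over to the combiner developed below.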

Since each ciphertext element from a Vernam combiner is
the (mod 2) sum of two unknown values, the plaintext data would
seem to be well hidden. Such appearances are deceptive, however,
and a Vernam cipher is susceptible to several cryptanalytic
attacks, including *known-plaintext* and *probable*
words
[37];
if some part of the plaintext is known (or even guessed), the
cryptanalyst can directly obtain some of the confusion stream
[24, 25].
And if the confusion sequence can be penetrated and
reproduced, the cipher is broken
[34, 43,
21].
Similarly, if the
same confusion sequence is ever re-used, and the overlap
identified, it becomes simple to break that section of the cipher
[37].

For these reasons, the modern Vernam cipher generally relies on an analysis-resistant pseudo-random sequence generator for security [e.g. 13]. But the design of such a generator is non-obvious [3], and is even more difficult than it might seem, since the cryptanalyst might well possess analytical knowledge and capabilities superior to those of the designer of the generator. Future analysts may be even more capable. Accordingly, constructs which may seem complex to the designer [13, 33, 4] may well yield, eventually, to the superior knowledge and computational resources of a cryptanalyst [43, 38, 31].

An alternate approach to the design of a secure stream
cipher is to seek combining functions which can resist attack; such
functions would act to hide the pseudo-random sequence from
analysis
[42, 43,
44].
Such *cryptographic combining functions*
could be used to replace the Vernam exclusive-OR combiner (if
they have an inverse)
[40], or they might just combine pseudo-random
sequences to make a more complex sequence
[28] which
is harder to analyze.

A cryptographic combiner is not intended to be an ultimate cipher; indeed, it is not at all clear that such a thing is possible. An improved combiner is intended to force cryptanalysis to be more difficult, time consuming, and expensive than would be the case with a simple combiner. The cryptographic worth of a combiner can be described as the improvement in security when compared to the now-standard Vernam exclusive-OR combiner.

The mechanism of this paper is a new combining function which extends the weak classical concept of simple substitution into a stronger form suitable for computer cryptography.

Classical simple substitution replaces each letter of the alphabet with one fixed substitute [12, 45]. Simple substitution is normally considered to be a very weak cryptographic operation [22], mainly because substitution in no way obscures the letter-frequency distribution of the source text. That is, for a particular language and topic, a statistical analysis of the enciphered data will tend to match the general statistics for that language.

The fundamental operation of substitution is pervasive in
cryptography
[10].
But in all known previous systems the
substitution is *static*. That is, each substitution table
is fully defined (either by the designer or the key) before
starting encryption, and the contents of each substitution table
remain unchanged for the duration of that particular cipher or
message.

This paper is concerned with the cryptographic
strengthening of the fundamental substitution operation through
*dynamic* changes to a substitution table. The substitution
table can be changed under the control of a separate data stream,
usually originating from a pseudo-random sequence generator.
The combination of substitution and a strategy for changing the
contents of the substitution tables yields a cryptographic
combining function; such a function may be used to combine
plaintext data with a pseudo-random sequence to generate
enciphered data. A particular strategy is presented which is
applicable to computer hardware or software implementations. In
many cases, such a substitution is desired to be invertible, and the
presented scheme supports the efficient maintenance of an
inverse.

In mathematical set-theoretic terms, a substitution is a
relation between two sets, which is illustrated here using sets
*X* and *Y*.
We define a mapping function
*f*
from set
*X*
into set
*Y*,
and for each element
*x*
in set
*X*,
we have
*y = f(x)*.
In many cases, function
*f*
is arranged to be one-to-one (such that for each
*x*
in
*X*,
each corresponding *y* is unique), so that there will be an
inverse function
*f^-1*.
If there is an inverse, for each element
*x*
in set
*X*
we have
*x = f^-1(f(x))*.
Thus, we can encipher with
*y = f(x)*
and decipher with
*x = f^-1(y)*.
Often, the domain (set
*X*)
is identically equal to the range (the subset of
*Y*
covered by
*y = f(x)*),
but this is not necessary. The symbols
*xi*
and
*yi*
will be used as a convenient notation to represent some particular
data element before and after enciphering.

The simple substitution function is a monoalphabetic
substitution cipher, and is easily penetrated since a mapping
preserves the frequency distributions of the source material (the
message). That is, for each occurrence of element
*xi*
from set
*X*,
the corresponding element
*yi = f(xi)*
appears in the ciphertext.
Thus, whatever proportions exist among the elements
*x*
in the message, those same proportions will be retained among the
elements
*y*
after substitution. It might be said that the frequency
distribution characteristics are *invariant through* or
*preserved by* the static substitution function.
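This invariance can be demonstrated with a small Python sketch (the fixed table here is an arbitrary example, a simple shifted alphabet; any one-to-one static table behaves the same way):

```python
from collections import Counter

# An example fixed substitution table: each capital letter maps to the
# letter three positions later (an arbitrary one-to-one table).
table = {c: chr((ord(c) - 65 + 3) % 26 + 65) for c in map(chr, range(65, 91))}

msg = "ATTACKATDAWN"
ct = "".join(table[c] for c in msg)

# The ciphertext counts are the plaintext counts, merely re-labeled:
# the multiset of symbol frequencies passes through unchanged.
assert sorted(Counter(msg).values()) == sorted(Counter(ct).values())
```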

But suppose the mapping function, now
*f1*, is permuted to
a new, slightly changed mapping function
*f2*
after the first element
of the message (which we take to be
*xi*).
In particular, suppose that the mapping which takes
*xi*
to
*yi*
is transposed or exchanged with some mapping at random, say
*j*.
After the exchange, the map takes
*xi*
to
*yj*
and also
*xj*
to
*yi*.
Note that simply permuting a map preserves its one-to-one
characteristics (assuming it originally had any), so that a
different but unambiguous inverse will still exist. In this way,
the just-used map element can be changed to any of the map values,
and this can be done after each element of the message is enciphered.
The act of exchanging an element with another selected at random is
the basis for the well-known shuffle algorithm
[9, 17, p. 139];
this is one of many possible strategies for changing the contents
of the substitution table. The general principle of a changing
substitution table can be termed *dynamic substitution*.
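One possible realization of this strategy can be sketched in Python (the identity starting table and the seeded generator are assumptions for the example; a real system would key the table and keep the confusion sequence hidden):

```python
import random

def encipher(table, x, rng):
    """Substitute x through the table, then exchange the just-used
    entry with an entry selected at (pseudo-)random: one step of the
    shuffle algorithm applied after each substitution."""
    y = table[x]
    j = rng.randrange(len(table))        # stand-in for the confusion stream
    table[x], table[j] = table[j], table[x]
    return y

rng = random.Random(1)                   # illustrative seed
table = list(range(256))                 # identity table; keyed in practice
ct = [encipher(table, b, rng) for b in b"BOOKKEEPER"]

# Permuting the map preserves its one-to-one character.
assert sorted(table) == list(range(256))
```

Because the just-used entry is exchanged after every substitution, repeated plaintext values generally produce differing ciphertext values, yet the table remains a permutation throughout.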

It seems clear that repeated application of pseudo-random
map exchange operations destroys the fixed correspondence
between
*xi*
and
*yi*
so that the frequency distribution invariance no longer holds.
That is, since the mapping from
*xi*
to
*yi*
is shuffled whenever
*xi*
is enciphered, it is unlikely that the next occurrence of
*xi*
will also map to
*yi*.
This change occurs for all elements
*x*
in set
*X*
whenever an element is enciphered. Thus, this scheme would seem to
obscure the frequency distribution statistics of the message. And,
since the pseudo-random sequence acts only indirectly (through the
exchange operation), there is some hope that it will remain hidden
in any case.

A dynamic substitution function might be used to encipher
data, or it might be used to further complicate a random-number
sequence. But if it is to be used to encipher data, it will be
necessary to create and maintain an inverse map to support
deciphering. Assuming that there are no repeated values in
*f*,
the initial inverse
*f^-1*
of
*f*
is easily obtained by stepping through
*X*:
For each value
*x*
in set
*X*,
the original map
*f*
generates a corresponding
*y*
value; that
*y*
value is the position in the inverse map
*f^-1*
where the original
*x*
value is placed.
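This construction can be written out directly (a Python illustration; the example permutation is arbitrary):

```python
def build_inverse(f):
    """Step through X: for each x, the value y = f[x] names the position
    in the inverse map at which x itself is placed."""
    finv = [0] * len(f)
    for x in range(len(f)):
        finv[f[x]] = x
    return finv

f = [2, 0, 3, 1]                 # a small example permutation
finv = build_inverse(f)
assert all(finv[f[x]] == x for x in range(len(f)))
```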

Before the original map
*f*
is permuted, it takes
*xi*
to
*yi*
and also
*xj*
to
*yj*;
the inverse map
*f^-1*
will function in the reverse direction to take
*yi*
to
*xi*
and also
*yj*
to
*xj*.
After the exchange operation, the original map
*f*
takes
*xi*
to
*yj*
and also
*xj*
to
*yi*;
accordingly the inverse map
*f^-1*
must take
*yi*
to
*xj*
and also
*yj*
to
*xi*.
Thus, it is also necessary to permute the inverse map
*f^-1*
appropriately after each mapping operation, and this is done
simply by exchanging elements
*yi*
and
*yj*
in the inverse map
*f^-1*.

In practice, permuting the inverse map
*f^-1*
is slightly more complicated than permuting
*f*
alone, because the pseudo-random value (which we call
*j*
and which selected
*xj*
in
*f*)
must be mapped through
*f*
to find
*yj*,
which is one of the elements to exchange in
*f^-1*.
This scheme requires that both
*f*
and
*f^-1*
maps be present during deciphering, even if only the
*f^-1*
map is actually used to decipher data. But this scheme also keeps
the inverse map up to date without re-generating the full inverse
after every substitution operation. The other element of the
"exchange pair" is
*yi*,
which is just the enciphered data element, and is thus already known.
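The paired maintenance of *f* and *f^-1* described above can be sketched as follows (a Python illustration; the example key schedule, seed, and helper names are assumptions for the example, not part of the original design):

```python
import random

def encipher_step(f, finv, x, j):
    """Encipher x as yi = f[x]; exchange the x and j entries of f;
    then exchange positions yi and yj in the inverse map."""
    yi, yj = f[x], f[j]          # j must be mapped through f to find yj
    f[x], f[j] = yj, yi
    finv[yi], finv[yj] = finv[yj], finv[yi]
    return yi

def decipher_step(f, finv, y, j):
    """Recover x = finv[y], then apply the identical exchanges so that
    both ends keep the same table state."""
    x = finv[y]
    yj = f[j]
    f[x], f[j] = yj, f[x]
    finv[y], finv[yj] = finv[yj], finv[y]
    return x

def make_maps():
    f = [(7 * i + 3) % 26 for i in range(26)]   # an example permutation
    finv = [0] * 26
    for x in range(26):
        finv[f[x]] = x
    return f, finv

f, finv = make_maps()
enc = random.Random(42)                          # shared confusion sequence
msg = [ord(c) - 65 for c in "DYNAMICSUBSTITUTION"]
ct = [encipher_step(f, finv, x, enc.randrange(26)) for x in msg]

f2, finv2 = make_maps()                          # receiver starts identically
dec = random.Random(42)
pt = [decipher_step(f2, finv2, y, dec.randrange(26)) for y in ct]
assert pt == msg
```

The round trip succeeds only because both ends begin with the same table and consume the same pseudo-random sequence, which illustrates why errors in the ciphertext stream prevent re-synchronization.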

In cryptologic terms, dynamic substitution is a sort of extended substitution cipher. A substitution table is used to translate each data value into an enciphered value. But after each substitution, the table is re-ordered. At a minimum, it makes sense to exchange the just-used substitution value with some entry in the table selected at random. This generally changes the just-used substitution value to help prevent analysis, and yet retains the existence of an inverse, so that the cipher can be deciphered.

To a potential cryptanalyst, the substitution table starts out completely unknown. When a data value is translated through the table, that particular substitution is at least potentially known. But that substitution value is immediately changed, so the substitution table is again completely unknown. In this way, external cryptanalysis of the arrangement of the substitution table is greatly complicated.

Whenever a particular substitution is used, it is changed,
and the more often it is used, the more often it is changed; this
seems to be the *optimal* characteristic for hiding usage
frequency statistics, and this is automatic and inherent in the
combiner. Moreover, the pseudo-random number sequence does not
directly select any data for output, it only changes the table
"behind the scenes"; this gives us some grounds for asserting
that the pseudo-random sequence remains hidden, to some unknown
extent.

A substitution which has an inverse is normally thought to
be weaker than one which does not, so it is worth noting that
dynamic substitution does not have a single fixed inverse; it has a
*changing* inverse, and that affects the analysis.

Dynamic substitution is one way to build a cryptographic combiner; it is not a complete cipher. However, when combined with a strong cryptographic random number generator, message keys, and other extensions, dynamic substitution can be a major part of a strong cryptographic system. It might be used simply to replace a Vernam combiner in existing equipment. It can also be used to complicate a random-number stream, or as one module in a complex multi-module ciphering system.

Dynamic substitution may be considered to be a *black box*,
with two input ports ("Data In" and "Random In"), and one output
port ("Combiner Out"). In the simple version, each data path has
similar width; evidently the mechanism inside the box in some
way *combines* the two input streams to produce the output
stream. It seems reasonable to analyze the output statistically, for
various input streams.

Figure 1 charts the frequency distributions measured on
two ports of a Simple Substitution Combiner. The "Data In" data
was a sizable text file (a book chapter) with all spaces and
punctuation deleted, and lower case converted to upper, leaving
a 26-element alphabet of 18,135 capital letters. No random
stream is used in simple substitution. Note that the "Data In" and
"Combiner Out" distributions are the same, just re-arranged.

Figure 2 charts the frequency distributions measured for
a Dynamic Substitution Combiner. The "Data In" stream was the
same as in Figure 1. The "Random In" stream is random, with a
26-element capital letter alphabet. Note that the "Data In" and
"Combiner Out" distributions are very different.

Figure 3 charts the frequency distributions for a Vernam
Combiner. "Data In" was the same as before. Because the
Vernam system operates on a binary representation, a Vernam
"Combiner Out" stream is inherently a binary-power alphabet (in
this case 32 elements); the "Random In" stream was given the
same range.

Two additional experiments were conducted on Dynamic Substitution, using a constant value ('E') into one port, and a pseudo-random stream into the other port; the results were a little surprising, although their graphs provided little insight: In both cases the "Combiner Out" stream resembled a random data stream. This was unexpected because the design was intended to randomize a stream of predictable data, not necessarily to handle a non-random "random" input, yet it handles that case as well. Apparently this happens because the exchange process (functioning after each substitution) takes two streams; if either one has random characteristics, the substitution table will be randomized, and this will randomize the output stream. It may be that this sort of statistical independence of the input ports is necessary for a strong cryptographic combiner. Similar results are obtained using what would normally be considered radically different inputs to the mechanism.

The last experiment measured the randomization effect of a standard Vernam exclusive-OR combiner on the same constant input.

Results from these experiments were collected using common cryptographic statistical measures.

The black box test results can be summarized in the form
of cryptographic *delta IC*
[23],
and *Z-coefficient*
[6, 18]
computations. In both cases, a count is made of the number of
occurrences of each value in the stream being analyzed. Then a
"Phi" value is computed, which is conceptually the sum of the
squares of each element count. (In practice, instead of the square
of the element counts n * n, the value n * (n - 1) is used
[1, 45].)

The index of coincidence (IC) is conceptually *the sum of
the squares* (of the element counts) *over the square of the
sum*
(the total element count). Because the IC depends on the size of
the cryptographic alphabet, it is useful to normalize it to a delta
IC by multiplying by the size of the alphabet. A delta IC value of
1.0 indicates a random distribution.
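The Phi and delta IC computations just described can be sketched in Python (function and parameter names are illustrative):

```python
from collections import Counter

def delta_ic(stream, alphabet_size):
    """Delta IC: Phi over N*(N-1), normalized by the alphabet size,
    where Phi is the sum over symbols of n*(n-1) (used in place of
    the square of each element count)."""
    counts = Counter(stream)
    total = len(stream)
    phi = sum(n * (n - 1) for n in counts.values())
    return alphabet_size * phi / (total * (total - 1))

# One value repeated yields a delta IC equal to the alphabet size;
# a perfectly flat distribution approaches 1.0 for long streams.
assert delta_ic("A" * 100, 26) == 26.0
```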

The Phi value has been computed, and an "expected" value
of Phi (for a random data stream) can be derived. Similarly, a
statistical variance (for a *monographic* cipher) can also be
computed. The Z-coefficient is just the difference between the
actual and expected Phi values, normalized by dividing by the
variance. Thus, the Z-coefficient represents the extent (in
statistical standard deviations) to which a particular sample
differs from the expected value of a random sample.
Consequently, a value of 0 would be expected, and a value
between -2 and 2 would be most probable for a random sequence.
The probability that a truly random sequence would produce any
other Z-coefficient value can be interpreted as the area under a
*bell shaped* standard normal probability curve.
Table 1 summarizes the statistical results.

Table 1. SUBSTITUTION DISTRIBUTION STATISTICS
         (delta IC / Z-coefficient)

                          Data In         Random In       Combiner Out

The Data are 26-Letter Text
Static Substitution       1.66 / 1684     ---- / ----     1.66 / 1684
Dynamic Substitution      1.66 / 1684     1.00 / -0.9     1.00 / 1.1
Vernam (exclusive-OR)     2.04 / 2393     1.00 / -0.4     1.00 / -1.0

The Data are One Value Repeated
Dynamic Substitution      26.0 / 35347    1.00 / -0.3     1.00 / -0.2
Same, Inputs Reversed     1.00 / -0.2     26.0 / 35347    1.00 / -0.1
Vernam (exclusive-OR)     32.0 / 39360    1.00 / 0.0      1.00 / 0.0

Apparently, dynamic substitution does *randomize* a
statistically non-random input, with results similar to the standard
Vernam system.

A simple dynamic substitution combiner might conceivably allow
some sort of insight into the pseudo-random sequence with a
known-plaintext attack. For example, after
*xi*
is enciphered to
*yi*,
an exchange occurs as a result of a particular pseudo-random value
(*j*).
If another
*xi*
occurs immediately, the resulting
*yj*
mapped value might somehow provide some insight into the value
(*j*)
taken from the pseudo-random sequence. Note that the random sequence
value
*j*
is not directly available in this way, but only the mapped value.
And if
*xi*
does not recur immediately, the mapping it eventually reveals may be
the result of multiple exchanges, and so would be much less useful.

Another approach would be to concentrate on the next occurrence
of
*yi*
in the ciphertext, which, if it recurred immediately, might
somehow provide a similar insight to the preceding swap value
(*j*).
Since both a subsequent
*xi*
and a subsequent
*yi*
might somehow provide insight to the same swap value, it may be
possible to achieve a likely confirmation. Again, this would be the
*mapped* value
(*xj*),
and not directly the desired pseudo-random value
(*j*)
itself. It is not clear how this information could be used to
penetrate the cipher.

Penetration would seem to be easier if the known-plaintext
*x*
values were close together, because this would make a re-mapped
mapping less likely. Thus, there is some reason to suspect that an
uneven plaintext distribution (which is normal) would have some
effect in making the system somewhat more vulnerable. But a good
value distribution could be enforced by first randomizing the data
with a Vernam (exclusive-OR) combiner (and a random-number stream)
prior to dynamic substitution, or by using an additional level of
dynamic substitution.

In any case, it is the responsibility of the rest of the system
to assure that the same pseudo-random sequence will "never" be
re-used. This is a normal requirement for Vernam stream ciphers,
and is often handled with a *message key*
[1].

An obvious countermeasure to known-plaintext and chosen-plaintext attacks would be to use multiple different dynamic substitution maps (a polyalphabetic dynamic substitution cipher) [e.g. 1], and to select between them using a hidden pseudo-random sequence. Since consecutive xi elements would generally be enciphered in different maps, the use of repeated xi elements leads to the probability that some maps will be entered multiple times (on the same mapping) before all of the maps have been entered once, and this seems to substantially complicate cryptanalysis.
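The polyalphabetic extension can be sketched in Python (identity tables and a seeded generator stand in, as illustrative assumptions, for keyed tables and a hidden confusion sequence):

```python
import random

def poly_encipher(tables, x, rng):
    """Polyalphabetic dynamic substitution: a pseudo-random value first
    selects one of several tables; the substitution and the subsequent
    exchange then proceed in that table alone."""
    t = tables[rng.randrange(len(tables))]
    y = t[x]
    j = rng.randrange(len(t))
    t[x], t[j] = t[j], t[x]
    return y

rng = random.Random(7)                        # illustrative seed
tables = [list(range(26)) for _ in range(4)]  # identity tables for brevity
ct = [poly_encipher(tables, ord(c) - 65, rng) for c in "SECRETMESSAGE"]

# Each table remains a permutation, so an inverse still exists per table.
assert all(sorted(t) == list(range(26)) for t in tables)
```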

Moreover, the normal attacks on a polyalphabetic cipher are also statistical, and these seem likely to be complicated by the anti-statistical properties of the underlying permuting maps. Because the multiple maps are to be used at pseudo-random instead of in rotation, it would seem to be difficult for an analyst to isolate any particular map on which to work.

In broad terms, a Vernam stream cipher consists of an exclusive-OR combiner and some sort of confusion (random number) generator. However, real systems must be considerably more complex than this, since any cipher is only as strong as its weakest link.

Even a strong combiner and confusion generator can be made
irrelevant if the system supports only a small number of keys,
since simply trying all the keys (a *key search* attack) would
be sufficient to penetrate such a cipher. Any real system based
on keys must support enough keys to prevent this attack, but this
is really more of an issue for the confusion generator than for
the combiner.

There is a large body of literature on various sorts of pseudo-random number generators and some amount of work on their abilities to resist cryptanalysis. However, this literature is equally applicable to systems containing either dynamic substitution or Vernam exclusive-OR combiners, and so is essentially irrelevant to comparisons between the two.

Dynamic substitution combiners inherently contain internal
*state* data (in the finite automata sense
[e.g. 8, p. 415]),
while the exclusive-OR does not. This internal state data must
be initialized before ciphering, and is continuously re-ordered as
a consequence of both incoming data streams; thus, the internal
state is a function of initialization and all subsequent data and
confusion values. Consequently, if data errors occur during
communication of the ciphertext, the deciphering process will
deviate from the expected sequence, and will not re-synchronize.
This problem is mitigated by the widespread use of
error-detection codes (e.g., CRC) and error-correcting block
re-transmissions now common in data communications.
Although there can be no such thing as an error rate of absolutely
*zero*, computer data can be stored or communicated with
*arbitrarily*
low error rates based upon the relative economic impacts of errors
versus error-protection. In practical terms, even very simple
"binary protocols" (e.g., XMODEM) only rarely allow any errors to
survive uncorrected, so error sensitivity is not nearly the
problem it was in the days of manual telegraph operators or
mechanical typing-telegraph machines.

The changing internal state of dynamic substitution is the source of its strength, and that state is affected by both input sequences. Thus, it should come as no surprise that the new combiner is not as tolerant of data errors as its weaker cousin, the exclusive-OR.

Any substitution is a mapping (from input to output), and a mapping is the most general function possible; thus, it may be used to replace a network of simple logic operations. In particular, dynamic substitution might be designed to replace substitution-permutation functions [10, 11, 16] (although large S-P functions might imply an exceedingly large map). In some cases, dynamic permutations of existing S-P maps may be used to strengthen those designs.

One use for dynamic substitution would be as a combining or mixing function on data streams. For example, dynamic substitution might easily replace the exclusive-OR logic function in a Vernam cipher. Or dynamic substitution could be used to combine two pseudo-random sequences [27, 28], in which case an inverse would be unnecessary.

Alternately, by making the domain and range of the substitution maps (f and f^-1) the same, dynamic substitution can be used in a product cipher [41], as one element in a chain or network of ciphering functions or modules.

The simple substitution cipher--normally one of the weakest in cryptography--becomes substantially stronger when the substitution table is changed during operation. By re-mapping at least the just-used symbol, the cipher acts to hide letter-frequency statistics, which have previously been the main avenue of entry into such a cipher. The polyalphabetic version seems even stronger.

Many thanks to the referees for pointing out ambiguous discussions and important unmentioned issues. This work has improved because of their efforts.

1. Beker, H. and F. Piper. 1982. *Cipher Systems.*
New York: John Wiley & Sons.

2. Beker, H. and F. Piper. 1985. *Secure Speech
Communications.* London/Orlando: Academic Press.

3. Blum, L., M. Blum, and M. Shub. 1983. Comparison of Two
Pseudo-Random Number Generators. *Advances in
Cryptology--Proceedings of Crypto 82.* New York: Plenum
Press. 61-78.

4. Ciarcia, S. 1986. Build a Hardware Data Encryptor.
*Byte.* September. 97-111.

5. Deavours, C., D. Kahn, L. Kruh, G. Mellen, and B. Winkle.
1987. *Cryptology Yesterday, Today, and Tomorrow.*
Norwood, Mass: Artech House.

6. Deavours, C. 1987. *Cryptanalytic Programs for the
IBM PC.* Laguna Hills, CA: Aegean Park Press.

7. Denning, D. 1982. *Cryptography and Data Security.*
Reading, Mass: Addison-Wesley.

8. Diffie, W. and M. Hellman. 1979. Privacy and Authentication:
An Introduction to Cryptography. *Proceedings of the IEEE.*
67: 397-427.

9. Durstenfeld, R. 1964. Algorithm 235, Random Permutation,
Procedure SHUFFLE. *Communications of the ACM.* 7:420

10. Feistel, H. 1973. Cryptography and Computer Privacy.
*Scientific American.* 228: 15-23.

11. Feistel, H., W. Notz, and J. L. Smith. 1975. Some
Cryptographic Techniques for Machine-to-Machine Data
Communications. *Proceedings of the IEEE.* 63: 1545-1554.

12. Gaines, H. 1956 (original publication 1939).
*Cryptanalysis.* New York: Dover Publications.

13. Geffe, P. 1973. How to protect data with ciphers that are
really hard to break. *Electronics.* January 4. 99-101.

14. Hellman, M. 1977. An Extension of the Shannon Theory
Approach to Cryptography. *IEEE Transactions on Information
Theory.* IT23: 289-294.

15. Kahn, D. 1967. *The Codebreakers.* New York:
Macmillan.

16. Kam, J. and G. Davida. 1979. Structured Design of
Substitution-Permutation Encryption Networks. *IEEE Transactions
on Computers.* 28: 747-753.

17. Knuth, D. 1981. *The Art of Computer Programming,*
Vol. 2 *Seminumerical Algorithms.* 2nd ed. Reading, Mass:
Addison-Wesley.

18. Kullback, S. 1976 (original publication 1938). *Statistical
Methods in Cryptanalysis.* Laguna Hills, CA: Aegean Park Press.

19. MacLaren, M. D. and G. Marsaglia. 1965. Uniform Random
Number Generators. *Journal of the ACM.* 12: 83-89.

20. Massey, J. 1988. An Introduction to Contemporary Cryptology.
*Proceedings of the IEEE.* 76: 533-549.

21. Meier, W. and O. Staffelbach. 1988. Fast Correlation Attacks
on Stream Ciphers (extended abstract). *Advances in
Cryptology--Eurocrypt 88.* New York: Springer-Verlag. 301-314.

22. Mellen, G. 1973. Cryptology, Computers, and Common Sense.
*National Computer Conference, 1973, Proceedings.* 569-579.

23. Mellen, G. 1983. Cryptanalysts' Corner. *Cryptologia.*
7: 371.

24. Meyer, C. and W. Tuchman. 1972. Pseudorandom codes can be
cracked. *Electronic Design.* 23: 74-76.

25. Meyer, C. 1973. Design considerations for cryptography.
*National Computer Conference, 1973, Proceedings.* 603-606.

26. Meyer, C. and S. Matyas. 1982. *Cryptography: A New
Dimension in Computer Data Security.* New York: John
Wiley & Sons.

27. Michener, J. 1985. The "Generalized Rotor" Cryptographic
Operator and Some of Its Applications. *Cryptologia.*
9: 97-113.

28. Michener, J. 1987. The Use of Complete, Nonlinear, Block
Codes for Nonlinear, Noninvertible Mixing of Pseudorandom
Sequences. *Cryptologia.* 11: 108-111.

29. Michener, J. 1987. The Application of Key Dependent and
Variable Rotor Sets to Generalized Rotor Cryptographic Systems.
*Cryptologia.* 11: 166-171.

30. Michener, J. 1988. A Tool for Secret Key Cryptography.
*Dr. Dobb's Journal.* August. 50-55, 96.

31. Pearson, P. 1988. Cryptanalysis of the Ciarcia Circuit
Cellar Data Encryptor. *Cryptologia.* 12: 1-9.

32. Pfleeger, C. 1989. *Security in Computing.*
Englewood Cliffs, New Jersey: Prentice Hall.

33. Pless, V. 1977. Encryption Schemes for Computer
Confidentiality. *IEEE Transactions on Computers.*
C26: 1133-1136.

34. Reeds, J. 1977. "Cracking" a Random Number Generator.
*Cryptologia 1.* (also
[5, pp. 509-515]).

35. Retter, C. 1984. Cryptanalysis of a MacLaren-Marsaglia
System. *Cryptologia.* 8: 97-108. (Also see letters and
responses: *Cryptologia.* 8: 374-378).

36. Retter, C. 1985. A Key Search Attack on MacLaren-Marsaglia
Systems. *Cryptologia.* 9: 114-130.

37. Rubin, F. 1978. Computer Methods for Decrypting Random
Stream Ciphers. *Cryptologia 2.* (also
[5, pp. 493-508]).

38. Rubin, F. 1979. Decrypting a Stream Cipher Based on J-K
Flip-Flops. *IEEE Transactions on Computers.* C28: 483-487.
(also *Cryptologia* Vol. 5, and
[5, pp. 283-293]).

39. Rubin, F. 1987. Foiling an Exhaustive Key-Search Attack.
*Cryptologia.* 11: 102-107.

40. Sancho, J. 1987. Enumeration of Multivariable Decipherable
Boolean Functions. *Cryptologia.* 11: 172-180.

41. Shannon, C. 1949. Communication Theory of Secrecy Systems.
*Bell System Technical Journal.* 28: 656-715.

42. Siegenthaler, T. 1984. Correlation-Immunity of Nonlinear
Combining Functions for Cryptographic Applications. *IEEE
Transactions on Information Theory.* IT30: 776-780.

43. Siegenthaler, T. 1985. Decrypting a Class of Stream Ciphers
Using Ciphertext Only. *IEEE Transactions on Computers.*
C34: 81-85.

44. Siegenthaler, T. 1986. Design of Combiners to Prevent
Divide and Conquer Attacks. *Advances in Cryptology--CRYPTO
'85, Proceedings.* New York: Springer-Verlag. 273-279.

45. Sinkov, A. 1966. *Elementary Cryptanalysis: A
Mathematical Approach.* Washington, DC: The Mathematical
Association of America.

46. Stahl, F. 1973. A homophonic cipher for computational
cryptography. *National Computer Conference, 1973,
Proceedings.* 565-568.

47. Vernam, G. 1926. Cipher Printing Telegraph Systems.
*Transactions AIEE.* 45: 295-301.

Terry Ritter is a registered Professional Engineer, a member of IEEE and ACM, with a background in computer architecture, hardware, software, and now, library research. He has enjoyed spending the past few years being Blue Jean Software and Blue Jean Computer Engineering.