DAGGER is a set of ciphering routines which add serious information security to a larger overall program. DAGGER is available in "portable C" and 80x86 assembly language.
DAGGER is a unique example of a re-originating stream cipher, a design which benefits greatly from our patented Dynamic Substitution technology. DAGGER is specifically intended to provide very high performance along with significant strength, and low impact on the rest of the system.
DAGGER handles variable-length data and does not expand data at all, and so can cipher variable-length database records "in place." Since DAGGER can "re-originate" on each ciphering, it can handle LAN data frames or network packets at the data-transport level where they may arrive out-of-sequence. Of course, DAGGER can also be used for conventional application-level ciphering, and the exact same cipher can be used for high-performance disk ciphering as well.
As a trade-off for extreme speed, DAGGER is constructed in a way known to be vulnerable to defined-plaintext attack. Thus, DAGGER is most appropriate for situations where an Opponent cannot conduct a defined-plaintext attack, and many such situations exist. Appropriate situations include local storage protection and most forms of end-to-end session ciphering (since intermediate nodes see at most known-plaintext).
DAGGER would be appropriate in a star topology network using a different key for each user. In a more general network, DAGGER can be appropriate given centralized delivery of session keys. DAGGER is less appropriate for network hops which carry messages from many users over a long time without changing keys.
Any real cipher design must necessarily be a compromise. Ideally, a cipher could never be broken, would be infinitely fast, and would impose no requirements upon the rest of the system. This is clearly impossible. Therefore, we must be content with slightly more modest achievements.
To make a general design which does not impose on existing systems, various strange requirements are necessary. (These Constraints are referred to by number throughout the rest of this document.)

1. The cipher must handle variable-size data.
2. The cipher must not expand the data.
3. The cipher must use no other information, such as a message key.
4. The cipher must support deciphering in random order.
5. The cipher must expect known-plaintext attack.
Within these very significant Constraints, DAGGER was designed to provide a good tradeoff between strength and speed.
Instead of using a serious cipher, it is always possible to simply confuse the data to hide it from casual snooping.
One way to hide data from casual snooping is to add some constant value to each character or byte value in a file. This can be very fast, and is probably sufficient to protect a file from nine out of ten people who might want to look at it.
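Such "confusion" is almost trivial to implement. The sketch below adds a constant to every byte, wrapping mod 256; the function names are ours, purely for illustration:

```c
#include <stddef.h>

/* Simple confusion: add a constant to every byte (mod 256).
   Subtracting the same constant recovers the original data.
   Illustrative only -- this is NOT a serious cipher. */
void confuse(unsigned char *buf, size_t len, unsigned char k)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (unsigned char)(buf[i] + k);
}

void unconfuse(unsigned char *buf, size_t len, unsigned char k)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (unsigned char)(buf[i] - k);
}
```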
Unfortunately, even if only one out of ten could penetrate the confusion, this could be a lot of people. In certain groups of ordinary law-abiding people (such as programmers), those who could reverse the confusion might even be a majority. Worse, the task of writing a program to reverse the confusion might be an hour's activity, and with such a program anyone could unconfuse files at will, in almost no time at all.
Worse, the nine out of ten who could not penetrate the confusion are almost certainly the most innocent and the least threat. The people we really have to worry about are those few who actively prey on information, and who can cause significant damage and liability. Certainly no data-theft criminal would have any problem at all penetrating simple file confusion.
Simple confusion is certainly fast, but not widely used. This is because simple confusion is effective only against those who pose the least threat, and is totally ineffective against any real attack. Thus, simple file confusion cannot be regarded as a prudent way to protect someone else's data.
Within cryptography there are codes and ciphers. Codes deal with information at the level of words or phrases, and so are difficult to apply to arbitrary binary data. In contrast, ciphers deal with individual symbols (for example, bytes) or groups of symbols.
There are basically two different types of cipher: block ciphers and stream ciphers. Conventional block ciphers accumulate data in a fixed-size block and encipher the whole block as a single unit. In contrast, stream ciphers encipher data symbol-by-symbol (typically byte-by-byte).
The two types of cipher are distinct: To be effective, a block cipher generally must deal with blocks which are large enough to prevent analysis, typically 8 bytes or larger. Thus, a stream cipher which works on bytes really cannot be understood as a small block cipher.
Because block ciphers work on fixed-size blocks, they have problems handling continuously-variable amounts of data. As long as a full block can be filled for enciphering everything works well, but if there is some data left over -- less than a full block -- that data must be expanded to a whole block for enciphering. (Some designs do switch to stream-cipher mode for the last block only.) Variable-sized data is required by Constraint 1, and data expansion is disallowed by Constraint 2, so conventional block ciphers are not usable under these Constraints.
The conventional stream cipher uses some form of addition to combine data with confusion values generated by some pseudo-random mechanism. Normally, a fairly complex pseudo-random generator can produce a sequence which is hard to predict, so the resulting cipher can be strong. But we cannot expect to operate a cipher in just any arbitrary way and yet still retain all the strength of which a cipher is capable. Most ciphers have weaknesses, and so are used in ways which hide or minimize those weaknesses. A conventional stream cipher consists of a confusion generator and a combiner: Both parts are potentially vulnerable.
The usual stream cipher combiner is the simple exclusive-OR function (addition mod 2). Unfortunately, an exclusive-OR combiner is weak because it is so easily reversed: If someone can come up with some of the original plaintext, and then find the corresponding ciphertext, they can easily recover the original confusion sequence. Given some amount of the confusion sequence and a knowledge of the design of the confusion generator, an opponent may be able to fully-define the state of the confusion generator, and so "break" the cipher. This is the "known plaintext" or "probable plaintext" attack, and is very effective as well as being very well known.
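This reversal is easy to demonstrate. In the sketch below, a single hypothetical routine serves as cipher, decipherer, and attack, because exclusive-OR is its own inverse:

```c
#include <stddef.h>

/* Exclusive-OR combiner: out = in ^ conf, byte by byte.  Because XOR
   is self-inverse, the same routine enciphers, deciphers, and -- given
   known plaintext -- recovers the confusion sequence itself. */
void xor_combine(const unsigned char *in, const unsigned char *conf,
                 unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ conf[i];
}
```

Given some plaintext and the matching ciphertext, `xor_combine(plain, cipher, recovered, len)` yields the confusion sequence exactly; that is the entire known-plaintext attack on this combiner.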
Normally, a stream cipher is arranged to "never" re-use the confusion sequence. This can be accomplished by using a confusion generator with an astronomical sequence length, and by using a "message key" so that each and every enciphering will start at a different point in the sequence. Unfortunately, Constraints 2 (no data expansion) and 3 (no other information) prevent the use of message keys. Worse, Constraint 4 allows deciphering in random order, which apparently means that each and every ciphering must start at the same point in the confusion sequence. When a conventional stream cipher repeatedly uses the same part of its confusion sequence, knowledge of that sequence (from whatever source) will be sufficient to decipher subsequent messages. Thus, because of the given Constraints, the previously formidable stream cipher has become almost useless.
Re-using the same confusion sequence is a very serious problem for stream ciphers. In fact, this problem is so serious that, for most authors, it is almost unthinkable. Consider the comments made by Davies and Price in their 1984 text on computer network security:
It is generally recommended that a running key be used only once for encipherment, because certain cryptanalytic attacks can take advantage of repeated use. [3:25]
...it is important to avoid using the same key for more than one message.... Use of the same key in encipherment for a number of plaintexts allows attacks either by a Kerckhoffs' superimposition or by elimination of the key by an exclusive-OR combination of the two ciphertexts. [3:40]

Similar comments come from Meyer and Matyas, in their 1982 text on cryptography:
The stream cipher must not start from the initial conditions in a predictable way, and thereby regenerate the same cryptographic bit-stream at each iteration of the algorithm. In other words, the stream cipher must not reoriginate. [6:56]

And also from Beker and Piper (in their 1982 text on stream ciphers), in the context of a confusion sequence produced by a linear feedback shift register (LFSR):
If a modulo 2 adder is used to obtain the ciphertext from the plaintext and the shift register sequence, then any one of the three sequences is the modulo 2 sum of the other two. So if the interceptor knows the plaintext equivalent of m consecutive positions of the ciphertext then he knows m consecutive elements of the shift register sequence. ...if m = 2n then the interceptor can easily work out the entire key. [1:206]
But we are forced by Constraints 2 (no data expansion), 3 (no other information) and 4 (decipher in random order) to have our cipher at least start out in the same state. Thus, it will take a very special design to handle this situation.
Because one of the main weaknesses of a conventional stream cipher is its simple and easily reversed exclusive-OR combiner, we might think to simply use a better combiner. The best, and perhaps the only serious alternative is Dynamic Substitution [4,5].
In the typical Dynamic Substitution combiner, each character is translated through a substitution table. Then, after each translation, the just-used entry is exchanged with some entry selected by a confusion value; this changes the "state" of the translation table.
Thus, whenever a particular plaintext character is translated, the translation for that character changes, and the more often a character appears, the more often the translation is changed.
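One combining step might be sketched as follows, assuming a 256-entry permutation table `tbl[]`; the function name is ours, and this is only a sketch of the mechanism described above, not the patented implementation itself:

```c
/* One Dynamic Substitution enciphering step: translate the plaintext
   byte through tbl[], then exchange the just-used entry with the
   entry selected by the confusion byte conf, changing the table state. */
unsigned char dynsub_encipher(unsigned char tbl[256],
                              unsigned char plain,
                              unsigned char conf)
{
    unsigned char ct = tbl[plain];   /* translate through the table  */
    tbl[plain] = tbl[conf];          /* exchange the just-used entry */
    tbl[conf]  = ct;                 /* with the one chosen by conf  */
    return ct;
}
```

Note that enciphering the same plaintext byte twice in a row generally produces two different ciphertext bytes, which is exactly the property claimed above.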
Dynamic Substitution gives us a non-linear transformation which changes through time. The transformation is balanced, so that it is difficult for an attacker to separate plaintext data from confusion. And Dynamic Substitution is not directly subject to a simple known-plaintext attack, or at least not in the same trivial way as exclusive-OR.
But Dynamic Substitution is a tool, and not a panacea. Simply using an improved combiner in a situation which re-uses the confusion sequence is not likely to produce a very strong cipher. For example, someone with access to the system could encipher a lot of messages, and, simply by using each possible character, develop the original state of the substitution table. This could be done character-after-character, and thus establish every change in the table. The changes in the table would implicitly reveal a fixed confusion sequence. Then, given the original table and the known confusion sequence, the cipher is broken. Thus, even a Dynamic Substitution stream cipher is vulnerable to Constraint 5 (known-plaintext attack), if the same confusion sequence is used repeatedly.
Because of Constraints 1 (variable-size data) and 2 (no data expansion), a conventional block cipher is not acceptable. Because of Constraints 3 (no other information) and 4 (decipher in random order) we apparently must re-use the confusion sequence, which should make a conventional stream cipher unacceptable. Fortunately, we can improve on the conventional stream cipher.
If we are going to re-use the same confusion sequence (for every message ciphered under a single key), there clearly is little reason to re-generate that sequence every time. We might as well save it in a table and just step through it. This can be very fast. It also opens up other alternatives.
Once we have the confusion sequence in a table, we can modify access to that table. Instead of just running through the same sequence every time, we can step through the table in random steps. For example (assuming a conventional stream cipher), we might think to add the ciphertext value to a current table pointer to select the next ciphertext value. But the ciphertext value is obviously exposed to any opponent, and this would reveal the distance between each confusion value used by the cipher. Since we must always start out in the same place, the opponent could know the input data, the corresponding ciphertext, and the distances between table values. One would expect such a system to fall fairly quickly.
To prevent the table offset value from being exposed, we might think to use more than one combiner. Of course, having multiple exclusive-OR combiners in sequence is not going to be much help, because they can be treated as a single exclusive-OR under a "known plaintext" attack. So we are pretty well forced into using at least one nonlinear combiner, and that pretty much means Dynamic Substitution.
By using two combiners in sequence, first Dynamic Substitution and then exclusive-OR, we can take the value between two combiners as the RNG table offset value. Two bytes from the table will be required for every byte enciphered: One byte for the Dynamic Substitution combiner, and another for the exclusive-OR combiner. As long as the confusion value used by the exclusive-OR combiner is unknown, so is the RNG table offset value. And "known plaintext" cannot be used to attack the exclusive-OR, because the value out of the Dynamic Substitution combiner (and into the exclusive-OR) will not be known.
In the past, the use of a table to generate the confusion sequence for a stream cipher would have been considered laughably weak. But the real weakness of such a design depends upon the ease with which the table can be reconstructed by a cryptanalyst. Previous stream ciphers could only use a single trivial linear combiner, thus allowing a "known plaintext" attack to completely reveal the original confusion sequence. But we can use a Dynamic Substitution combiner.
While it is not possible for Dynamic Substitution to have a "strength" independent of the rest of a cipher, it is possible to compare particular attacks against particular designs. When we use an RNG table, the "known plaintext" attack is trivial and absolutely effective against an exclusive-OR combiner. The exact same attack is not sufficient when the combiner is Dynamic Substitution.
The plaintext data in the array buf[] are sent one-by-one through Dynamic Substitution and exclusive-OR and the resulting ciphertext is placed back into buf[]. Each combiner uses a byte from the RNG table as selected by index value RNGCUR. The value produced by the Dynamic Substitution combiner is added to RNGCUR after each byte is ciphered. Note that Dynamic Substitution operates directly upon plaintext data; this allows the dynamic combiner to respond to statistical variations in the plaintext.
The ciphertext data in the array buf[] are sent one-by-one through exclusive-OR and then Inverse Dynamic Substitution, and the resulting plaintext is placed back into buf[]. Each combiner uses exactly the same element from RandAR[] as was used during enciphering; this happens because both modes start at the same place in the RNG table, and update RNGCUR with the same intermediate value.
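The two loops might be sketched as follows. The names buf[], RandAR, and RNGCUR come from the text; the 4K table size, the split of each RNG word into an upper byte (driving the substitution exchange) and a lower byte (feeding the exclusive-OR), and every other detail here are our assumptions, not the released DAGGER code:

```c
#include <stddef.h>

#define RNG_SIZE 4096                     /* assumed: 4K words */

typedef struct {
    unsigned short RandAR[RNG_SIZE];      /* jitterized RNG table    */
    unsigned char  fwd[256], inv[256];    /* working dynsub tables   */
    unsigned int   RNGCUR;                /* current RNG table index */
} DRT;

/* Encipher buf[] in place: Dynamic Substitution, then exclusive-OR.
   The hidden middle value (mid) steps RNGCUR for the next byte. */
void dagger_encipher(DRT *d, unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        unsigned short r  = d->RandAR[d->RNGCUR % RNG_SIZE];
        unsigned char  hi = (unsigned char)(r >> 8), lo = (unsigned char)r;
        unsigned char  pt = buf[i];
        unsigned char  mid = d->fwd[pt];          /* dynamic substitution */
        unsigned char  tmp = d->fwd[hi];
        d->fwd[hi] = mid;  d->fwd[pt]  = tmp;     /* exchange two entries */
        d->inv[mid] = hi;  d->inv[tmp] = pt;      /* keep inverse in step */
        buf[i] = mid ^ lo;                        /* exclusive-OR stage   */
        d->RNGCUR += mid;                         /* hidden table step    */
    }
}

/* Decipher buf[] in place: exclusive-OR, then inverse substitution,
   making the same table exchange and the same RNGCUR update. */
void dagger_decipher(DRT *d, unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        unsigned short r  = d->RandAR[d->RNGCUR % RNG_SIZE];
        unsigned char  hi = (unsigned char)(r >> 8), lo = (unsigned char)r;
        unsigned char  mid = buf[i] ^ lo;         /* undo exclusive-OR    */
        unsigned char  pt  = d->inv[mid];         /* inverse substitution */
        unsigned char  tmp = d->fwd[hi];
        d->fwd[hi] = mid;  d->fwd[pt]  = tmp;     /* same exchange as enc */
        d->inv[mid] = hi;  d->inv[tmp] = pt;
        buf[i] = pt;
        d->RNGCUR += mid;                         /* same hidden step     */
    }
}
```

Because both modes begin from the same table state and advance RNGCUR by the same hidden middle value, deciphering tracks enciphering exactly, and the cipher "re-originates" simply by resetting the working tables and RNGCUR.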
Perhaps the fundamental basis for security in a cryptosystem is the expectation that each different user key will produce a different -- arbitrary -- enciphering. The original plaintext hides like a needle in a haystack of other keys. But if the key is for some reason obvious or apparent, we can expect an opponent to try that first, and so penetrate the cipher with less than the expected effort. At a minimum, everything depends upon having highly unusual or arbitrary keys.
Clearly, a textual key phrase will not be arbitrary, in the sense that some words follow others preferentially and some letters do likewise. Thus, it is important to process or "randomize" the key to some apparently arbitrary result value. Of course, such processing cannot compensate for a fundamentally obvious key, so we want to see long, strange phrases. Text keys should contain some numbers, and unusual punctuation, capitalization and spacing, to make the phrase different from anything ever placed on paper.
DAGGER uses multiple degree-31 CRC computations (using different polynomials) for randomizing the key. While this does not increase the "uncertainty" in a key, it does distribute the most likely keys arbitrarily among all possible cipherings. If we assume that normal text may have an uncertainty of perhaps 2 bits per byte, and that 80 bits is enough, key phrases of 40 characters should be "long enough."
The randomizing Hash scans the key array once for each required hash word, using a different CRC polynomial each time. The CRC results are saved in a word array for subsequent processing.
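The scan might look like the sketch below; the 31-bit register matches the degree-31 CRCs mentioned above, but the polynomial values shown are placeholders, not DAGGER's actual polynomials:

```c
#include <stddef.h>

/* One right-shifting CRC scan over the whole key, masked to a 31-bit
   register.  hash_key() runs the scan once per result word, with a
   different polynomial each time.  (Illustrative polynomials only.) */
unsigned long crc_scan(const unsigned char *key, size_t len,
                       unsigned long poly)
{
    unsigned long crc = 0x7FFFFFFFUL;            /* nonzero start state */
    for (size_t i = 0; i < len; i++) {
        crc ^= key[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1UL) ? (crc >> 1) ^ poly : crc >> 1;
        crc &= 0x7FFFFFFFUL;                     /* keep to 31 bits     */
    }
    return crc;
}

/* Produce nwords hash words from one key, one polynomial per word. */
void hash_key(const unsigned char *key, size_t len,
              const unsigned long *polys, unsigned long *out, int nwords)
{
    for (int i = 0; i < nwords; i++)
        out[i] = crc_scan(key, len, polys[i]);
}
```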
DAGGER uses exactly the same CRC mechanism to process any type of key, so a key server could deliver different sorts of values at different times. The ideal key would be some sort of "really random" or "physically random" binary value, but good, strong keys can be developed from an accumulation of second-by-second details (such as LAN usage), using CRC or a cryptographic hash. On the other hand, simple reliance on time-of-day -- even if cryptographically hashed -- would have to be considered insecure.
Some government agencies seem interested in having ciphers which are used for international communication be weak enough to be broken by an agency with world-class computation resources. To help these people, DAGGER can be weakened to almost no strength at all, simply by selecting a low strength-limit value during start-up.
The processed user key is always placed at the start of the RNG table, and will be expanded to fill the table. But before expansion, we can limit the number of unique bytes, and in this way vary the strength of the cipher. Since only a limited number of bytes are used to produce the expanded data, there can be only that much uniqueness, no matter how many bytes were previously in place. This means that the total number of keys cannot be larger than 256**n, for a strength-limit value of n bytes.
For example, a strength-limit value of five bytes would use only the first 40 bits of randomized key material, no matter how much key was actually available; this would reduce the number of different cipherings to 2**40. Although this would still be a fairly difficult brute-force problem for an individual, it might be an acceptable value for a government agency with huge computing resources.
The Strength Limit function takes randomized words and repeatedly copies the first selected number of bytes to expand the data to a total of nine words (144 bits). Thus, 144 bits is the DAGGER "design strength."
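A sketch of that limiting step, assuming the repetition is a simple cyclic copy of the first n bytes (the function name and the cyclic rule are ours):

```c
#define KEYFIELD 18   /* nine 16-bit words = 144 bits */

/* Keep only the first n bytes of randomized key material and repeat
   them cyclically to fill the full nine-word field; this limits the
   total keyspace to 256**n. */
void strength_limit(unsigned char key[KEYFIELD], int n)
{
    for (int i = n; i < KEYFIELD; i++)
        key[i] = key[i % n];
}
```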
An interesting consequence of strength limitation is that the key-distribution mechanism could also distribute the strength-limit value. Thus, as time passes and government approval allows, the DAGGER cipher can be strengthened, in the field, without software change. Simply by sending a larger strength-limit value from the key server, the cipher could automatically be made stronger. No other changes would be required.
Would the strength parameter be liable to abuse? Note that many modern cipher systems are implemented in software, and any particular system remains as it is simply because programmers refrain from changing it. An equivalent strength value could, of course, be "hardwired" into the code. But the "users" of this package are not users in the normal sense: The users of this package are programmers. And programmers have access even to supposedly "hardwired" elements. Any programmer with access who wants to change the strength of a software cipher can do it. They can even do this without altering the cipher code proper simply by double-enciphering.
It is probably impossible to build a software cipher which could absolutely prevent double-enciphering; in practice, a simple agreement may suffice. And a similar agreement could be made with respect to the strength value.
So, if "hard wiring" a particular strength into the code will not prevent misuse, why ruin the advantage of flexibility for the vast majority of users who will not misuse the code?
Of course, the ultimate strength of DAGGER cannot increase beyond the effort required for some effective attack which is not based on "brute force." Since the largest DAGGER key is nine words, DAGGER's strength could not exceed 144 bits. However, just 80 bits is generally thought secure against "brute force," although this obviously depends on the effort involved in generating the system state for each possible key. Note that DAGGER requires that the entire RNG table be developed -- and jitterized -- before the cipher can function.
After a key is randomized and strength-limited, the resulting data must be expanded to fill the whole RNG table. This is done using Additive RNG designs of increasing polynomial degree.
One of the most reliable ways of generating random-like sequences based on an initial value or "seed" is the Linear Feedback Shift Register (LFSR). More recent mechanisms work on bytes or words in parallel (the General Feedback Shift Register or GFSR), and use integer addition instead of the mod-2 add or exclusive-OR (the Additive RNG) [8].
As described earlier, the DAGGER strength-limitation stage expands the randomized key to 9 words (18 bytes). From this point, DAGGER uses a sequence of Additive RNG [4:27] designs with different mod 2 primitive feedback polynomials [3:62-63; 1:359] of increasing size. Each takes two or more of the defined values, adds them, places the result at the next undefined position, then moves ahead one word. The last, a degree-1279 trinomial, uses two values from the preceding 1279 words to produce each result value. This happens repeatedly until the RNG array is filled.
The Expand block takes the nine-word array produced by Strength Limit and expands it to fill the array. Each expansion polynomial uses two or more already-defined word values to produce a resulting word value until the array has been filled. The Expand output always leads in advance of the input locations.
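A single expansion stage can be sketched as an additive lagged-Fibonacci step; the lags (7, 3) used in the test below are small illustrative stand-ins for the real polynomials, up through the degree-1279 trinomial:

```c
#include <stddef.h>

/* Additive RNG expansion: each undefined word becomes the integer sum
   (mod 2**16) of two already-defined words at the polynomial's lags,
   and the fill point moves ahead one word at a time. */
void expand_additive(unsigned short *ar, size_t defined, size_t total,
                     size_t lag1, size_t lag2)
{
    for (size_t i = defined; i < total; i++)
        ar[i] = (unsigned short)(ar[i - lag1] + ar[i - lag2]);
}
```

The output position always leads the input positions, so each stage can run in place over the same array.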
Each of the Additive RNG's is a linear mechanism; we would expect that if a cryptanalyst could somehow find as many elements as the polynomial degree, they could solve and reproduce the sequence. Of course, the degree-1279 polynomial implies that at least 1279 two-byte words would be required to initiate this part of an attack, and both halves of each word are used to encipher each single character, making identification difficult. But to further avoid such an attack, DAGGER uses a relatively-fast "jitterizer" mechanism [8:111], which steps through the table and deletes various numbers of bytes (the rest of the table is copied down) at various times. The jitterizer also randomizes each "take" group of values by exclusive-ORing one of the deleted values to each element. The whole process is fairly quick, and should prevent easy attacks on the table as a linear sequence.
On average, about 10% of the data are deleted from the table. This means that we must create more data originally, just in order to guarantee enough jitterized material for RNG table operation.
The Jitterize function takes the filled word array produced by Expand and deletes segments of the sequence at random by simply not copying sections of the sequence. Note that the Jitterize output will always be at or behind the current input location.
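A rough sketch of such a jitterizer follows. The take/drop group sizes here are derived from the data itself purely for illustration; the text does not specify the real selection rule:

```c
#include <stddef.h>

/* Jitterizer sketch: copy "take" groups of bytes down over deleted
   "drop" gaps, XORing one of the deleted bytes into each element of
   the group.  Group sizing is illustrative, not the DAGGER rule. */
size_t jitterize(unsigned char *ar, size_t len)
{
    size_t in = 0, out = 0;
    while (in + 1 < len) {
        size_t take = 4 + (ar[in] & 7);          /* 4..11 bytes kept   */
        size_t drop = 1 + (ar[in + 1] & 3);      /* 1..4 bytes deleted */
        unsigned char mask;
        if (in + take + drop > len) break;
        mask = ar[in + take];                    /* a deleted value    */
        for (size_t i = 0; i < take; i++)
            ar[out++] = ar[in + i] ^ mask;       /* randomize group    */
        in += take + drop;
    }
    return out;                                  /* jitterized length  */
}
```

The output index never passes the input index, so the whole pass also runs in place.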
We can relate the resulting array to the fields used in the software mechanism:
PreAR, RandAR, and ShufAR are the various areas of the DRT structure defined to hold DAGGER state. This structure is about 13KB in size, with the RandAR itself consuming 8KB. The deg 9 polynomial fills only a tiny portion of the array to the left of "deg 89"; deg 89 fills into PreAR, and deg 1279 fills through the end of the structure. All of this happens before Jitterize, of course. Jitterizing deletes some of the table data and moves the rest down; something like the last 1KB of the DRT structure (about half of the ShufAR) will remain unchanged. The jitterized part of the ShufAR data is used to shuffle the Dynamic Substitution array located in one of the boxes at the start of the DRT structure.
The four small boxes on the left of the array represent 256-byte substitution boxes. Two of these are the forward original and inverse original which are used to reset the system for a new data frame. The other two boxes are the working forward and working inverse tables which change during processing.
Designing a cryptosystem has quite a lot in common with the design of a currency: As a currency designer tries to prevent counterfeiting, a cryptographer tries to prevent cryptanalysis. But a counterfeiter at least produces positive evidence of any design failure (if you can find it); in contrast, a cryptanalyst generally leaves no technical tracks. Because of the lack of feedback, it is hard to know which of the parties (cryptographer or cryptanalyst) is winning. But various sorts of well-known attacks have proven effective against other cryptosystems, so it only makes sense to think about how effective these attacks would be against DAGGER.
Most attacks consist of the opponent trying to develop the complete internal state of a cipher system in order to decipher other unknown messages enciphered under the same key.
We also want to consider two common alternatives: a conventional stream cipher, and also a cipher like DAGGER but without the exclusive-OR second combiner.
Perhaps the most obvious attack would be to aim at the use of the relatively small static table as the source of the confusion sequence.
The main problem for the opponent is the fact that Dynamic Substitution itself contains substantial state. Unless the opponent can develop or interpret that state, it would seem to be almost impossible to develop the upper byte of the RNG table values which is used to permute the nonlinear combiner. And only if both bytes of each RNG table entry are found can the opponent hope to break arbitrary messages.
Another problem for the opponent is that the value between Dynamic Substitution and exclusive-OR is unknown. This is a problem for two reasons: First, this value specifies the distance to the next used RNG table entry. If the opponent is using statistical techniques to develop information about a particular table entry, it would seem to be necessary to identify when that entry comes up again. Next, knowing the middle value would seem to be a necessary step on the way to finding the lower byte in a RNG table entry.
Yet another problem is the fact that the table values themselves, while at one time related in a linear way, have been processed by the jitterizer to no longer be linearly related. This means that, even if a few entries are developed, they cannot be used to extrapolate other entries in the table.
The main form of brute-force attack consists of trying all possible keys in the hope of eventually finding one which deciphers the message. The designer should make the key-space large enough so this could be a long, hard search. Normally, an 80-bit key is large enough. Thus, ideally, we would like to see at least 80 bits of "uniqueness" in each of our keys. Since language is amazingly redundant, text keys must be much longer than equivalent random binary values. For best strength, we would normally like to see the strength-limit value as large as possible, and at least 10 bytes. If the strength-limit value is less than 10 bytes, opponents with substantial resources may be able to make brute-force an effective attack. This is not a flaw; it is the reason for having a strength-limit value.
A "ciphertext only" attack consists of having only the ciphertext and the design, and no other information. Raw ciphertext is one of the hardest ways to break a cipher, and yet is probably the way most of us mistakenly imagine that ciphers are broken. (Brute-force is also a ciphertext-only attack, but is considered separately because there is nothing particularly clever about it.)
It is not clear that there is any effective attack on DAGGER using ciphertext only.
On the other hand, consider the conventional stream cipher: in our applications such a cipher necessarily must re-use its confusion sequence. Because it has only a simple exclusive-OR combiner, any two ciphertext messages (enciphered under the same key) can be exclusive-ORed to "cancel out" the confusion sequence. The result is a far simpler cipher: two plaintext messages combined by exclusive-OR. By doing this for a number of different messages (and combinations of those messages), character coincidences, especially space characters, can be located. Then other characters in those positions can be revealed, and we can fill in the blanks, just like a newspaper recreational cipher.
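The cancellation is easy to demonstrate with a sketch (the routine name is ours):

```c
#include <stddef.h>

/* XOR two equal-length byte arrays.  Applied to two ciphertexts made
   with the same confusion sequence, the sequence cancels out, leaving
   the XOR of the two plaintexts -- a classical, solvable problem. */
void xor_bytes(const unsigned char *a, const unsigned char *b,
               unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = a[i] ^ b[i];
}
```

Wherever the two plaintexts have the same character in the same position (for example, a space in each), the XOR of the ciphertexts is zero, which is exactly the coincidence an analyst looks for.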
The "known-plaintext" attack consists of "obtaining" or guessing some amount of plaintext and the corresponding ciphertext. Getting some of the plaintext may not be too hard to do, since most users occasionally cipher otherwise innocuous messages, and then see no need to dispose of those messages in a secure manner. Alternately, it may be possible to get the sender to encipher a particular message for some reason, perhaps a particular contract, for example. Graphic logos and signatures contain a huge amount of data which can be fairly easily obtained. And even if the full text of a message is not known, the messages may follow a particular format or pattern which can be anticipated, and some of the message words or phrasing may be guessed.
The known-plaintext attack is the major attack on conventional stream ciphers because it can be devastatingly effective. Given both the plaintext and the ciphertext for a given message, the conventional exclusive-OR combiner immediately reveals the confusion sequence. If that sequence were produced by a Linear Feedback Shift Register (LFSR), a relatively small amount of the sequence would suffice to define the LFSR and break the rest of the sequence.
Fortunately, DAGGER is not limited to an exclusive-OR combiner, nor does it directly use an LFSR confusion sequence generator. In DAGGER, the Dynamic Substitution combiner produces some unknown middle value which the exclusive-OR processes into ciphertext. To find the confusion sequence one must assume some value from the nonlinear combiner, but any value is equally possible. Similarly, we could gain information about the nonlinear combiner if we could assume the confusion sequence value into the exclusive-OR, but, again, any value is possible.
Note that DAGGER uses two confusion bytes for every data byte enciphered. Certainly a single byte of ciphertext cannot possibly identify both confusion bytes, no matter what technique is used; there is simply not enough information contained in one byte to define two bytes. Therefore, any effective known-plaintext technique would likely have to be statistical or iterative in nature, a situation substantially different from the classical approach.
Now consider the conventional stream cipher: In our applications such a cipher necessarily must re-use its confusion sequence, and has only a simple exclusive-OR combiner. With one suitably-long known-plaintext message, the confusion sequence can be immediately recovered simply by exclusive-ORing the plaintext and ciphertext. Then, any other message (under the same key) can be deciphered simply by exclusive-ORing that ciphertext with the now-known confusion sequence. This is an immediate and trivial operation, which is a complete "break" for this type of cipher, when used in our applications.
Now consider DAGGER without exclusive-OR. Clearly, if the initial Dynamic Substitution table and the RNG table were known, the cipher would be broken. By examining many different messages, most or all the initial Dynamic Substitution table state (or the state after some fixed header) can be absolutely known. This leaves the RNG table.
Now we work on defining the initial part of the RNG table: Different messages will use the Dynamic Substitution table entries in different sequences, but the first time each entry is used, it should generally produce the previously-defined result. If it does not, this implies that the entry has been exchanged; this both reveals a hidden confusion value and narrows down, to a few locations, where that value must have occurred.
Since the table-step value is not hidden in this cipher, but is just the ciphertext value itself, we know immediately when a particular table element is re-used, and this must be a big help in reconstructing the table. In an average of only 32 characters, the RNG index will wrap around the 4k table, and then we have an opportunity to work on the initial table area again. Possibly, with enough messages, we may be able to mostly define that area. Then, given known RNG table values, we could extend the known Dynamic Substitution state beyond the first character, which would be the next step in defining the next area of the RNG table.
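The 32-character figure is simple arithmetic: the RNG index steps by the ciphertext byte value, a uniform byte averages 127.5, and so a 4096-entry table wraps about every 4096 / 127.5 ≈ 32 enciphered bytes. As a sketch:

```c
/* Average enciphered bytes per wrap of a 4096-entry table, when
   the index advances by a uniform byte value each step. */
int avg_bytes_per_wrap(void) {
    double mean_step = 255.0 / 2.0;     /* mean of a uniform byte: 127.5 */
    return (int)(4096.0 / mean_step);   /* about 32 */
}
```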
It is not currently known how much effort would be involved in a complete known-plaintext break of DAGGER without exclusive-OR, but the above scenario does not look good.
A "defined-plaintext" attack means that the opponent gets to specify messages which are enciphered under the unknown key and returned to the opponent. We assume that it is possible for the opponent to cipher huge numbers of messages in an attempt to get some information about the internal cipher state.
Consider a "defined plaintext" attack on DAGGER: Somehow, the opponent gets to encipher arbitrary messages under the target key. We assume that the key does not change during the attack.
Note that the initial Dynamic Substitution state and next Dynamic Substitution state are related in that there are probably (and at most) two different entries, the result of a single exchange. So there are at least 254 entries in each state which are exactly the same, and related by exclusive-OR with the same single value. This value is the exclusive-OR "difference" between the initial and next exclusive-OR value, and should be easily developed.
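The single-exchange relation is easy to make concrete: after one exchange, two successive 256-entry table states differ in at most two positions, and those two positions simply hold each other's former values. This sketch (ours, not DAGGER code) counts the differing positions:

```c
#include <stdint.h>

/* Count the positions at which two 256-entry table states differ.
   After a single exchange, the count is at most 2, leaving at
   least 254 entries exactly the same. */
int count_diffs(const uint8_t a[256], const uint8_t b[256]) {
    int n = 0;
    for (int i = 0; i < 256; i++)
        if (a[i] != b[i])
            n++;
    return n;
}
```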
Since we assume a defined plaintext attack, the input values are known absolutely; the RNG table values are supposedly hidden. But because we can find the two different (exchanged) elements in the Dynamic Substitution table, and know which element was selected by the input value in the previous step, we now absolutely know the "hidden" confusion value from the RNG table. Of course, we still do not know the RNG table step value, which is hidden by the exclusive-OR.
If we work interactively, it is not always necessary to scan the entire Dynamic Substitution table each time to find the single unknown exchange element. Sometimes this will be necessary, but other times we will find it quickly, so we can assume an average somewhat more than 128 tries. But if the attack is not interactive, all 256 possibilities must be checked.
In addition to the Dynamic Substitution table, we also know the exclusive-OR difference between the two states. Thus, we know the high byte of one RNG table entry, and also the low byte, in the sense that all our known low bytes will be exclusive-ORed with some particular other value. Since all low-bytes will be exclusive-ORed with the same "other value," there are only 256 possibilities. This also gives us the distance between entries, in the same exclusive-OR sense.
Suppose we are lucky and only have to do the identification exactly 4k times to find each entry in the RNG table; this means that we incur the cost of at least 4k x 128 = 512k messages. (Then we will finally have to try an average of 128 possible exclusive-OR offsets to find the correct low-byte values.)
But the table is still unordered. The absolute Dynamic Substitution output is not known, and this is the RNG element increment. We do know that each next element is somewhere in the array from the last element to 255 elements further on. Thus, we can try to group each element, in the sense that from a particular known element, another known element is at most 255 elements away. Probably if we go through the table many times (as a guess, 128 times), we may be able to generally group and order the entire table. This means 65M messages.
Under an interactive defined-plaintext attack, something like 512k specially-defined messages could produce a complete break. Of course, something less than that number could produce a partial RNG table, so that bits and pieces of plaintext could be recovered (which is probably not particularly useful unless the RNG table is almost completely defined).
There is no need to further consider the conventional stream cipher, which was completely broken under the far easier known-plaintext attack.
Now consider a similar attack on DAGGER without exclusive-OR. In this case, the Dynamic Substitution output is the ciphertext output and also the RNG table increment. This would support the easy and straightforward reconstruction of the RNG table, as well as identification of element re-use. This would avoid the need for any technique to order the RNG table, but would still apparently take about 4k scans (512k messages).
A better alternative would seem to exist in defining messages which are particularly useful in extending the logical sort of attack described under known-plaintext. This should take substantially less material.
Note that a "defined plaintext" attack requires the target key to encipher a huge number of specially-constructed messages from the opponent (and the ability to identify the resulting ciphertext). Such an attack generally could not be applied to data storage ciphering, because the target key would not be present during the attack.
For LAN ciphering, it is important to recognize a possible security risk if the system allows remote nodes to submit huge numbers of arbitrary messages for enciphering under a single local key. One possibility would be to prevent any such ciphering absolutely; another would be to limit the number of such messages enciphered under a single key. The universal use of short-term session keys (no week-long sessions!) would also help to minimize this risk.
If the opponent does manage a "defined plaintext break," then she can read any messages enciphered under the broken key. Messages enciphered under another key are still hidden; the attack must be repeated for each new target key. To reduce the amount of data enciphered under a single key, we should use more keys, each for a shorter period. A secure LAN would generally use a "key server" to provide new random keys for each session.
The potential threat of a defined-plaintext attack can be essentially eliminated through the use of message keys, which are completely-random values transmitted with each message. However, message keys would violate Constraint 2 (no data expansion) or 3 (no other information) in our proposed applications. In a sense then, it is these particular requirements which open windows of vulnerability in the cipher. It is instructive to compare the vulnerability of the DAGGER design to other alternatives under the exact same Constraints.
Note that this cipher weakness applies exclusively to a "defined plaintext" attack. The approach does not apply to arbitrary messages, even if millions of messages of "known plaintext" (associated plaintext and ciphertext) are available. "Defined plaintext" requires a particular type of strongly-related messages as an "attack suite."
In a "defined-key" attack, the opponent chooses keys for their particular effects. If these effects are in any way "linear" and the key bits independent, it may be possible to develop the correct key bit-by-bit, and so break the cipher.
Because in DAGGER the keys are processed by multiple CRC polynomials, virtually every resulting bit depends upon each and every bit of the external key. Thus, it seems rather unlikely that any defined-key attack could succeed.
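As a rough illustration only (the actual DAGGER key schedule and its polynomials are not reproduced here), a bitwise CRC run over the key mixes every key bit into every register bit, and flipping any single key bit is guaranteed to change the result:

```c
#include <stddef.h>
#include <stdint.h>

/* Process a key through a bitwise CRC with the given reversed
   polynomial.  Running the same key through several different
   polynomials would yield several such mixed register values. */
uint32_t crc32_key(const uint8_t *key, size_t len, uint32_t poly) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= key[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ ((crc & 1u) ? poly : 0u);
    }
    return crc;
}
```

Because the CRC step is linear and invertible over GF(2), a one-bit change in the key always produces a nonzero difference in the register, so no key bit can be guessed independently of the others.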
Because the other ciphers under discussion (conventional synchronous stream cipher, and DAGGER without exclusive-OR) do not define the key-expansion stage, we have nothing to say about a defined-key attack on them.
Even a cipher which is technically secure can be used in ways which make it easy to penetrate security. Such use could even confuse the situation, laying the fault at the door of the cipher itself when the real fault lies in procedures and practices.
Obtaining a paper copy of the plaintext, either before sending or after reception, is an ideal way to "break" a cipher. Using Electro-Magnetic Field (EMF) emissions to reproduce a CRT screen showing the plaintext is also an excellent way to "break" a cipher. Gaining access to a computer with the plaintext inside, or planting a "virus" program to obtain that plaintext are also practical "attacks." Or perhaps we could just bribe someone who had access.
The extent of such possible attacks is so great that it is almost impossible to rule them all out, and this is yet one more reason why a realistic cipher does not need to be "absolutely unbreakable." Not only could we never prove such a thing (nor afford the processing needed to use it), but the usual procedures and practices of business provide any number of insecurities that cryptography cannot touch.
In practice, a cipher only needs to require substantially more effort to "break" than would be necessary to compromise security in some other way.
In software, all functionality requires processing, and this requires time. The fastest possible cipher is no cipher at all, but such a cipher is not very effective at hiding information. Conversely, ciphers which are designed to be extremely strong inherently demand substantial amounts of processing. Our goal here is to get "the most bang for the buck."
The "strength" of a cipher is the ability to resist attack. Unfortunately, there is no external test which can tell how strong a particular cipher design really is. What we can do is to look at the design with respect to the various known attacks, and try to see how difficult and effective they would be. It is certainly much more difficult to mount a successful "known plaintext" attack against DAGGER than it would be against a conventional exclusive-OR stream cipher designed for the same application.
Currently, the only known effective attack on the DAGGER design is a lengthy defined-plaintext attack. That said, DAGGER is a relatively simple cipher (as these things go), and may be vulnerable to the extent that an opponent can reconstruct the RNG table.
The issue in the DAGGER design is not absolute strength; the issue is maximum strength for minimum execution cost. A direct comparison between DAGGER and a conventional stream cipher -- operating under similar Constraints -- will show a significant strength advantage for DAGGER, at a relatively modest cost. DAGGER is exceptionally fast and comparatively strong.
Last updated: 1996-09-17