Dynamic Transposition Ciphering


A Ciphers By Ritter Page



Terry Ritter


This massive set of discussions was triggered by my retrospective article about Dynamic Transposition, a serious cipher in which the known weaknesses of transposition have been identified and eliminated. The result is a practical, fairly simple, and believable block cipher based on principles fundamentally different from those of conventional block ciphers like DES.

Dynamic Transposition

A Dynamic Transposition cipher is conceptually simple:

  1. We collect plaintext data in bit-balanced (or almost bit-balanced) blocks.
  2. We shuffle the bits in those blocks under the control of a keyed pseudorandom sequence.
(The cipher designer decides on block size, the bit-balancing approach, and how the shuffling sequence is generated and keyed.) Each block is shuffled, independently and differently, and should be shuffled twice. In general, the shuffling sequence generator should have a huge internal state; such generators are both practical and fast.
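For concreteness, here is a minimal C sketch of the shuffling side (an editorial illustration under stated assumptions, not code from these discussions): Durstenfeld's Shuffle over the bits of an already-balanced block, applied twice, plus the buffered reverse pass used to decipher. The keyed generator is represented by a placeholder next_rand(), which must return uniform values in 0..n-1 from a real cryptographic RNG.

  /* Editorial sketch only: shuffle the bits of a bit-balanced block,
     twice, under a keyed pseudorandom sequence.  next_rand(n) is a
     placeholder for a real keyed cryptographic RNG returning uniform
     values in 0..n-1; the block size is illustrative. */

  #include <stdint.h>

  #define BLOCK_BITS 4096                  /* e.g., a 512-byte block */

  extern uint32_t next_rand(uint32_t n);   /* placeholder keyed RNG */

  static int get_bit(const uint8_t *b, int i) {
      return (b[i >> 3] >> (i & 7)) & 1;
  }

  static void swap_bits(uint8_t *b, int i, int j) {
      if (get_bit(b, i) != get_bit(b, j)) {   /* flip both bits */
          b[i >> 3] ^= (uint8_t)(1 << (i & 7));
          b[j >> 3] ^= (uint8_t)(1 << (j & 7));
      }
  }

  /* Durstenfeld's Shuffle (Knuth's Algorithm P) over the block bits. */
  static void shuffle_bits(uint8_t *block) {
      for (int i = BLOCK_BITS - 1; i > 0; --i)
          swap_bits(block, i, (int)next_rand((uint32_t)(i + 1)));
  }

  void encipher_block(uint8_t *block) {
      shuffle_bits(block);   /* select a permutation */
      shuffle_bits(block);   /* a second shuffle hides the sequence */
  }

  /* Deciphering: buffer the sequence values, then undo the exchanges
     last-value-first; the second shuffle is undone before the first. */
  static void unshuffle_bits(uint8_t *block, const uint32_t *buf) {
      int n = BLOCK_BITS - 1;
      for (int i = 1; i < BLOCK_BITS; ++i)
          swap_bits(block, i, (int)buf[--n]);
  }

  void decipher_block(uint8_t *block) {
      static uint32_t buf1[BLOCK_BITS], buf2[BLOCK_BITS];
      int n = 0;
      for (int i = BLOCK_BITS - 1; i > 0; --i)
          buf1[n++] = next_rand((uint32_t)(i + 1));
      n = 0;
      for (int i = BLOCK_BITS - 1; i > 0; --i)
          buf2[n++] = next_rand((uint32_t)(i + 1));
      unshuffle_bits(block, buf2);   /* undo the second shuffle first */
      unshuffle_bits(block, buf1);   /* then the first */
  }

Note that deciphering buffers both stretches of the keyed sequence before undoing either, because the exchanges must be reversed in the opposite order from that in which they were generated.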

Dynamic Transposition operates at practical speeds, if generally slower than conventional ciphers. The bit-balancing time is probably less than the shuffling time, and data expansion from bit-balancing can be limited to 25 percent, worst case.

Bit-Balancing

In Dynamic Transposition, a bit-balancing process assures that each and every plaintext block has the same count of 1's and 0's. As a result, every possible plaintext or ciphertext block is just a bit-permutation away from every other block. So if an opponent has a ciphertext, that block could be produced from any possible plaintext block, given the right permutation. At least potentially, this is Shannon-style, Latin square, perfect secrecy, albeit on a block-by-block basis.

Actually using keys large enough to select from among all possible block bit-permutations (for any single block) is a realistic possibility with smaller blocks. But even with smaller keys, the generator which shuffles the block can and should have sufficient internal state to make two uniform, unbiased selections among all possible bit-permutations. By double-shuffling the block, we can assure that any particular bit-permutation can be produced by unsearchably many different generator sequences, which protects the actual sequence from exposure.

Ambiguous Ciphering Transformations

In a conventional block cipher, a key selects a particular emulated huge Simple Substitution "table," and each block ciphered thereafter potentially exposes part of that table.

Dynamic Transposition operates by re-arranging the bits in a data block which has equal counts of 1-bits and 0-bits. Because there are many different 1's and many different 0's, unsearchably many re-arrangements will take exactly the same plaintext block to exactly the same ciphertext block. So even if the plaintext and ciphertext blocks are both known, the actual re-arrangement (permutation) used is still not exposed. Ciphering ambiguity would seem to be the primary strength of Dynamic Transposition. The problem of finding which one of all possible ciphering permutations actually occurred is central to every discussion of strength.
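The size of that ambiguity is easy to bound: a bit-permutation takes a given balanced plaintext to a given balanced ciphertext whenever it sends the N/2 1-bits onto the ciphertext's 1-positions and the N/2 0-bits onto its 0-positions, in any order, so (N/2)! * (N/2)! distinct permutations give the identical result. A small editorial computation (my arithmetic, not from the original discussions):

  /* Editorial illustration: count (as a power of 2) the bit-permutations
     that map one given balanced N-bit block to another, namely
     (N/2)! * (N/2)!.  Uses lgamma() for log-factorials. */

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      const int sizes[] = { 128, 256, 4096 };
      for (int k = 0; k < 3; ++k) {
          double half = sizes[k] / 2.0;
          double log2_half_fact = lgamma(half + 1.0) / log(2.0);
          printf("N = %4d: about 2^%.0f equivalent permutations\n",
                 sizes[k], 2.0 * log2_half_fact);
      }
      return 0;   /* roughly 2^592 for N = 128; 2^39163 for N = 4096 */
  }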

Dynamic Operation

In general, Dynamic Transposition uses a different ciphering transformation for each ciphered block. Thus, there simply is no one particular transformation to find. The closest remaining possibility would seem to be recovering the internal state of the generator which shuffles the block. But that seems to require knowing each actual ciphering permutation, and they are not exposed.

In contrast, a conventional block cipher is keyed once, which selects a particular emulated "table," and then used repeatedly. The emulated "table" is a fixed solution, and if that can be found, it will last until the next keying.

Dynamic Transposition is also keyed, but the ciphering transformation is ambiguous and each block gets a new one. Consequently, each particular permutation must be resolved independently, because different blocks are only distantly related.

A Block-Complete Keyspace

With a conventional block cipher, just one or two known-plaintext pairs are generally sufficient to exclude every possible key but the right one. Since we do assume that some amount of plaintext will be available, there generally is sufficient information for a unique solution, if only someone was clever enough to find it. Cipher users thus find themselves in the position of continually betting that nobody is that smart.

In contrast, a Dynamic Transposition bit-permutation is a complete implementation of the permutation model. Having a known-plaintext pair only reduces the uncertainty in knowing the ciphering permutation to within a particular unsearchably-large subset. That subset consists of all possible permutations which will produce the given result.

If the opponent could assume that the same permutation was used on the next block, different data might further reduce the set of possible permutations, thus eventually resolving the particular permutation involved. But since different permutations are used for different blocks, apparently the only thing which might be resolved is the shuffling sequence. And when we shuffle each block twice, unsearchably many different sequences will produce any particular permutation.

A Fully-Implemented Model

Dynamic Transposition supports any possible permutation of the data block. As a result, what we know about the mathematical distribution of random permutations should apply directly to a Dynamic Transposition cipher.

In contrast, conventional block ciphers seek to emulate huge Simple Substitution "tables," but can implement only a tiny part of their potential keyspace. (For example, DES has a potential keyspace of 2**(10**21) keys, of which 2**56 keys are actually selectable.) The result is that what we know about the distribution of random substitution tables probably can not be applied to a conventional block cipher.
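That startling 2**(10**21) figure can be checked with Stirling's approximation: the number of invertible 64-bit-wide substitution tables is (2^64)!, and log2((2^64)!) is approximately 2^64 * (64 - log2 e), about 1.15 * 10^21. An editorial one-liner to confirm:

  /* Editorial check of the 2**(10**21) figure: by Stirling,
     log2((2^64)!) ~ 2^64 * (64 - log2(e)). */

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      double n = pow(2.0, 64.0);
      double log2e = 1.0 / log(2.0);               /* log2(e) = 1/ln 2 */
      printf("log2((2^64)!) is about %.3g\n", n * (64.0 - log2e));
      return 0;                                    /* prints ~1.15e+21 */
  }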

Just Like a Block Cipher

One of the main arguments against Dynamic Transposition in these conversations is that transposition can be considered a limited subset of all possible substitution transformations. So it is argued that Dynamic Transposition is also limited to a subset of transformations, supposedly "just like" conventional block ciphers.

But Dynamic Transposition is not intended to produce general substitutions, so there should be no surprise when it does not do that. Dynamic Transposition is limited to using bit-permutations, and does indeed implement every one of those. In contrast, conventional block ciphers implement almost none of the potential keyspace available to them. And while they also do not intend to do better, the mathematical implications of using some pre-selected subset, as opposed to every possibility, are distinctly different. That is not "just like" Dynamic Transposition.

In essence, the use of bit-balanced blocks represents a particular coding of information, and Dynamic Transposition ciphering can indeed transform any element to any other element of that same code. So far, one could say something similar about exclusive-OR (although that is not a keyable transform), but even this is by no means guaranteed in a conventional block cipher. There is no reason to believe that we can always find a key which will take one arbitrary block to another in a conventional block cipher.

By using only a tiny selection for the actual ciphering operation, a conventional block cipher becomes potentially solvable with a practical amount of known-plaintext.

Dynamic Transposition may have tiny original keys, but the keyspace actually involved in ciphering is large enough to permit any possible permutation. Only the knowledge of a sequence of permutations would seem to address the hidden internal shuffling sequence. But since Dynamic Transposition ciphering permutations are not exposed, and since unsearchably many different sequences produce the same permutation, the sequence is protected.

The One-Time Pad (OTP)

Another issue touched-on here is -- yet again -- the One-Time Pad (OTP). Many people find it difficult to accept what should be obvious: in general, the OTP which is proven secure probably cannot exist in practice, and so cannot be used to hide any real data.

It is indisputable that if an OTP pad or sequence is predictable, the system is insecure. Here, OTP proponents may protest that no system with a predictable sequence can be called an OTP. But our problem with any real system is how to know whether or not any particular sequence is predictable. If distinguishing between an "OTP" and something less requires knowing what we cannot know, the whole concept is a joke.

The unpredictability of a sequence cannot be guaranteed by test. And, even though some sequence generators may be based on fundamental physical randomness, they must be implemented in a world in which unexpected problems often exist. Since the unpredictability of a generator also cannot be guaranteed by test, it is generally impossible to prove that any implemented generator is as perfectly unpredictable as one might hope.

The conditions assumed for the theoretical OTP security proof generally cannot be met in practice, so I offer Dynamic Transposition as one example of a superior cipher in practice. In any reasonable cipher, the keying sequence need not be unpredictable in the same way as an OTP, and is in any case deliberately hidden by layers of strong puzzles.

In contrast, the OTP sequence is immediately exposed by known-plaintext, so that any predictability in that sequence (whether known to our side or not) is also exposed, and that is a very dangerous situation. Using an OTP is dangerous because the strength proof we want does not apply exactly when we need it most.


Contents


Subject: Dynamic Transposition Revisited (long)
Date: Thu, 18 Jan 2001 23:02:31 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3a67737e.15537635@news.io.com>
Newsgroups: sci.crypt
Lines: 483

Terry Ritter
2001 Jan 18

ABSTRACT

A novel approach to block ciphering is remembered for relevance to modern ciphering issues of strength, analysis and proof. A rare example of a serious transposition cipher, it is also an unusual example of the simultaneous use of block and stream techniques.

INTRODUCTION

Over a decade ago, before the web, before PGP, and before "Applied Cryptography," I began to study the different forms of ciphering. (Actually I was re-kindling a technical interest from my Army years: in 1968 I was a cipher machine repairman at the Nha Trang AUTODIN site.)

A decade ago there were a few -- but only a few -- crypto books, and virtually nothing on serious computer crypto. Research often involved reading the original academic literature, from Shannon on, references which I later included in my articles.

One of the things which caught my interest was an apparent dichotomy between substitution and transposition. (Currently, I have a different view: I came to see transposition as a limited, algorithmic form of substitution. More precisely, "transposition" can be seen as a way to form a permutation using a sequence of exchanges: literally "transpositions." But a mere permutation of the existing symbols is a very limited subset of substitution. And it is exactly those limitations which form the characteristics of transposition we know so well.)

Emulated huge substitution (in DES) had been the dominant basis for ciphering, but no comparable transposition-based designs existed (at least in the open literature). That begged the question of whether transposition was simply unsuitable for secure ciphering, and if so, why. Subsequent investigations and development produced an article titled "Transposition Cipher with Pseudo-Random Shuffling: The Dynamic Transposition Combiner," published in the January 1991 issue of "Cryptologia" (see: www.io.com/~ritter/ARTS/DYNTRAN2.HTM ).

With continuing sci.crypt discussions on stream ciphering, the one-time-pad (OTP) and cipher security proofs, it may be worthwhile to re-visit Dynamic Transposition to see if it has something to offer.

As described here, Dynamic Transposition is a block cipher without S-boxes or rounds, and which neither has nor needs avalanche or diffusion. Like a stream cipher, it uses a keyed random number generator (RNG) to perform ciphering on a block-by-block basis. Dynamic Transposition is notable for design clarity, ease of understanding and analysis, and scalability for testing. The general concept of Dynamic Transposition could be expanded into a stand-alone secret-key cipher, or used as the data-ciphering component of a public-key cipher.

TRANSPOSITION

Specifically, a "transposition" is the simple exchange of the positions of two symbols within a message or ordered array or vector. A sequence of such exchanges can form any possible mathematical "permutation" of the message or array. This is the simple re-arrangement of the existing data symbols.

Classically, a transposition cipher re-positions plaintext, letter-by-letter, to produce ciphertext. As a result, every letter of the plaintext is also visible in the ciphertext. And that invites an opponent to re-position or "anagram" the ciphertext letters in an attempt to find the relatively few plaintexts which would make sense.
In contrast, modern transposition need not work on whole character units, but can instead permute each binary-coded character bit-by-bit. This is significant because vastly more messages which make sense -- all but one wrong -- can be created from a heap of bits than from a pile of letters.

Classically, one of the problems with transposition has been that it does not change the values of the data, and that can leak information. Consider a plaintext block of all-0's: No permutation will change that block at all, so it will be fully-exposed as ciphertext. Clearly, classic transposition has a hidden strength requirement that plaintext data contain a range of different values. In contrast, modern computer processing can create blocks with an exact balance of 1's and 0's (by adding balancing values to the plaintext). Only as much plaintext as can be balanced is accepted for a block, which is then filled with balancing values. The result is that there will be no weak blocks.

Classical transposition is often limited to simple processes consistent with hand application, and so tends to traverse only a small subset of all possible permutations. That is a limited keyspace which may support attack. But in a modern computer implementation, efficient permutation algorithms allow us to produce any of the immense number of different permutations with equal probability, provided we have a random sequence to drive the algorithm. One immediate result is a vastly-expanded keyspace.

Classically, each block or message of a transposition cipher is permuted in exactly the same way. So if an opponent acquires multiple messages, it may be possible to find a single permutation which makes both ciphertexts readable, and thus identifies the correct permutation. That attack is called "multiple anagramming." But a modern version of transposition may simply permute each block independently. That is the "dynamic" part of Dynamic Transposition, and it completely voids the multiple anagramming attack.

Overall, the resulting block cipher is a novel combination of both stream and block ciphering technology.

DYNAMIC TRANSPOSITION

A Dynamic Transposition cipher is conceptually very simple:

 (1) We collect plaintext data in bit-balanced (or almost bit-balanced) blocks.

 (2) We shuffle the bits in those blocks under the control of a keyed pseudorandom sequence.

(The cipher designer decides exactly how the sequence is generated and keyed. A complete design will use message keys or other techniques to prevent sequence re-use.)

The resulting scrambled blocks are the final ciphertext.

When every plaintext block is exactly bit-balanced, any possible plaintext block is some valid bit-permutation of any ciphertext block. So, even if an opponent could exhaustively un-permute a ciphertext block, the result would just be every possible plaintext block. No particular plaintext block could be distinguished as the source of the ciphertext.

This is a form of balanced, nonlinear combining of the confusion sequence and data block: as such, it is related to XOR, Latin squares, Shannon "perfect secrecy," and the one-time-pad (OTP). The inability to distinguish a particular plaintext, even when every possibility is tried, is basically the advantage claimed for the OTP. It is also an advantage which the OTP cannot justify in practice unless we can prove that the OTP keying sequence is unpredictable, which generally cannot be done.
That makes the practical OTP exceedingly "brittle": if the opponents ever do gain the ability to predict the sequence, they may be able to attack many messages, both future and past. That would occur in the context of a system supposedly "proven" secure; as usual, the user would have no indication of security failure.

Dynamic Transposition does not need the assumption of sequence unpredictability, because the sequence is hidden behind a multitude of different sequences and permutations which all produce the same result. And if the sequence itself cannot be exposed, exploiting any predictability in the sequence will be difficult. (This of course does not mean that Dynamic Transposition cannot be attacked: Brute-force attacks on the keys are still imaginable, which is a good reason to use large random message keys.)

In particular, shuffling the bits in a bit-balanced block means that a huge number of different permutations will produce exactly the same ciphertext result. Where one ciphertext may have a '1' coming from the start of the plaintext block, another may have a '1' from near the end, both of which produce exactly the same ciphertext result. So, even if the opponents have the plaintext and the associated ciphertext, the ciphering permutation is still not revealed, and that protects against attacks on the pseudorandom sequence and the RNG.

Bit-permutation does have, of course, a substantial execution cost. Dynamic Transposition is conceptually as simple as, say, "simply" exponentiating a value modulo the product of two large primes. Of course, the actual implementation details in any real cipher can be complex. But the feeling of complexity often goes away after running through a couple of designs where the problems are all fully solved. And testing the implementation may be easier with Dynamic Transposition.

A Dynamic Transposition system is far more transparent than most alternatives. The components are all well-known cryptographic technology, each of which can be independently understood, evaluated, tested -- and even upgraded, when necessary. And the simple 8th-grade combinatorics involved in Dynamic Transposition analysis (which, alas, I occasionally get wrong anyway) support more believable conclusions than graduate-level number-theoretic asymptotic "proofs."

DYNAMIC TRANSPOSITION CIPHERING

Most of the components used in Dynamic Transposition have been discussed many times; see, for example:

   http://www.io.com/~ritter/KEYSHUF.HTM
   http://www.io.com/~ritter/CLO2DESN.HTM

One of the needed components which has not been described is the bit-balancing process.

Bit-Balanced Blocks

Some of the analytic and strength advantages of Dynamic Transposition depend upon having the same number of 1's and 0's in each block, which is called "bit-balance." Exact bit-balance can be achieved by accumulating data to a block byte-by-byte, only as long as the block can be balanced by adding appropriate bits at the end. Then the block is ciphered, and another block filled.

We will always add at least one byte of "balance data," at the end of a block, and only the first balance byte, immediately after the data, will contain both 1's and 0's. We can thus transparently remove the balance data by stepping from the end of the block, past the first byte containing both 1's and 0's.

In general, bit-balancing may expand ASCII text by as much as 1/3. We can reduce that if we have random-like data.
A pre-processing data compression step would not only reduce plaintext size, it would also reduce the expansion factor.

Approximate or statistical bit-balance can be achieved by creating a sort of Vernam cipher to "randomize" the plaintext. The result is almost balanced blocks with no expansion at all.

Bit Shuffling

This is ordinary cryptographic shuffling -- algorithmic permutation -- of bits. It is of course necessary that this be done properly, to achieve a balanced probability for each result, but that is well-known cryptographic technology. (Again, see: http://www.io.com/~ritter/KEYSHUF.HTM .) The usual solution is the well-known algorithm by Durstenfeld, called "Shuffle," which Knuth II calls "Algorithm P (Shuffling)," although any valid permutation generator would be acceptable. We can treat the accumulated and balanced block as a bit vector and walk through it, shuffling just like we might do with an array of bytes.

If we shuffle each block just once, an opponent who somehow knows the correct resulting permutation can use that information to reproduce the shuffling RNG sequence, and thus start to attack the RNG. And even though we think such an event impossible (since the correct permutation is hidden by a plethora of different bit-permutations that each produce exactly the same ciphertext from exactly the same plaintext), eliminating that possibility (by shuffling each block twice) is probably worthwhile. This does not produce more permutations; it just hides the shuffling sequence.

Deciphering

We can easily return an encrypted block to plaintext by applying Shuffle in reverse order. The keyed pseudorandom sequence used for encryption is accumulated in a buffer and used last-value-first. And if we shuffle twice to encipher, we must unshuffle twice to decipher.

ANALYSIS

Dynamic Transposition is clearly a block cipher: Data must be accumulated into blocks before ciphering can begin. It is not, however, a conventional block cipher. It does not emulate a large substitution.

Unlike conventional block ciphers, Dynamic Transposition has no data diffusion, nor is that needed. It has no S-boxes, and so has no weak S-boxes. The only mixing involved is the single mathematically uniform distribution of permutations, so there is no weak mixing. And whereas conventional block ciphers need "modes of operation" to randomize plaintext and thus minimize the ability to mount codebook attacks, Dynamic Transposition does not.

While conventional block ciphers generally do not scale down to exhaustively testable size, Dynamic Transposition scales well. Presumably, it could be made to operate on blocks of variable size on a block-by-block basis. Each component also can be scaled down and tested independently.

It is easy to sink into a morass of arithmetic in the strength argument. Rather than do that here, I will just highlight the major issues:

The main idea is to hide the RNG sequence (actually the nonlinear sequence of jitterized values), so an opponent cannot attack the deterministic RNG.

Strength is provided by the block size and guaranteed bit-balance, since, when shuffled, a plethora of different permutations will take the plaintext block to exactly the same ciphertext block. There simply is no one permutation which produces the given ciphertext. Since a plethora of different permutations will produce the given ciphertext, trying them all is impractical. So the opponents will not know the permutation -- even with known plaintext -- and will not have the information to attack the RNG.
If the first level of strength fails and the ciphering permutation is discovered, more strength occurs in the fact that another plethora, this time of different RNG sequences, will create that exact same permutation. When each block is shuffled twice, this prevents an opponent from finding the sequence needed to attack the RNG.

Then, of course, we get to whatever strength is in the RNG system itself. That includes having an internal state which significantly exceeds the amount of information used for any block permutation. It also includes the unknown strength of nonlinear filtering (e.g., "jitterizing") of the raw RNG data.

Large Numbers

It is common, in cipher analysis, to disdain the use of large-number arguments. That is because, in practice, opponents do not even attempt to attack well-designed ciphers by straightforward brute force, because such an approach would be hopeless. Ciphers are instead attacked and broken by finding some pattern which will discard huge numbers of possible keys with every test. But such patterns generally occur in the context of complex iterative cipher systems which *can* have defects. In contrast, here we use mathematically-correct permutation algorithms, and no such defect should be present.

In the current Dynamic Transposition design, even with RNG nonlinear processing, we do expect correlations to exist in the RNG shuffling sequence. But if the sequence is hidden and cannot be isolated, it would seem to be very difficult to find and use such relationships. In a real sense, using a combiner more complex than XOR provides protection against the disaster of a predictable sequence -- protection which is simply unavailable in conventional OTP designs.

The simple structure of the Dynamic Transposition combiner (a simple selection from among all possible bit-permutations) would seem not to lend itself as well to statistical attack as conventional complex block cipher designs.

Key Re-Use and Message Keys

In any real system which uses a RNG confusion sequence, it is important that any particular encrypting sequence "never" be re-used. It is also important to be able to use a master key to send multiple messages without reducing the security of the system. This issue is known as "key re-use." We can solve both issues by introducing "message keys."

One way to support the re-use of a master key is to create some large, random, unknowable value which we call a "message key" and use as the key for data ciphering. Because this can be a completely arbitrary random value (e.g., from a noise generator source), we can repeatedly use the same master key securely. We send message key values to the other end by enciphering each under an unchanging master key. At the other end, the message key is first deciphered, and the exposed random value is used as the key to decipher the data. This is standard stream-cipher technology.

The necessary advantage of a message key is to support key re-use. Another advantage is to prevent attacks which depend upon having the same key across multiple messages. But an even greater advantage is to skip past the small and (probably) structured master or user key to a large, flat-distributed message key. For the past decade I have often used random 992-bit message keys.

The concept of a message key pre-dates the similar if more published term "session key." But a "session" refers to the establishment of a logical connection in a network or system, a concept which is both unnecessary and extraneous to ciphering per se.
And even when a logical session exists, we might be wise to use a different message key on every transmission in a potentially-long session. Thus, the term "session key" simply does not capture the cryptographic essence of the problem.

ATTACKS

Complex and hard-to-refute conventional block-cipher attacks, such as Differential and Linear Cryptanalysis, would seem to be completely irrelevant to ciphers of the Dynamic Transposition type. Where there are no S-boxes, there are no linear or differential relations in S-boxes. And where there is no Feistel structure, there is no ability to peel off Feistel layers. All this is a serious advantage.

The classic attack on transposition, "multiple anagramming," is avoided by having a different permutation for every block.

The use of random message keys forces each message to use a different encryption sequence. This also prevents the "bit flipping" sort of defined-plaintext attacks which attempt to reveal the permutation.

The classic sort of codebook attack is avoided first by having a huge block size, and next by not attempting to re-use permutations. Note that conventional block ciphers have the first advantage only to a lesser extent, and the second advantage not at all, which is why they need "modes of operation."

Known-plaintext attacks would be a first step in an attempt to attack the RNG as in a stream cipher. But with Dynamic Transposition, known-plaintext does not disclose the enciphering permutation, because a plethora of different bit-permutations will produce exactly the same ciphertext from exactly the same plaintext. A successful attack would thus require some way to distinguish more probable permutations from less probable ones. Such an attack would seem to be avoided by proper shuffling. And should the opponents somehow find the correct permutation, attempts to find the shuffling RNG sequence would be thwarted by double-shuffling.

SUMMARY

A novel, relatively easy-to-design and easy-to-analyze type of cipher has been presented. There would seem to be ample reason to consider Dynamic Transposition to be yet another "unbreakable" cipher, when properly implemented.

In marked contrast to most "unbreakable" ciphers, Dynamic Transposition strength arguments are exactly the same sort of arguments we use for keys: A correct key exists, but is safe when hidden among many others just like it. These arguments are simple, fundamental to all keyed ciphers of every type, and do not depend upon unproven assumptions.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM

Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 19 Jan 2001 01:07:18 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3a678cf2.21708539@news.powersurfr.com>
References: <3a67737e.15537635@news.io.com>
Newsgroups: sci.crypt
Lines: 19

I thank you for an interesting post.

Some people will be quite shocked at the unconventional nature of transposition, though.

I will just content myself with noting that, particularly if the units being transposed are larger than a bit, if one relies exclusively on transposition, some aspects of the plaintext are preserved in the ciphertext. (Even if one transposes single bits, the output has the same number of 1 bits as the input.)

Hence, I am going to recommend what you probably realize quite well in any case for the benefit of other readers: that while transposition is a worthy encipherment method that is unjustly neglected, it should not be used completely alone; it is worthwhile to use it in combination with substitution.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm

Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 19 Jan 2001 03:41:40 GMT
From: Benjamin Goldberg <goldbb2@earthlink.net>
Message-ID: <3A67B775.D599976A@earthlink.net>
References: <3a678cf2.21708539@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 28

John Savard wrote:
>
> I thank you for an interesting post.
>
> Some people will be quite shocked at the unconventional nature of
> transposition, though.
>
> I will just content myself with noting that, particularly if the units
> being transposed are larger than a bit, if one relies exclusively on
> transposition, some aspects of the plaintext are preserved in the
> ciphertext. (Even if one transposes single bits, the output has the
> same number of 1 bits as the input.)

You seem to be ignoring that before shuffling, the data is accumulated into bit-balanced blocks. Did you read the whole paper?

> Hence, I am going to recommend what you probably realize quite well in
> any case for the benefit of other readers: that while transposition is
> a worthy encipherment method that is unjustly neglected, it should not
> be used completely alone; it is worthwhile to use it in combination
> with substitution.
>
> John Savard
> http://home.ecn.ab.ca/~jsavard/crypto.htm

--
Most scientific innovations do not begin with "Eureka!" They begin with
"That's odd. I wonder why that happened?"

Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 19 Jan 2001 05:21:13 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3a67c966.290789@news.powersurfr.com>
References: <3A67B775.D599976A@earthlink.net>
Newsgroups: sci.crypt
Lines: 88

On Fri, 19 Jan 2001 03:41:40 GMT, Benjamin Goldberg
<goldbb2@earthlink.net> wrote, in part:

>You seem to be ignoring that before shuffling, the data is accumulated
>into bit-balanced blocks. Did you read the whole paper?

I admit that I only skimmed through it. I saw the word 'bit-balanced', but didn't look closely.

Upon looking again, I see that he indeed explicitly mentioned this point. However, I note that details of bit-balancing were not given. If the data is *not* pre-processed or compressed or whatever, and if the point at which a block is cut off depends on what bits are in the block (and the extent of the need for balancing) then that has to be indicated somehow, and the indications need to be encrypted... this could get quite messy.

Of course, a *simple* way of dealing with this problem would be to code 6 bits of binary input to an 8-bit byte in a 4 of 8 code, using 64 of the 70 possible values. This would be a brute-force approach, but it would mean that one would not have a data-dependent concern with the block lengths. They could be uniform, or chosen randomly.

I suppose even with a substitution phase, if one ensures bit balance, the transposition phase is made maximally strong; but _then_ one has a redundancy problem (since after transposition, any simplistic scheme of ensuring bit balance wouldn't be preserved - i.e. after a bit transpose of bytes in a 4 of 8 code, each byte of the result would not necessarily have exactly four 1 bits). Of course, data already _encrypted_ by a substitution phase could be assumed or hoped to have good enough bit balance, or failing that the substitution could be strong enough to ensure security. (If the substitution is on short blocks, then the key for the substitution must vary with each short block, else a plaintext consisting of identical short blocks could send a signal through the transposition...)

Basically, the problem is that bit-balanced binary data and raw arbitrary binary data aren't a good fit to each other. Six bits of arbitrary binary data -> 8 bits of 4 of 8 coded data -> 8 bits of binary data having an overall bit balance means that the 8 bits of finally-encrypted data can't be compressed all that much. Of course, the initial means of achieving bit-balance needn't be as inefficient as a 4 of 8 code, but it still *won't* be fully optimal over the entire transposition block length. Whatever means of obtaining bit-balance is used, it will need a smaller "window" than a whole transposition block to be reasonably efficient.

This expansion of the data will, I fear, like the autokey nature (making for poor error-recovery properties) of Dynamic Substitution, condemn Dynamic Transposition in the specific form described here to marginality. The market does not want to hear of encryption methods that are awkward - that can't be used, for example, to encrypt a sector of a disk and fit the encrypted result in a sector of a disk.

Combining substitution and transposition - and using a Feistel-like structure to make the transposition key for one half of a block depend on the contents of the other half (thus obtaining the variability Mr. Ritter rightly notes as essential to avoid multiple anagramming) - is capable of producing a much more well-behaved and "conventional" cipher.
This is not to say, however, that the concepts presented here are not useful. Terry Ritter has shown how one _could_, even at the cost of inconveniences people aren't usually willing to bear, make a secure cipher based on transposition alone. This is of theoretical significance, and that, all by itself, is genuinely valuable.

(It might also be noted that the requirement that each transposition step must uniformly be capable of producing all N! permutations of bits is likely to make the transposition steps computationally inefficient, so I think that this is another unfashionable aspect of the specific proposal given. Although in this case, unfashionable does not mean unsound: that is a desirable ideal.)

The fact that bit balancing is not going to produce, by any reasonable algorithm, all N!/(N/2)!^2 possible bit-balanced blocks as input to the transposition, my earlier objection to this scheme, also impinges on the value of this...

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
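
Savard's 4-of-8 code is simple to realize. Here is an editorial sketch (my code, not from the thread): enumerate the 70 bytes of Hamming weight 4, keep the first 64, and use the resulting table to encode 6 arbitrary bits per guaranteed-balanced output byte.

  /* Editorial sketch of a 4-of-8 code: each output byte has exactly
     four 1 bits, so per-byte bit balance is guaranteed, at the cost
     of 8 output bits per 6 input bits. */

  #include <stdint.h>
  #include <stdio.h>

  static uint8_t enc[64];    /* 6-bit value -> weight-4 byte */
  static int     dec[256];   /* weight-4 byte -> 6-bit value, else -1 */

  static int popcount8(unsigned v) {
      int c = 0;
      while (v) { c += v & 1; v >>= 1; }
      return c;
  }

  static void build_tables(void) {
      int n = 0;
      for (int b = 0; b < 256; ++b) dec[b] = -1;
      for (int b = 0; b < 256 && n < 64; ++b)
          if (popcount8((unsigned)b) == 4) {   /* 70 such bytes exist */
              enc[n] = (uint8_t)b;
              dec[b] = n;
              ++n;                             /* keep the first 64 */
          }
  }

  int main(void) {
      build_tables();
      uint8_t coded = enc[0x2A];               /* encode 6 bits */
      printf("0x2A -> 0x%02X -> 0x%02X\n",
             coded, (unsigned)dec[coded]);     /* decodes back to 0x2A */
      return 0;
  }
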
Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 19 Jan 2001 06:26:38 GMT
From: Benjamin Goldberg <goldbb2@earthlink.net>
Message-ID: <3A67DE08.A9B31B76@earthlink.net>
References: <3a67c966.290789@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 173

John Savard wrote:
>
> On Fri, 19 Jan 2001 03:41:40 GMT, Benjamin Goldberg
> <goldbb2@earthlink.net> wrote, in part:
>
> >You seem to be ignoring that before shuffling, the data is
> >accumulated into bit-balanced blocks. Did you read the whole paper?
>
> I admit that I only skimmed through it. I saw the word 'bit-balanced',
> but didn't look closely.
>
> Upon looking again, I see that he indeed explicitly mentioned this
> point. However, I note that details of bit-balancing were not given.
> If the data is *not* pre-processed or compressed or whatever, and if
> the point at which a block is cut off depends on what bits are in the
> block (and the extent of the need for balancing) then that has to be
> indicated somehow, and the indications need to be encrypted...

Not quite. Let's assume that we are using 128 bit blocks. The front of the block contains data, the end of the block contains balancing bits. If the balancing bits are all 1s, then all 0s (or vice versa), and we fill in as many bytes of data as we can into the front of the block, only one of the balancing bytes will contain both ones and zeros, and that will be the first balancing byte. All of the rest of the balancing bytes, from there to the end of the file, will contain all ones, or all zeros. No external data is needed to indicate where the balancing bytes begin.

> this could get quite messy.
>
> Of course, a *simple* way of dealing with this problem would be to
> code 6 bits of binary input to an 8-bit byte in a 4 of 8 code, using
> 64 of the 70 possible values. This would be a brute-force approach,
> but it would mean that one would not have a data-dependent concern
> with the block lengths. They could be uniform, or chosen randomly.

Huh? I don't get what you're saying.

> I suppose even with a substitution phase, if one ensures bit balance,
> the transposition phase is made maximally strong; but _then_ one has a
> redundancy problem (since after transposition, any simplistic scheme
> of ensuring bit balance wouldn't be preserved - i.e. after a bit
> transpose of bytes in a 4 of 8 code, each byte of the result would not
> necessarily have exactly four 1 bits).

We have a nice big 128 bit block. We shuffle all the bits in it. We are not shuffling the bytes in it. If bits were balanced before shuffling the bits, they will certainly be balanced after we shuffle the bits.

[snip]

> Of course, the initial means of achieving bit-balance needn't be as
> inefficient as a 4 of 8 code, but it still *won't* be fully optimal
> over the entire transposition block length. Whatever means of
> obtaining bit-balance is used, it will need a smaller "window" than a
> whole transposition block to be reasonably efficient.
  bal = (# bits in block) / 2

  while( (unfilled bytes - 1) can be used to balance the block )
      get a byte, put it in the block

  z = num 0s in block
  o = num 1s in block
  z = bal - z  // num 0s needed to balance block
  o = bal - o  // num 1s needed to balance block
  assert( (z<8) || (o<8) );

  if( o < z ) {
      assert( o > 0 && o < 8 )
      put the value (0xFF >> (8-o)) in block   // a byte with o 1s
      z = z - (8 - o)
      fill rest of block with z/8 0x00 values
  } else if( z < o ) {
      assert( z > 0 && z < 8 )
      put the value (0xFF >> z) in block       // a byte with z 0s
      o = o - (8 - z)
      fill rest of block with o/8 0xFF values
  } else {
      assert( z == 4 && o == 4 )
      put the value 0xF0 in block
  }

> This expansion of the data will, I fear, like the autokey nature
> (making for poor error-recovery properties) of Dynamic Substitution,
> condemn Dynamic Transposition in the specific form described here to
> marginality.

Only if you try to balance very small blocks using the method which you invented, rather than using normal sized blocks (128 or 256 or 512 bits). If one compresses first, there's almost no data expansion, since with any good compression scheme, the output is almost always nearly bit-balanced.

> The market does not want to hear of encryption methods
> that are awkward - that can't be used, for example, to encrypt a
> sector of a disk and fit the encrypted result in a sector of a disk.

Well, considering that with Ritter's scheme for bit balancing, which I described above, there is *always* one byte of message expansion per block, we almost certainly would not want to use this to encrypt disk sectors -- simply because, for disk sector encryption, we want little or no expansion. However, for encrypting a stream, or a file [at user level], this is not as much of a problem. One byte of expansion for every 32 bytes of message is acceptable.

> Combining substitution and transposition - and using a Feistel-like
> structure to make the transposition key for one half of a block depend
> on the contents of the other half (thus obtaining the variability Mr.
> Ritter rightly notes as essential to avoid multiple anagramming) - is
> capable of producing a much more well-behaved and "conventional"
> cipher.

The "transposition key," which you speak of as if it were the key to a normal block cipher, is actually the key of a PRNG, whose output is fed into a function which shuffles bits. The PRNG is not rekeyed between blocks.

> This is not to say, however, that the concepts presented here are not
> useful. Terry Ritter has shown how one _could_, even at the cost of
> inconveniences people aren't usually willing to bear, make a secure
> cipher based on transposition alone.

The inconvenience is that of the expansion due to bit balancing blocks. With his scheme for doing this, there is very little expansion, and thus little inconvenience. Expansion is minimized if one compresses before encrypting, which is done anyway with most good schemes.

> This is of theoretical significance, and that, all by itself, is
> genuinely valuable.
>
> (It might also be noted that the requirement that each transposition
> step must uniformly be capable of producing all N! permutations of
> bits is likely to make the transposition steps computationally
> inefficient, so I think that this is another unfashionable aspect of
> the specific proposal given. Although in this case, unfashionable does
> not mean unsound: that is a desirable ideal.)
  for( i = N-1; i > 0; --i ) {
      swap( bit[i], bit[ PRNG_ranmod(i+1) ] )
  }

PRNG_ranmod *can* be made unbiased and fairly efficient, and if it is, then we will have an unbiased selection of all N! transpositions.

> The fact that bit balancing is not going to produce, by any reasonable
> algorithm, all N!/(N/2)!^2 possible bit-balanced blocks as input to
> the transposition, my earlier objection to this scheme, also impinges
> on the value of this...

I thought that the number of bit balanced, N bit blocks was (N/2)!(N/2)! Of course, I haven't done combinatorics in a while, so I might be wrong. Anyway, since every bit-balanced block is a permutation of any other bit-balanced block, it doesn't matter. If all encryption permutations are equally likely (which they are), your point is invalidated.

Your statement is akin to saying that since all 2^N plaintexts are not equally likely, OTP is insecure. My response is analogous to saying that if all pads used in OTP are equally likely, then it doesn't matter if the plaintexts are biased.

If a Truly Random string of bits is used instead of a PRNG, (and this string is used once), then, informationwise, this system is precisely as secure as OTP, AFAIKS.

--
Most scientific innovations do not begin with "Eureka!" They begin with
"That's odd. I wonder why that happened?"
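
PRNG_ranmod is not further specified in the thread; one standard way to make such a range reduction unbiased is rejection sampling, sketched here editorially on top of an assumed raw generator PRNG_next() that returns uniform 32-bit words:

  /* Editorial sketch of an unbiased PRNG_ranmod via rejection
     sampling.  PRNG_next() is an assumed raw keyed generator
     returning uniform 32-bit words; candidates in the short final
     interval that would cause modulo bias are discarded, so the
     result is uniform in 0..n-1. */

  #include <stdint.h>

  extern uint32_t PRNG_next(void);

  uint32_t PRNG_ranmod(uint32_t n) {       /* uniform in 0..n-1, n >= 1 */
      /* limit = largest multiple of n not exceeding 2^32 - 1 */
      uint32_t limit = UINT32_MAX - (UINT32_MAX % n);
      uint32_t r;
      do {
          r = PRNG_next();
      } while (r >= limit);                /* reject the biased tail */
      return r % n;
  }
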
Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 19 Jan 2001 08:56:50 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3a67fe34.1042739@news.powersurfr.com>
References: <3A67DE08.A9B31B76@earthlink.net>
Newsgroups: sci.crypt
Lines: 81

On Fri, 19 Jan 2001 06:26:38 GMT, Benjamin Goldberg
<goldbb2@earthlink.net> wrote, in part:

>only one of the balancing bytes will contain both ones and zeros, and
>that will be the first balancing byte. All of the rest of the balancing
>bytes, from there to the end of the file, will contain all ones, or all
>zeros.

Yes, you're right. I disliked simple padding enough not to think through whether or not it might be workable.

quoting me:
>> Of course, a *simple* way of dealing with this problem would be to
>> code 6 bits of binary input to an 8-bit byte in a 4 of 8 code, using
>> 64 of the 70 possible values. This would be a brute-force approach,
>> but it would mean that one would not have a data-dependent concern
>> with the block lengths. They could be uniform, or chosen randomly.

>Huh? I don't get what you're saying.

I considered just balancing each byte as I went along, simply because it was more straightforward.

>We have a nice big 128 bit block. We shuffle all the bits in it. We
>are not shuffling the bytes in it. If bits were balanced before
>shuffling the bits, they will certainly be balanced after we shuffle the
>bits.

That wasn't the problem I was thinking of. I was thinking of the data expansion needed to balance the bits - and an additional data expansion caused by the transposition messing up any shortcuts we took to do the balancing, so that I couldn't take the balanced output and convert it to the same number of unbalanced bits as I got as input.

>> The fact that bit balancing is not going to produce, by any reasonable
>> algorithm, all N!/(N/2)!^2 possible bit-balanced blocks as input to
>> the transposition, my earlier objection to this scheme, also impinges
>> on the value of this...

>I thought that the number of bit balanced, N bit blocks was (N/2)!(N/2)!
>Of course, I haven't done combinatorics in a while, so I might be wrong.

Here, I know I'm right. Arrange the characters of the string abcd1234 in all possible ways, and there are 8! possibilities. Now, replace each of the characters abcd by zeroes. You have eliminated the 4! different ways of arranging those letters, so the number of different possibilities is now 8!/4!. Replace each of 1234 by 1, and you have divided by 4! yet again.

Thus, 8*7*6*5/(4*3*2*1) = 2*7*5 = 70, the number of characters in a 4 of 8 code. (Which is why I can change 6 bits to 8 bits, get guaranteed bit balance, and not have to do anything funny, at the cost of extra redundancy.)

>If a Truly Random string of bits is used instead of a PRNG, (and this
>string is used once), then, informationwise, this system is precisely as
>secure as OTP, AFAIKS.

This is true, if the input is fully balanced, not just approximately, because the method calls for all N! permutations to be equally likely in this case. That's why I agree it is important from the theoretical standpoint: it shows that transposition can be used alone to create a cipher of serious strength.

I still ask, would anyone want to do it, because it appears to me this has a bandwidth cost that people will see no need to incur. I assumed that bit-balancing is done over smaller extents than transposition, because in the practical non-OTP case, the largest possible transposition block improves security.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm

Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 26 Jan 2001 14:08:19 -0700
From: "Tony T. Warnock" <u091889@cic-mail.lanl.gov>
Message-ID: <3A71E743.8F710ACB@cic-mail.lanl.gov>
References: <3a67fe34.1042739@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 39

The efficiency of the balanced pre-substitutions can be improved by taking more bits. Of course a table-lookup gets too big rather quickly.

   Output   Input
   Size     Length   Expansion
      6        4       50.0%
      8        6       33.3%
     10        7       42.9%
     12        9       33.3%
     14       11       27.3%
     16       13       23.1%
     18       15       20.0%
     20       17       17.6%
     22       19       15.8%
     24       21       14.3%
     26       23       13.0%
     28       25       12.0%
     30       27       11.1%
     32       29       10.3%
     64       60        6.7%
    128      124        3.2%
    256      251        2.0%
    512      507        1.0%
   1024     1018        0.6%

The 17 bit input is probably the largest that could be done by table lookup. I haven't tried to see if there is a function to do the mappings (which would make things more feasible.)

If one could live with a 1-bit bias, then the odd-sized outputs could be used. It's more efficient. A 15 bit input block then maps to a 17 bit output block. Or one could use the evenly balanced and 1-off blocks to improve efficiency. Then 15 bit input blocks map to 16 bit output blocks with 7, 8, or 9 bits.
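
The input lengths in this table are consistent with taking the largest k such that 2^k <= C(n, n/2) for an n-bit balanced output; a quick editorial check of that reading:

  /* Editorial check of the table above: for an n-bit balanced output,
     take the largest input length k with 2^k <= C(n, n/2), where
     log2 C(n, n/2) is computed via lgamma(). */

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      const int sizes[] = { 6, 8, 16, 32, 128, 1024 };
      for (int i = 0; i < 6; ++i) {
          int n = sizes[i];
          double bits = (lgamma(n + 1.0) - 2.0 * lgamma(n / 2.0 + 1.0))
                        / log(2.0);
          printf("output %4d -> input %4d bits\n", n, (int)floor(bits));
      }
      return 0;   /* reproduces the 6->4, 8->6, ..., 1024->1018 rows */
  }
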
Subject: Re: Dynamic Transposition Revisited (long)
Date: Thu, 18 Jan 2001 21:46:47 -0800
From: "John A. Malley" <102667.2235@compuserve.com>
Message-ID: <3A67D4C7.C6F79499@compuserve.com>
References: <3a67737e.15537635@news.io.com>
Newsgroups: sci.crypt
Lines: 98

Terry Ritter wrote:
[snip]
>
> One of the needed components which has not been described
> is the bit-balancing process.
>
> Bit-Balanced Blocks
>
> Some of the analytic and strength advantages of Dynamic
> Transposition depend upon having the same number of 1's
> and 0's in each block, which is called "bit-balance."
> Exact bit-balance can be achieved by accumulating data to
> a block byte-by-byte, only as long as the block can be
> balanced by adding appropriate bits at the end. Then the
> block is ciphered, and another block filled.
>
> We will always add at least one byte of "balance data,"
> at the end of a block, and only the first balance byte,
> immediately after the data, will contain both 1's and 0's.
> We can thus transparently remove the balance data by
> stepping from the end of the block, past the first byte
> containing both 1's and 0's.

I had a few immediate hunches about cryptanalysis of dynamic transposition but realized they may spring from misunderstanding of the algorithm.

May I paraphrase the description of block balancing to make sure I understand the mechanism as envisioned? And please, correct me if I got this wrong.

Given plaintext P,

1) divvy P into bytes as P[1], P[2], P[3] ... P[n],

2) build up (one at a time) blocks of size k bytes, B[1], B[2], B[3] ... B[m] where m < n, and sequential plaintext bytes are assigned to a given block B[i] where B[i] is the concatenation of a few plaintext bytes, followed by a special byte that has 0s and 1s in it, followed by bytes of all zeros or all ones - like

  P[1] | P[2] | ... | P[L] | a block of 1s and 0s | 00000000 | 11111111 | ... 00000000 = B[i]

or

  P[1] | P[2] | ... | P[L] | a block of 1s and 0s | "0" | "255" | ... "0" = B[i]

Is this an accurate description of the proposed bit balancing?

I'm curious about fixed bit patterns or expected bit patterns (if any) that end up in the appended plaintext, for they may serve as the chink in the armor for cryptanalysis.

For example, consider

  11111000 | 11111000 |

as plaintext, which is then followed by a "signal byte"

  11111000 | 11111000 | 00110011 |

to "balance out" the 1s and 0s in the first two bytes of plaintext, then followed by two bytes of all 0s and all 1s to round off the block as

  11111000 | 11111000 | 00101010 | 00000000 | 11111111

or, also acceptable,

  11111000 | 11111000 | 00101010 | 11111111 | 00000000

Can that "signal" byte take on only certain values when it "balances" the preceding bytes, and can the bytes of "0" and "255" value following occur in any order, or is it a fixed order?

Uh-oh. I just realized I must have this wrong. I came up with an example that won't balance the way I described above -

  10000000 | 00000000 |

there is no value I can plug into the "signal" byte that's a mixture of 1s and 0s, to follow these, that gets followed by all 0 and all 1s bytes, that balance the resulting block.

  10000000 | 00000000 | XXXXXXXX | 00000000 | 11111111

Please help, how does the bit-balancing work?

John A. Malley
102667.2235@compuserve.com
Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 19 Jan 2001 07:23:29 GMT
From: Benjamin Goldberg <goldbb2@earthlink.net>
Message-ID: <3A67EB6C.E9E2CD64@earthlink.net>
References: <3A67D4C7.C6F79499@compuserve.com>
Newsgroups: sci.crypt
Lines: 118

John A. Malley wrote:
>
> Terry Ritter wrote:
> [snip]
> >
> > One of the needed components which has not been described
> > is the bit-balancing process.
> >
> > Bit-Balanced Blocks
> >
> > Some of the analytic and strength advantages of Dynamic
> > Transposition depend upon having the same number of 1's
> > and 0's in each block, which is called "bit-balance."
> > Exact bit-balance can be achieved by accumulating data to
> > a block byte-by-byte, only as long as the block can be
> > balanced by adding appropriate bits at the end. Then the
> > block is ciphered, and another block filled.
> >
> > We will always add at least one byte of "balance data,"
> > at the end of a block, and only the first balance byte,
> > immediately after the data, will contain both 1's and 0's.
> > We can thus transparently remove the balance data by
> > stepping from the end of the block, past the first byte
> > containing both 1's and 0's.
>
> I had a few immediate hunches about cryptanalysis of dynamic
> transposition but realized they may spring from misunderstanding of
> the algorithm.
>
> May I paraphrase the description of block balancing to make sure I
> understand the mechanism as envisioned? And please, correct me if I
> got this wrong.
>
> Given plaintext P,
>
> 1) divvy P into bytes as P[1], P[2], P[3] ... P[n],
>
> 2) build up (one at a time) blocks of size k bytes, B[1], B[2], B[3]
> ... B[m] where m < n, and sequential plaintext bytes are assigned to
> a given block B[i] where B[i] is the concatenation of a few plaintext
> bytes, followed by a special byte that has 0s and 1s in it, followed
> by bytes of all zeros or all ones - like
>
> P[1] | P[2] | ... | P[L] | a block of 1s and 0s | 00000000 |
> 11111111 | ... 00000000 = B[i]
>
> or
>
> P[1] | P[2] | ... | P[L] | a block of 1s and 0s | "0" | "255" | ...
> "0" = B[i]
>
> Is this an accurate description of the proposed bit balancing?

Almost. It's more like

  if( P[1..L] has more 0s than 1s ) {
      P[1] | P[2] | ... | P[L] | XXXXXXXX | 11111111 | 11111111 = B[i]
      Where XXXXXXXX is some number of 1s and 0s.
  } else {
      P[1] | P[2] | ... | P[L] | XXXXXXXX | 00000000 | 00000000 = B[i]
      Where XXXXXXXX is some number of 0s and 1s.
  }

Only when the plaintext reaches EOF does it look at all like your example. Here's some C code, to [hopefully] make things clear:

  unsigned char balanced[16];  // 128 bits
  unsigned int bp = 0;         // pointer to balanced data
  int z = 0, o = 0;            // zeros, ones
  extern unsigned h(unsigned); // hamming weight (count of 1 bits)
  // peekc()/getc() assumed: peek at / consume the next input byte

  do {
      unsigned char rnext = peekc();
      int zn = z + 8 - h(rnext);  // zeros, if this byte is accepted
      int on = o + h(rnext);      // ones, if this byte is accepted
      if( abs(zn-on) >= 64 )      // note: a full implementation would
          break;                  // check against the space remaining
      balanced[bp++] = getc();
      z = zn; o = on;
  } while( bp < 15 );

  z = 64 - z;  // change from how many 0s and 1s are there,
  o = 64 - o;  // to how many are needed

  if( o < z ) {
      balanced[bp++] = 0xFF << (8-o);  // a byte with o 1s, high end
      while( bp < 16 )
          balanced[bp++] = 0x00;
  } else {
      balanced[bp++] = 0xFF >> z;      // a byte with z 0s, high end
      while( bp < 16 )
          balanced[bp++] = 0xFF;
  }

Thus, blocks are either raw data, balance byte, 0 or more 0xFF bytes, or raw data, balance byte, 0 or more 0x00 bytes.

To reverse the transform, we do the following:

if the last byte is 0x00, then we remove all 0x00 bytes from the end, and the first non-0x00 byte.
if the last byte is 0xFF, then we remove all 0xFF bytes from the end, and the first non-0xFF byte.

otherwise, we simply remove the last byte.

The data near the EOF is handled slightly differently, of course, but I'm not going to write code for it now (if ever).

> I'm curious about fixed bit patterns or expected bit patterns (if any)
> that end up in the appended plaintext, for they may serve as the chink
> in the armor for cryptanalysis.

Do expected bit patterns/fixed bit patterns in plaintext provide a chink in the armor of OTP?

--
Most scientific innovations do not begin with "Eureka!" They begin with
"That's odd. I wonder why that happened?"
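
The removal rule just described fits in a few lines. An editorial sketch (my code, not Goldberg's), returning the number of plaintext bytes remaining in a deciphered block:

  /* Editorial sketch of the balance-byte removal rule above: strip
     the trailing run of 0x00 (or 0xFF) bytes, then the one mixed
     balance byte; if the last byte is already mixed, strip it alone.
     Assumes len >= 1 and a well-formed balanced block. */

  #include <stddef.h>
  #include <stdint.h>

  size_t strip_balance(const uint8_t *block, size_t len) {
      size_t i = len;
      if (block[len - 1] == 0x00) {
          while (i > 0 && block[i - 1] == 0x00) --i;   /* trailing 0x00s */
      } else if (block[len - 1] == 0xFF) {
          while (i > 0 && block[i - 1] == 0xFF) --i;   /* trailing 0xFFs */
      }
      return (i > 0) ? i - 1 : 0;   /* drop the balance byte itself */
  }
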
Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 19 Jan 2001 18:44:20 +0100
From: Mok-Kong Shen <mok-kong.shen@t-online.de>
Message-ID: <3A687CF4.F4897FBE@t-online.de>
References: <3A67EB6C.E9E2CD64@earthlink.net>
Newsgroups: sci.crypt
Lines: 45

Benjamin Goldberg wrote:
>
> John A. Malley wrote:

> > May I paraphrase the description of block balancing to make sure I
> > understand the mechanism as envisioned? And please, correct me if I
> > got this wrong.
> >
> > Given plaintext P,
> >
> > 1) divvy P into bytes as P[1], P[2], P[3] ... P[n],
> >
> > 2) build up (one at a time) blocks of size k bytes, B[1], B[2], B[3]
> > ... B[m] where m < n, and sequential plaintext bytes are assigned to
> > a given block B[i] where B[i] is the concatenation of a few plaintext
> > bytes, followed by a special byte that has 0s and 1s in it, followed
> > by bytes of all zeros or all ones - like
> >
> > P[1] | P[2] | ... | P[L] | a block of 1s and 0s | 00000000 |
> > 11111111 | ... 00000000 = B[i]
> >
> > or
> >
> > P[1] | P[2] | ... | P[L] | a block of 1s and 0s | "0" | "255" | ...
> > "0" = B[i]
> >
> > Is this an accurate description of the proposed bit balancing?
>
> Almost. It's more like
>
>   if( P[1..L] has more 0s than 1s ) {
>       P[1] | P[2] | ... | P[L] | XXXXXXXX | 11111111 | 11111111 = B[i]
>       Where XXXXXXXX is some number of 1s and 0s.
>   } else {
>       P[1] | P[2] | ... | P[L] | XXXXXXXX | 00000000 | 00000000 = B[i]
>       Where XXXXXXXX is some number of 0s and 1s.
>   }
[snip]

I am certainly confused. What if, say, the block size is 4 bytes and one has (1) two bytes and (2) three bytes of information which are all 0's? Thanks.

M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 19 Jan 2001 21:53:33 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A68B75C.1FD8DA8D@earthlink.net> References: <3A687CF4.F4897FBE@t-online.de> Newsgroups: sci.crypt Lines: 59 Mok-Kong Shen wrote: > > Benjamin Goldberg wrote: > > > > John A. Malley wrote: > > > > May I paraphrase the description of block balancing to make sure I > > > understand the mechanism as envisioned? And please, correct me if > > > I got this wrong. > > > > > > Given plaintext P, > > > > > > 1) divvy P into bytes as P[1], P[2], P[3] ... P[n], > > > > > > 2) build up (one at a time) blocks of size k bytes, B[1], B[2], > > > B[3] ... B[m] where m < n, and sequential plaintext bytes are > > > assigned to a given block B[i] where B[i] is the concatenation of > > > a few plaintext bytes, followed by a special byte that has 0s and > > > 1s in it, followed by bytes of all zeros or all ones - like > > > > > > P[1] | P[2] | ... | P[L] | a block of 1s and 0s | 00000000 | > > > 11111111 | ... 00000000 = B[i] > > > > > > or > > > > > > P[1] | P[2] | ... | P[L] | a block of 1s and 0s | "0" | "255" | > > > ... "0" = B[i] > > > > > > Is this an accurate description of the proposed bit balancing? > > > > Almost. It's more like > > if( P[1..L] has more 0s than 1s ) { > > P[1] | P[2] | ... | P[L] | XXXXXXXX | 00000000 | 00000000 = B[i] > > Where XXXXXXXX is some number of 1s and 0s. > > } else { > > P[1] | P[2] | ... | P[L] | XXXXXXXX | 11111111 | 11111111 = B[i] > > Where XXXXXXXX is some number of 0s and 1s. > > } > [snip] > > I am certainly confused. What if, say, the block size is > 4 bytes and one has (1) two bytes and (2) three bytes of > information which are all 0's? Thanks. A 4 byte block is really small. If you have two bytes of 0x00s, then one byte goes into the first block, one balance byte 0x00 is added, and two balance bytes 0xFF 0xFF are added. The next 0x00 byte goes into the next block. With 3 0x00s it's similar. A 5 byte block is better. Two 0x00 bytes go in, then a 0x0F byte, then 0xFF 0xFF. With three 0x00s, two bytes go into the first block, then one into the next. Fortunately, we likely won't be using 4-byte blocks, but much larger ones, perhaps 15 or 16 bytes. -- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 19 Jan 2001 22:05:48 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a68b916.9441029@news.io.com> References: <3A68B75C.1FD8DA8D@earthlink.net> Newsgroups: sci.crypt Lines: 74 On Fri, 19 Jan 2001 21:53:33 GMT, in <3A68B75C.1FD8DA8D@earthlink.net>, in sci.crypt Benjamin Goldberg <goldbb2@earthlink.net> wrote: >Mok-Kong Shen wrote: >> >> Benjamin Goldberg wrote: >> > >> > John A. Malley wrote: >> >> > > May I paraphrase the description of block balancing to make sure I >> > > understand the mechanism as envisioned? And please, correct me if >> > > I got this wrong. >> > > >> > > Given plaintext P, >> > > >> > > 1) divvy P into bytes as P[1], P[2], P[3] ... P[n], >> > > >> > > 2) build up (one at a time) blocks of size k bytes, B[1], B[2], >> > > B[3] ... B[m] where m < n, and sequential plaintext bytes are >> > > assigned to a given block B[i] where B[i] is the concatenation of >> > > a few plaintext bytes, followed by a special byte that has 0s and >> > > 1s in it, followed by bytes of all zeros or all ones - like >> > > >> > > P[1] | P[2] | ... | P[L] | a block of 1s and 0s | 00000000 | >> > > 11111111 | ... 00000000 = B[i] >> > > >> > > or >> > > >> > > P[1] | P[2] | ... | P[L] | a block of 1s and 0s | "0" | "255" | >> > > ... "0" = B[i] >> > > >> > > Is this an accurate description of the proposed bit balancing? >> > >> > Almost. It's more like >> > if( P[1..L] has more 0s than 1s ) { >> > P[1] | P[2] | ... | P[L] | XXXXXXX | 00000000 | 00000000 = B[i] >> > Where XXXXXXXX is some number of 1s and 0s. >> > } else { >> > P[1] | P[2] | ... | P[L] | XXXXXXX | 11111111 | 11111111 = B[i] >> > Where XXXXXXXX is some number of 0s and 1s. >> > } >> [snip] >> >> I am certainly confused. What if, say, the block size is >> 4 bytes and one has (1) two bytes and (2) three bytes of >> information which are all 0's? Thanks. > >A 4 byte block is really small. If you have two bytes of 0x00s, then >one byte goes into the first block, one balance byte 0x00 is added, and >two balance bytes 0xFFFF are added. The next 0x00 byte goes into the >next block. With 3 0x00s it's similar. > >A 5 byte block is better. Two 0x00 bytes go in, then a 0x0F byte, then >0xFFFF. With three 0x00s, two bytes go into the first block, then one >into the next. > >Fortuneatly, we won't be likely to be using 4 byte blocks, but much >larger, perhaps 15 or 16 bytes. When I think of Dynamic Transposition, I think of 512-byte / 4096-bit blocks. This stuff scales almost linearly, and 512 bytes is just a small buffer. I suspect that if we run the numbers, some modest minimum size -- perhaps 128 bits / 16 bytes -- is required for serious strength. We can scale it down to look at, of course, but we can't expect strength there as well. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Wed, 24 Jan 2001 02:24:00 GMT From: AllanW <allan_w@my-deja.com> Message-ID: <94lebq$2gj$1@nnrp1.deja.com> References: <3a68b916.9441029@news.io.com> Newsgroups: sci.crypt Lines: 97 ritter@io.com (Terry Ritter) wrote: > When I think of Dynamic Transposition, I think of 512-byte / 4096-bit > blocks. This stuff scales almost linearly, and 512 bytes is just a > small buffer. > > I suspect that if we run the numbers, some modest minimum size -- > perhaps 128 bits / 16 bytes -- is required for serious strength. We > can scale it down to look at, of course, but we can't expect strength > there as well. Really? I think your original paper made a good case for allowing short block sizes. Although the 4-byte block seems to be stretching things too far, a 16-byte block would work well. In degenerate cases such as all 0's, this would allow 7 bytes of data followed by 9 bytes of filler. For typical ASCII data this would allow 10 bytes of data and 6 bytes of filler. For data that's already balanced or very nearly so, we could put 15 bytes of data into a 16-byte block. Larger block sizes would help to make the overhead smaller, though. The best case will always require 1 byte of filler, while the worst case will always require N/2+1 bytes of filler (for N-byte blocks). As I understood the filler bits to work, the data would look like this before the transposition begins: if the data had more 0-bits than 1-bits, it would take the form

    XX....XX 00..00 11..11

where X bits represent the data, and then there are 1-8 0-bits followed by 1 or more 1-bits. If the data has more 1-bits than 0-bits, simply reverse the filler:

    XX....XX 11..11 00..00

this time using 1-8 1-bits and then 1 or more 0-bits. Another way to say this is to show what the filler would look like for various imbalances. Here I'll assume that the data has more 0-bits than 1-bits; use the complement if the opposite is true. If there are B more 0-bits than 1-bits, then the filler looks like (in hex):

    B=0    0F
    B=2    1F
    B=4    3F
    B=6    7F
    B=8    0FFF
    B=10   1FFF
    B=12   3FFF
    B=14   7FFF
    B=16   0FFFFF
    B=18   1FFFFF
    B=20   3FFFFF
    B=22   7FFFFF
    B=24   0FFFFFFF
    ...and so on.

(Note that the number of data bits is always divisible by 8 and therefore an even number. This means that the number of 0-bits, minus the number of 1-bits, must also be an even number.) In the worst case for a 16-byte (128-bit) block, the data would be 56 0-bits. Then the program would add 8 more 0-bits and 64 1-bits. In the obvious best case, the data has 60 0-bits and 60 1-bits in any order. Then the filler is 0F -- 4 0-bits and 4 1-bits. The best case (15 bytes in a 16-byte block) is also achieved when there are up to 6 more 1-bits than 0-bits or vice-versa -- the filler still fits in one byte. If there is any weakness at all to this scheme, it's that even though you're using a fixed-size block, you cannot predict in advance how many blocks the message will need because each block contains between N/2-1 and N-1 bytes of data. On the other hand, this could be considered a strength because of the flip side of this observation: once you know how long the ciphertext is, you can calculate how many fixed-size blocks were used, but this does not tell you how long the original plaintext was. The larger the blocksize, the more true this is. Other than that, if there's any reason why a 16-byte block is less secure than a 512-byte block, it sure isn't obvious to me! (For whatever that's worth) -- Allan_W@my-deja.com is a "Spam Magnet," never read.
Please reply in newsgroups only, sorry. Sent via Deja.com http://www.deja.com/
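The filler table above is mechanical enough to generate. The following hedged sketch reproduces it, assuming the data block has B more 0-bits than 1-bits, with B even and nonnegative (complement the output for the opposite imbalance); make_filler is an illustrative name, not anything from the thread, and the sketch ignores the fit-in-the-block corner cases the posts go on to discuss.

    #include <stdio.h>

    /* Filler per the table above: one mixed flag byte with 4 + (B%8)/2
       one-bits in the low end (0F, 1F, 3F, or 7F), then B/8 bytes of
       0xFF.  Returns the filler length in bytes. */
    int make_filler(unsigned B, unsigned char out[])
    {
        int n = 0;
        out[n++] = 0xFF >> (4 - (B % 8) / 2);   /* mixed flag byte */
        for (unsigned i = 0; i < B / 8; i++)
            out[n++] = 0xFF;                    /* full balancing bytes */
        return n;
    }

    int main(void)
    {
        unsigned char f[8];
        for (unsigned B = 0; B <= 24; B += 2) {  /* reproduce the table */
            int n = make_filler(B, f);
            printf("B=%-2u  ", B);
            for (int i = 0; i < n; i++) printf("%02X", f[i]);
            printf("\n");
        }
        return 0;
    }

Run as-is, this prints 0F, 1F, 3F, 7F, 0FFF, 1FFF, and so on, matching the table entry for each imbalance B.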
Subject: Re: Dynamic Transposition Revisited (long) Date: Wed, 24 Jan 2001 06:37:41 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6e77ff.11155622@news.io.com> References: <94lebq$2gj$1@nnrp1.deja.com> Newsgroups: sci.crypt Lines: 166 On Wed, 24 Jan 2001 02:24:00 GMT, in <94lebq$2gj$1@nnrp1.deja.com>, in sci.crypt AllanW <allan_w@my-deja.com> wrote: >ritter@io.com (Terry Ritter) wrote: >> When I think of Dynamic Transposition, I think of 512-byte / 4096-bit >> blocks. This stuff scales almost linearly, and 512 bytes is just a >> small buffer. >> >> I suspect that if we run the numbers, some modest minimum size -- >> perhaps 128 bits / 16 bytes -- is required for serious strength. We >> can scale it down to look at, of course, but we can't expect strength >> there as well. > >Really? I think your original paper made a good case for >allowing short block sizes. Although the 4-byte block >seems to be stretching things too far, a 16-byte block >would work well. Which is what I just suggested. >In degenerate cases such as all 0's, >this would allow 7 bytes of data followed by 9 bytes of >filler. For typical ASCII data this would allow 10 bytes >of data and 6 bytes of filler. For data that's already >balanced or very nearly so, we could put 15 bytes of >data into a 16-byte block. Right. >Larger block sizes would help to make the overhead >smaller, though. Right. >The best case will always require 1 byte >of filler, Right. >while the worst case will always require N/2+1 >bytes of filler (for N-byte blocks). Looks right. >As I understood the filler bits to work, the data would >look like this before the transposition begins: if the >data had more 0-bits than 1-bits, it would take the form > XX....XX 00..00 11..11 >where X bits represent the data, and then there are 1-8 >0-bits followed by 1 or more 1-bits. If the data has >more 1-bits than 0-bits, simply reverse the filler: > XX....XX 11..11 00..00 >this time using 1-8 1-bits and then 1 or more 0-bits. That does not seem to be the way I did it. I don't understand having both 0's and 1's balancing bytes. If we have an excess of 0's, we want any full balancing bytes to be all 1's, with the mixed byte being what it needs to be. Since the mixed byte must be mixed to be a flag, it cannot balance a full 8 bits, but at most 6 (1000 0000 is an excess of 6 0's). >Another way to say this is to show what the filler >would look like for various imbalances. Here I'll >assume that the data has more 0-bits than 1-bits; use >the complement if the opposite is true. > > If there are B more 0-bits than 1-bits, then the > filler looks like (in hex): > > B=0 0F > B=2 1F > B=4 3F > B=6 7F > B=8 0FFF > B=10. 1FFF > B=12. 3FFF > B=14. 7FFF > B=16. 0FFFFF > B=18. 1FFFFF > B=20. 3FFFFF > B=22. 7FFFFF > B=24. 0FFFFFFF > ...and so on. Looks right. >(Note that the number of data bits is always divisible >by 8 and therefore an even number. This means that the >number of 0-bits, minus the number of 1-bits, must also >be an even number.) Right. In a fixed-size array of bits, changing a '1' to a '0' is to decrease the 1's-count by 1, and increase the 0's-count by 1, a difference of 2. >In the worst case for a 16-byte (128-bit) block, the >data would be 56 0-bits. Then the program would add 8 >more 0-bits and 64 1-bits. > >In the obvious best case, the data has 60 0-bits and >60 1-bits in any order. Then the filler is 0F -- 4 >0-bits and 4 1-bits. 
The best case (15 bytes in a >16-byte block) is also achieved when there are up to >6 more 1-bits than 0-bits or vice-versa -- the filler >still fits in one byte. Right. Thus the advantage of this particular balancing scheme: The flag byte is not wasted (or even counter-productive), but instead can participate in balancing. >If there is any weakness at all to this scheme, it's >that even though you're using a fixed-size block, you >cannot predict in advance how many blocks the message >will need because each block contains between N/2-1 and >N-1 bytes of data. Yes, but I think that's a strange thing to do anyway. Why would we predict the size in advance, when we obviously will traverse it soon? One might, I guess, allocate a buffer to hold the whole thing, but an alternative is to allocate big enough chunks as needed to make the allocation computation a nit. >On the other hand, this could be >considered a strength because of the flip side of this >observation: once you know how long the ciphertext is, >you can calculate how many fixed-size blocks were used, >but this does not tell you how long the original >plaintext was. The larger the blocksize, the more true >this is. Yes, I think so. >Other than that, if there's any reason why a 16-byte >block is less secure than a 512-byte block, it sure >isn't obvious to me! (For whatever that's worth) Right now, I don't see the issue as security. My main reason for using a large block would be to minimize balancing overhead, which would here be 1 byte in 16, or over 6 percent, minimum. Even 5 balancing bytes out of 512 would be under 1 percent. Also the larger block tends to reduce relative variation in balance, which is especially useful with random-like data. Another reason might be to reduce OS overhead in manipulating data. We certainly don't want to be asking the OS for 16-byte blocks, but of course we might well have a tight local buffering system. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 20:49:45 GMT From: AllanW <allan_w@my-deja.com> Message-ID: <94snt5$da4$1@nnrp1.deja.com> References: <3a6e77ff.11155622@news.io.com> Newsgroups: sci.crypt Lines: 87 > AllanW <allan_w@my-deja.com> wrote: > >As I understood the filler bits to work, the data would > >look like this before the transposition begins: if the > >data had more 0-bits than 1-bits, it would take the form > > XX....XX 00..00 11..11 > >where X bits represent the data, and then there are 1-8 > >0-bits followed by 1 or more 1-bits. I see now that I should have said, 1-7 0-bits and then 1 or more 1-bits. > >If the data has > >more 1-bits than 0-bits, simply reverse the filler: > > XX....XX 11..11 00..00 > >this time using 1-8 1-bits and then 1 or more 0-bits. And here I should have said: 1-7 1-bits and then 1 or more 0-bits. > >ritter@io.com (Terry Ritter) wrote: > That does not seem to be the way I did it. That's what I got out of it. Not word-for-word, of course. > I don't understand having both 0's and 1's balancing bytes. Surely you do...? It is so that we can always remove the balancing bytes without removing any meaningful data. What if the block has 16 more 0-bits than 1-bits, but the last byte of plaintext is 0x0F? You could balance the block by adding 2 more bytes of 0xFF, but then after decryption we could not identify the first byte of filler (as you say below: the mixed byte must be mixed to be a flag). > If we have an excess of 0's, we want any full balancing bytes to > be all 1's, with the mixed byte being what it needs to be. Since > the mixed byte must be mixed to be a flag, it cannot balance a > full 8 bits, but at most 6 (1000 0000 is an excess of 6 0's). Yes, exactly. And then following this, we have bytes with all 1-bits. I suppose that what I wrote above (1-7 0-bits, followed by 1 or more 1-bits) was stronger than absolutely needed. The mixed byte could be randomly mixed as well, so instead of using 1000-0000 we could just as easily have used 0001-0000. Is there any reason to do this randomly? If not, then my description fits the same pattern but is easier to describe. > >Another way to say this is to show what the filler > >would look like for various imbalances. Here I'll > >assume that the data has more 0-bits than 1-bits; use > >the complement if the opposite is true. > > > > If there are B more 0-bits than 1-bits, then the > > filler looks like (in hex): > > > > B=0 0F > > B=2 1F > > B=4 3F > > B=6 7F > > B=8 0FFF > > B=10. 1FFF > > B=12. 3FFF > > B=14. 7FFF > > B=16. 0FFFFF > > B=18. 1FFFFF > > B=20. 3FFFFF > > B=22. 7FFFFF > > B=24. 0FFFFFFF > > ...and so on. > > Looks right. Good, because in every case the first balance byte starts with 1-4 0-bits followed by 1-7 1-bits. -- Allan_W@my-deja.com is a "Spam Magnet," never read. Please reply in newsgroups only, sorry. Sent via Deja.com http://www.deja.com/
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 05:14:06 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a7258b2.3638786@news.io.com> References: <94snt5$da4$1@nnrp1.deja.com> Newsgroups: sci.crypt Lines: 27 On Fri, 26 Jan 2001 20:49:45 GMT, in <94snt5$da4$1@nnrp1.deja.com>, in sci.crypt AllanW <allan_w@my-deja.com> wrote: >[...] >> >ritter@io.com (Terry Ritter) wrote: >> That does not seem to be the way I did it. > >That's what I got out of it. Not word-for-word, of course. As John Savard pointed out, my description was insufficient. As it turns out, when I implemented this a decade ago, I included -- in addition to (hex) ff and 00 -- also a balanced padding value, which could be 55. Something like that is needed to fill out the last block which is generally only partly filled with data. The padding strictly follows the ff or 00 balancing bytes. So for decoding, we strip off the padding first, and then the ff's or 00's plus the flag byte. Near the end of accumulating data, if we have a condition such that if we don't put in a byte we have a short block, but if we do, we exceed the block, we can just put in a pad byte instead. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
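The decode order Ritter describes (padding first, then the balancing run, then the flag byte) can be sketched the same way. This is an illustration under assumptions, not his implementation: it assumes the (hex) 55 pad value he mentions, a block of n bytes, and ignores the end-of-data corner cases; strip_balance is a hypothetical helper name.

    #include <stddef.h>

    /* Sketch of the decode order described above: 1) strip hex 55 pad
       bytes from the end, 2) strip the run of ff or 00 balancing bytes,
       3) strip the mixed flag byte.  Returns the count of data bytes. */
    size_t strip_balance(const unsigned char b[], size_t n)
    {
        size_t end = n;
        while (end > 0 && b[end - 1] == 0x55)
            --end;                                /* 1: padding */
        if (end > 0 && (b[end - 1] == 0x00 || b[end - 1] == 0xFF)) {
            unsigned char fill = b[end - 1];
            while (end > 0 && b[end - 1] == fill)
                --end;                            /* 2: balancing run */
        }
        return end > 0 ? end - 1 : 0;             /* 3: flag byte */
    }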
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 19 Jan 2001 12:53:08 -0600 From: Mike Rosing <rosing@physiology.wisc.edu> Message-ID: <3A688D14.3CAC917E@physiology.wisc.edu> References: <3a67737e.15537635@news.io.com> Newsgroups: sci.crypt Lines: 24 Terry Ritter wrote: > > Dynamic Transposition Revisited > > Terry Ritter > > 2001 Jan 18 > > ABSTRACT > > A novel approach to block ciphering is remembered for > relevance to modern ciphering issues of strength, analysis > and proof. A rare example of a serious transposition > cipher, it is also an unusual example of the simultaneous > use of block and stream techniques. [...] Thanks Terry, that was nice writing. I agree that double transposition is useful, otherwise one could accumulate lots of plaintext-ciphertext pairs and begin to attack the RNG. Doubling the transpositions squares the number of possible states, so brute force is probably easier. Patience, persistence, truth, Dr. mike
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 19 Jan 2001 22:18:40 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a68ba93.9822316@news.io.com> References: <3A688D14.3CAC917E@physiology.wisc.edu> Newsgroups: sci.crypt Lines: 99 On Fri, 19 Jan 2001 12:53:08 -0600, in <3A688D14.3CAC917E@physiology.wisc.edu>, in sci.crypt Mike Rosing <rosing@physiology.wisc.edu> wrote: >Terry Ritter wrote: >> >> Dynamic Transposition Revisited >> >> Terry Ritter >> >> 2001 Jan 18 >> >> ABSTRACT >> >> A novel approach to block ciphering is remembered for >> relevance to modern ciphering issues of strength, analysis >> and proof. A rare example of a serious transposition >> cipher, it is also an unusual example of the simultaneous >> use of block and stream techniques. >[...] > >Thanks Terry, that was nice writing. I agree that double transposition >is useful, otherwise one could accumulate lots of plaintext-ciphertext >pairs and begin to attack the RNG. Doubling the transpositions squares >the number of possible states, so brute force is probably easier. Thanks, but unfortunately it seems the author (that would be me) was unable to make his point. Maybe I can try again: The Point The main lesson is not about double-transposition, but balance. If all we do is double-shuffle, the ciphertext still contains only the symbols present in the plaintext, and that leaks information. By balancing the block, we eliminate that leak. If we accept that a properly-hidden key of a few hundred bits is "long enough," Dynamic Transposition should be considered stronger in practice than a one-time-pad. Why? Because we generally cannot prove that the OTP sequence is unpredictable, while Dynamic Transposition hides predictability behind several closed doors. Main strength is provided by the vast number of permutations -- and thus keys -- which produce the exact same ciphertext, thus cutting off attack on the RNG before we get to the double-shuffling issue. Also note that because many keys produce the same result, in some sense, the keying is not "efficient," which here is an advantage. The Number of Identical Results The numbers scale factorially, and I normally think of having a 4096-bit block size (that is only 512 bytes, a small buffer). But suppose we have a tiny block of just 128 bits, so N = 128: We have 64 1's and 64 0's, so many permutations will produce exactly the same result value. How many? Well, imagine scanning across any particular ciphertext block: the first symbol we get can have come from the 64 different positions of that symbol in the original plaintext. The next time we find that same symbol, it can have come from any position other than that already used. Clearly, there are 64 * 63 * ... * 2 or (N/2)! possibilities for the 1's, and the same number for the 0's. (We can evaluate huge factorials on my combinatorics page: http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM#Factorials . Typically I use the base-2 log value plus 1, which is the number of bits in the result value.) The number of different permutations which produce identical results (with a bit-balanced block) is (N/2)! * (N/2)!. 64! is a value of 296 bits, and in log form we multiply by adding, so the result is 592 bits long. Exactly one permutation from among this number is correct, and only that one could provide exact entry into the shuffling sequence. All this assumes the opponents know both the plaintext and the ciphertext for the block.
Given known-plaintext, there is no problem in finding permutations which satisfy the transformation. The problem is that there are 2**592 of these easy-to-find alternatives, and there is no easy test to find which is correct. Only *after* the correct permutation is found does double-shuffling come into play as it further hides the shuffling sequence. Summary The reason Dynamic Transposition is strong is that opponents cannot identify the correct permutation from among all possible permutations which produce the exact same ciphertext from the exact same plaintext. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
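The bit-lengths quoted here are easy to check numerically. A quick sketch using the standard C lgamma function (lgamma(n+1) equals ln(n!)), not code from the thread, for the two block sizes discussed:

    #include <math.h>
    #include <stdio.h>

    /* Number of bits in n!, i.e. log2(n!), via the log-gamma function.
       Checks the figures above: log2(64!) is about 296 bits, so
       (64!)^2 is about 592 bits; the 4096-bit block gives roughly
       43250 bits for 4096! and 39160 bits for (2048!)^2. */
    static double log2_factorial(double n)
    {
        return lgamma(n + 1.0) / log(2.0);
    }

    int main(void)
    {
        printf("log2(64!)           = %.1f bits\n", log2_factorial(64));
        printf("log2(64! * 64!)     = %.1f bits\n", 2.0 * log2_factorial(64));
        printf("log2(4096!)         = %.1f bits\n", log2_factorial(4096));
        printf("log2(2048! * 2048!) = %.1f bits\n", 2.0 * log2_factorial(2048));
        return 0;
    }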
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 20 Jan 2001 00:07:26 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a68d589.14676219@news.powersurfr.com> References: <3a68ba93.9822316@news.io.com> Newsgroups: sci.crypt Lines: 25 On Fri, 19 Jan 2001 22:18:40 GMT, ritter@io.com (Terry Ritter) wrote, in part: >The main lesson is not about double-transposition, but balance. If >all we do is double-shuffle, the ciphertext still contains only the >symbols present in the plaintext, and that leaks information. By >balancing the block, we eliminate that leak. I missed that too on my first reading; a reply to my initial post by Emmanual Goldberg got me to look more carefully and see that. You are indeed correct: if one processes plaintexts which have equal numbers of 0 and 1 bits because of processing, and one produces all possible transpositions with, in a sense, 'equal probability', one can do as well with transposition as the one-time-pad does with substitution. This is important to note theoretically. I've noted, though, that I don't think people will see it as terribly attractive or convenient in practice. The ICE block cipher is already a cipher which makes a more limited use of transposition in a way that more readily 'fits in' with conventional practice, so transposition isn't totally neglected. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 20 Jan 2001 05:45:57 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a69259b.5953269@news.io.com> References: <3a68d589.14676219@news.powersurfr.com> Newsgroups: sci.crypt Lines: 47 On Sat, 20 Jan 2001 00:07:26 GMT, in <3a68d589.14676219@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Fri, 19 Jan 2001 22:18:40 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>The main lesson is not about double-transposition, but balance. If >>all we do is double-shuffle, the ciphertext still contains only the >>symbols present in the plaintext, and that leaks information. By >>balancing the block, we eliminate that leak. > >I missed that too on my first reading; a reply to my initial post by >Emmanual Goldberg got me to look more carefully and see that. > >You are indeed correct: if one processes plaintexts which have equal >numbers of 0 and 1 bits because of processing, and one produces all >possible transpositions with, in a sense, 'equal probability', one can >do as well with transposition as the one-time-pad does with >substitution. > >This is important to note theoretically. I've noted, though, that I >don't think people will see it as terribly attractive or convenient in >practice. The ICE block cipher is already a cipher which makes a more >limited use of transposition in a way that more readily 'fits in' with >conventional practice, so transposition isn't totally neglected. Based on long experience, I do not try to predict what people will or will not see. I just present the material. But if the material does turn out to gain some acceptance, well, the crypto text authors and others who have ignored this published work for more than a decade just may have some 'splainin' to do. Dynamic Transposition is not just another block cipher, and also not just an example of a serious transposition cipher. In practice, Dynamic Transposition is stronger than an OTP, yet uses keys of reasonable size. That may be of fundamental real interest. And Dynamic Transposition strength arguments are not based on unproven assumptions. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 20 Jan 2001 07:33:40 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a693da4.3605731@news.powersurfr.com> References: <3a69259b.5953269@news.io.com> Newsgroups: sci.crypt Lines: 200 On Sat, 20 Jan 2001 05:45:57 GMT, ritter@io.com (Terry Ritter) wrote, in part: >Based on long experience, I do not try to predict what people will or >will not see. I just present the material. But if the material does >turn out to gain some acceptance, well, the crypto text authors and >others who have ignored this published work for more than a decade >just may have some 'splainin' to do. Right now, I think that all that will be said is "Yes, transposition is a valid way to transform the set of N-bit strings containing N/2 one bits onto itself such that any member of that set can be transformed to any other member...we mathematicians knew it all along, but found it to be quite trivial". >Dynamic Transposition is not just another block cipher, and also not >just an example of a serious transposition cipher. I have not yet plumbed the depths of your paper deeply enough to see the truth in this statement. Just by being a 'serious transposition cipher', it puts out in the open a possible case that no one has troubled to explicitly mention. That's already of genuine value. It has provoked some thoughts on my part. Thought 1: The NSA could actually have used a degenerate case of Dynamic Transposition. Let us take a data stream, and convert it 6 bits at a time to symbols in a 4 of 8 code. Let us use a sophisticated stream cipher mechanism to generate successive permutations of the 8 bits from the set of 8! possibilities. The advantage of enciphering our 64 symbols to symbols from a set of 70? The combiner always receives 4 live signals out of 8, and just switches the 8 input paths to 8 output paths. This, unlike a conventional substitution, which converts live signals to dead and vice versa, could have a much quieter TEMPEST signature. Thought 2: What I disliked about Dynamic Transposition is that it seemed to impose bandwidth losses. If one converts the input data stream once to N-bit blocks, and then does all one's transposition on N-bit blocks, not larger ones, these losses are avoided. But it would be nice to transpose between larger blocks. (I'm fond of fractionation.) I've thought of a way to do it. Take each N-bit block and divide it into 2-bit symbols. Leave 11 and 00 alone. The symbols 01 and 10 found in successive blocks can be pooled together, and transposed _between blocks_ while keeping each individual block balanced. This is *not* particularly strong in itself, I admit. But the N-bit transpositions can still meet your conditions for perfection. Of course, when I avoid cumulative bandwidth loss religiously, I am causing Dynamic Transposition to degenerate into a polyalphabetic block cipher, which just happens to use blocks of N/2 ones and N/2 zeroes instead of blocks of N arbitrary bits. From your later comments, it appears I am losing something important by so doing. >In practice, Dynamic Transposition is stronger than an OTP, yet uses >keys of reasonable size. That may be of fundamental real interest. What you mean by this claim may be perfectly true. All I can say, though, is that it will be read - even if misread - as saying things that cannot be true. My main criticism of Dynamic Transposition is that it has certain minor practical inconveniences that will cause it to be neglected.
But this paragraph is going to inspire some people to say you've 'flipped your lid' - as I think you are perfectly well able to anticipate. As I don't really believe you enjoy that, I fear I must be so bold as to remind you that when you make bold claims, it is incumbent upon you to explain them carefully. You are well within your rights not to be overly solicitous of those who spring into action whenever they feel the Great God OTP has been blasphemed; but since the reputation of the OTP _is_ based on a mathematically-valid insight, if you do not explain yourself carefully, it will not be only the ignorant who will be moved to dismiss what you say. >And Dynamic Transposition strength arguments are not based on unproven >assumptions. Why substituting a block with 2^n values for another of those 2^n values is unproven, while substituting one with (2n)!/(n!*n!) values for another of those (2n)!/(n!*n!) values is not is less than clear to me at this point. All I see is isomorphism - over universes of possible blocks which happen to have a less convenient number of elements. Of course, the algebraic structure of bit-balanced blocks subjected to transpositions - as opposed to that of regular binary strings XORed with values - is such that every change nicely propagates through the whole block. At least, as long as the details of how one's stream cipher output is converted to a bit-permutation are not specified. That's the only thing I can think of right now that could be called a 'strength argument' uniquely applicable to Dynamic Transposition. Of course, Dynamic Transposition does have an advantage over static block ciphers; and the need to avoid multiple anagramming forces one to change the cipher with each block. I just don't see why one needs to be 'forced' in this manner to do something; one can use 8-round DES, with subkeys generated by a sophisticated stream cipher, and different for every block, and avoid all the inconveniences of Dynamic Transposition, and gain all the advantages (64 bits of the 96 bits of any two successive subkeys - the middle 4 bits of each 6 - meet the classical OTP condition). The only thing I see as of *practical* interest in Dynamic Transposition right now is that the algebra of N-bit bit-balanced blocks under bit transposition is quite different from the algebra of ordinary binary blocks, so if one could mix Dynamic Transposition with more conventional encryption without bandwidth losses, one could frustrate analysis. 
For interest's sake, I will provide the following short table here (large counts are written with their digits grouped in tens):

    Balanced bit strings

      2                                2
      4                                6
      6                               20
      8                               70
     10                              252
     12                              924
     14                             3432
     16                            12870
     18                            48620
     20                           184756
     22                           705432
     24                          2704156
     26                         10400600
     28                         40116600
     30                        155117520
     32                        601080390
     34                       2333606220
     36                       9075135300
     38                     3 5345263800
     40                    13 7846528820
     42                    53 8257874440
     44                   210 4098963720
     46                   823 3430727600
     48                  3224 7603683100
     50                 12641 0606437752
     52                 49591 8532948104
     54                194693 9425648112
     56                764869 0600760440
     58               3006726 6499541040
     60              11826458 1564861424
     62              46542835 3255261088
     64             183262414 0942590534
     66             721942843 4016265740
     68            2845304147 5240576740
     70          1 1218627781 6662845432
     72          4 4251254027 6836779204
     74         17 4613056433 5626209832
     76         68 9262064869 3261354600
     78        272 1701486919 9032015600
     80       1075 0720873333 6176461620
     82       4247 8458084879 1721628840
     84      16789 1048621189 1090247320
     86      66375 5308502375 5473070800
     88     262485 0538168485 1188961800
     90    1038274 2128755341 1369671120
     92    4107954 4944205914 9332177040
     94   16257011 4034517025 0548615520
     96   64350670 1386629890 8421603100
     98  254776122 5898085690 2730428600
    100 1008913445 4556419333 4812497256

Thus: There are 252 strings of 10 bits with five one bits and five zero bits. So, take a message composed of binary bytes, selecting four values to be ignored - skipped for this encryption step. Convert the other values, and then perform a Dynamic Transposition on the 10 bit blocks, then convert everything back to normal bytes. One has performed a distinct kind of combiner operation, something genuinely different from XOR or modulo-256 add. That in itself is of some value, although, as you doubtless will note, it is a woefully degenerate case of what you are presenting. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
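The counts in this table are central binomial coefficients, C(n, n/2). A minimal sketch that reproduces the smaller entries exactly (64-bit arithmetic holds out to n = 66; beyond that one would need bignums or logarithms); choose is an illustrative helper, not code from the thread:

    #include <stdio.h>

    /* C(n, k) computed iteratively; each intermediate c*(n-k+i) is an
       exact multiple of i, so the division is exact at every step. */
    static unsigned long long choose(unsigned n, unsigned k)
    {
        unsigned long long c = 1;
        for (unsigned i = 1; i <= k; i++)
            c = c * (n - k + i) / i;
        return c;
    }

    int main(void)
    {
        for (unsigned n = 2; n <= 32; n += 2)   /* first half of the table */
            printf("%3u  %llu\n", n, choose(n, n / 2));
        /* choose(10, 5) == 252, the figure used in the byte-recoding
           example above */
        return 0;
    }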
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 20 Jan 2001 07:01:28 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A6937C9.3094768A@earthlink.net> References: <3a68d589.14676219@news.powersurfr.com> Newsgroups: sci.crypt Lines: 13 John Savard wrote: [snip] > I missed that too on my first reading; a reply to my initial post by > Emmanual Goldberg got me to look more carefully and see that. ^^^^^^^^ Where'd this name come from? That's not me. And I don't see how my name, "Benjamin" could be misspelled or confused with "Emmanual," which probably doesn't even belong to any poster here on sci.crypt. -- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 20 Jan 2001 13:31:09 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6992ff.221801@news.powersurfr.com> References: <3A6937C9.3094768A@earthlink.net> Newsgroups: sci.crypt Lines: 11 On Sat, 20 Jan 2001 07:01:28 GMT, Benjamin Goldberg <goldbb2@earthlink.net> wrote, in part: >Where'd this name come from? That's not me. And I don't see how my >name, "Benjamin" could be misspelled or confused with "Emmanual," which >probably doesn't even belong to any poster here on sci.crypt. Sorry, a Freudian slip. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 20 Jan 2001 01:41:33 GMT From: "Matt Timmermans" <matt@timmermans.nospam-remove.org> Message-ID: <h16a6.124700$f36.5169286@news20.bellglobal.com> References: <3a67737e.15537635@news.io.com> Newsgroups: sci.crypt Lines: 51 I do think that this makes for an interesting cipher. It is "perfect" in exactly the same way as OTP is perfect. The benefit over OTP is that it's somewhat resistant to bit-flipping, though not as resistant as block ciphers are. If you have an unbreakable keyed RNG, though, then there are lots of easy ways to make secure ciphers. It would take a lot of work to show that the system is secure with a real RNG, though. In particular, this bit doesn't work: > Known-plaintext attacks would be a first step in an attempt > to attack the RNG as in a stream cipher. But with Dynamic > Transposition, known-plaintext does not disclose the > enciphering permutation, because a plethora of different > bit-permutations will produce exactly the same ciphertext > from exactly the same plaintext. A successful attack > would thus require some way to distinguish more probable > permutations from less probable ones. Such an attack would > seem to be avoided by proper shuffling. If you don't know anything about the RNG, then there's no such thing as a known-plaintext attack. That works for XOR stream ciphers too. When you do know something about the RNG, then a known-plaintext attack against a dynamic transposition cipher does not necessarily start by guessing the permutation for a single block. With 4096-bit blocks, one block of known plaintext gives you over 4000 bits of information about the state of the generator -- there may be 2^39000 permutations that give you the same output, but there are 2^43000 possible permutations, so you get 4000 bits or so about the state of the generator itself, and this is only marginally less than you would get with an XOR stream cipher and a block of the same size. In this case, though, the actual number of plaintext bits you need to make the block might be less than 4096. How useful is this information? As with an XOR combiner, it depends on the RNG. The real question is whether or not I can extract a reasonably concise statement about the generator given this information, and whether I can combine such statements from multiple blocks until I know the generator state. In theory, your combiner is like using the plaintext to select some generator bits that you throw away. It's probably a good idea for amplifying the security of a stream cipher, but it's not provably secure in any strong way, and there are easier/faster ways to do it that also let you choose how many bits to toss (90% is a lot).
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 20 Jan 2001 05:43:52 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a692576.5916670@news.io.com> References: <h16a6.124700$f36.5169286@news20.bellglobal.com> Newsgroups: sci.crypt Lines: 196 On Sat, 20 Jan 2001 01:41:33 GMT, in <h16a6.124700$f36.5169286@news20.bellglobal.com>, in sci.crypt "Matt Timmermans" <matt@timmermans.nospam-remove.org> wrote: >I do think that this makes for an interesting cipher. It is "perfect" in >exactly the same way as OTP is perfect. The benefit over OTP is that it's >somewhat resistant to bit-flipping, though not as resistant as block ciphers >are. You need to be more explicit. In Dynamic Transposition, each block is ciphered independently under a running keystream starting from a random message key. The keystream does not re-originate so that one can run various attempts at identifying which bits to flip. One can change bits in transit if one wishes, but since one does not know what effect that will have, it is unclear what one could do with it. >If you have an unbreakable keyed RNG, though, then there are lots of easy >ways to make secure ciphers. Dynamic Transposition is not about having an unbreakable keyed RNG. If we had that, nobody would need anything more than a standard OTP additive combiner. But we *don't* have an unbreakable keyed RNG, and that is the problem. Dynamic Transposition is about building an unbreakable cipher *without* needing an unbreakable RNG. This is done by hiding the breakable sequence behind multiple levels of haystack, each so massive that they cannot be searched. So the sequence in question cannot be revealed, which makes it somewhat difficult to attack. >It would take a lot of work to show that the >system is secure with a real RNG, though. In particular, this bit doesn't >work: > >> Known-plaintext attacks would be a first step in an attempt >> to attack the RNG as in a stream cipher. But with Dynamic >> Transposition, known-plaintext does not disclose the >> enciphering permutation, because a plethora of different >> bit-permutations will produce exactly the same ciphertext >> from exactly the same plaintext. A successful attack >> would thus require some way to distinguish more probable >> permutations from less probable ones. Such an attack would >> seem to be avoided by proper shuffling. > >If you don't know anything about the RNG, then there's no such thing as a >known-plaintext attack. Allow me to teach what a known-plaintext attack is: Known-plaintext is nothing more than the situation of the opponent having one or more ciphertext blocks with the associated plaintext blocks. It is quite possible to have that situation without knowing anything of the RNG. A known-plaintext attack is any attack which capitalizes on this information situation (as opposed, say, to ciphertext-only). Known-plaintext is a danger to many cipher systems, because to some extent it reveals the ciphering transformation. In an additive stream cipher or conventional OTP, known-plaintext reveals the enciphering sequence immediately and completely. In a block cipher, it shows what happens when plaintext is enciphered; it exposes the transformation, which can be very useful. Dynamic Transposition is unusual in that knowing the plaintext and the associated ciphertext does not reveal the enciphering permutation. The reason for this is that many different bit-permutations will produce the bit-for-bit exact same transformation between plaintext and ciphertext. 
Therefore, having known plaintext does not reveal the enciphering permutation, and thus cannot be exploited to begin to expose the RNG sequence. Note that even if the correct permutation were found, the RNG sequence which created it would not be exposed to any extent, if double-shuffling was used to create the permutation. The reason for this is that double-shuffling would use twice the amount of RNG sequence as needed for a selection among all possible permutations. Double-shuffling is literally an arbitrary permutation from a known state, and then another arbitrary permutation from that to any possible permutation. Any particular result permutation could be the result of any possible first permutation, with the appropriate second permutation, or vice versa. Accordingly, one knows no more about what sequence was involved than before one knew the correct permutation. >That works for XOR stream ciphers too. When you do >know something about the RNG, then a known-plaintext attack against a >dynamic transposition cipher does not necessarily start by guessing the >permutation for a single block. Known plaintext means having one or more blocks for which one knows both the ciphertext and the associated plaintext, only that. This is an informational constraint, not an algorithmic direction for attack. Normally, though, the attack direction is obvious given this information. The structure of the RNG is of course known. The content of the RNG state is expanded from a large random message key. The sequence produced by the RNG is not known, nor does the cipher allow it to be exposed. >With 4096-bit blocks, one block of known plaintext gives you over 4000 bits >of information about the state of the generator Not true, as far as I know. Certainly, as a completely unsupported assertion, it is not believable on faith alone. Known plaintext simply does not identify the correct permutation produced by the confusion sequence. It certainly does not identify the sequence which produced that permutation. >-- there may be 2^39000 >permutations that give you the same output, but there are 2^43000 possible >permutations, so you get 4000 bits or so about the state of the generator >itself, We need many different permutations, and of course, we get them. We use a mathematically-proven permutation system, with an RNG which has sufficient internal state to produce any possible permutation. Each one of every possible permutation is equally probable. This exposes no information at all. The number of different permutations that will produce the exact same ciphertext block from the same plaintext block is (N/2)! * (N/2)! The number of possible permutations is N! The ratio between these two values is the number of distinct ciphertext blocks which can be produced from a given plaintext block. This effect does divide the permutation space into "clumps" of permutations which all produce the same ciphertext block from a particular plaintext block. But the next block will have a completely different permutation, a different plaintext, and thus a completely different clump structure. That is an innocuous effect. >and this is only marginally less than you would get with an XOR >stream cipher and a block of the same size. In this case, though, the >actual number of plaintext bits you need to make the block might be less >than 4096. > >How useful is this information? As with an XOR combiner, it depends on the >RNG. No. That is simply wrong.
With any additive combiner, having known-plaintext reveals the confusion sequence sent to the combiner exactly, completely and immediately. If the confusion sequence comes straight from the RNG, we thus have the exact sequence needed to attack the RNG. In contrast, with Dynamic Transposition, having known-plaintext reveals nothing about the RNG sequence, so there is no information available to attack the RNG. >The real question is whether or not I can extract a reasonably concise >statement about the generator given this information, and whether I can >combine such statements from multiple blocks until I know the generator >state. > >In theory, your combiner is like using the plaintext to select some >generator bits that you throw away. No. >It's probably a good idea for >amplifying the security of a stream cipher, No. Dynamic Transposition is a block cipher. >but it's not provably secure in >any strong way, I suspect it is provably secure, and furthermore does not depend upon unproven mathematical assumptions. >and there are easier/faster ways to do it that also let you >choose how many bits to toss (90% is a lot). I believe there are other unbreakable ciphers. Still, people do keep talking about the OTP. Dynamic Transposition gives us a way to achieve believable practical security beyond that available from a classic OTP, while using keys of reasonable size. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
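The thread never spells out the shuffling procedure itself. One standard realization of a "mathematically-proven permutation system" is the Fisher-Yates (Durstenfeld) shuffle driven by the keyed generator; the following is a minimal sketch under that assumption, where rand_below(n), returning a uniform value in [0, n), is an assumed helper for the keyed RNG, not anything defined in the thread.

    /* bits[] holds one block, one bit per element.  A single
       Fisher-Yates pass can reach every permutation of the block;
       running two passes consumes twice the RNG sequence, so any
       resulting permutation corresponds to very many different
       generator sequences, which is the double-shuffling argument
       made above. */
    extern unsigned rand_below(unsigned n);   /* keyed RNG (assumed) */

    static void shuffle_bits(unsigned char bits[], unsigned n)
    {
        for (unsigned i = n - 1; i > 0; i--) {
            unsigned j = rand_below(i + 1);   /* 0 <= j <= i */
            unsigned char t = bits[i];
            bits[i] = bits[j];
            bits[j] = t;
        }
    }

    static void dynamic_transpose(unsigned char bits[], unsigned n)
    {
        shuffle_bits(bits, n);   /* first pass */
        shuffle_bits(bits, n);   /* second pass hides the sequence */
    }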
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 20 Jan 2001 18:31:35 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A69CB77.D6ABA21B@t-online.de> References: <3a692576.5916670@news.io.com> Newsgroups: sci.crypt Lines: 69 Terry Ritter wrote: > [snip] > Dynamic Transposition is not about having an unbreakable keyed RNG. > If we had that, nobody would need anything more than a standard OTP > additive combiner. But we *don't* have an unbreakable keyed RNG, and > that is the problem. > > Dynamic Transposition is about building an unbreakable cipher > *without* needing an unbreakable RNG. This is done by hiding the > breakable sequence behind multiple levels of haystack, each so massive > that they cannot be searched. So the sequence in question cannot be > revealed, which makes it somewhat difficult to attack. [snip] > Known-plaintext is a danger to many cipher systems, because to some > extent it reveals the ciphering transformation. In an additive stream > cipher or conventional OTP, known-plaintext reveals the enciphering > sequence immediately and completely. In a block cipher, it shows what > happens when plaintext is enciphered; it exposes the transformation, > which can be very useful. > > Dynamic Transposition is unusual in that knowing the plaintext and the > associated ciphertext does not reveal the enciphering permutation. > The reason for this is that many different bit-permutations will > produce the bit-for-bit exact same transformation between plaintext > and ciphertext. Therefore, having known plaintext does not reveal the > enciphering permutation, and thus cannot be exploited to begin to > expose the RNG sequence. > > Note that even if the correct permutation were found, the RNG sequence > which created it would not be exposed to any extent, if > double-shuffling was used to create the permutation. The reason for > this is that double-shuffling would use twice the amount of RNG > sequence as needed for a selection among all possible permutations. > Double-shuffling is literally an arbitrary permutation from a known > state, and then another arbitrary permutation from that to any > possible permutation. Any particular result permutation could be the > result of any possible first permutation, with the appropriate second > permutation, or vise versa. Accordingly, one knows no more about what > sequence was involved than before one knew the correct permutation. [snip] I agree with what you said. Basically, it is the 'indirectness' that makes the exploitation of the output sequence of the (non-perfect) PRNG practically infeasible (the PRNG output is used to reorder the bits, it is not directly exposed to the opponent in case he possesses the plaintext). Using double permutation reduces the probability of prediction of the PRNG even further. Thus, as you said (later), the scheme can achieve very high 'believable practical security'. On the other hand, I wouldn't use the term 'provably secure' on it, because (1) one 'needs' in practice only sufficiently high practical security and (2) a 'proof' would be problematical in the following sense: One could ask: If double permutation improves the matter, then triple permutation certainly should improve it still further (there can be no reason why the number 2, corresponding to 'double', has a 'magic' power in the present issue). 
It is also apparent that (very naturally) the higher security is (like a number of other schemes proposed) obtained at the cost of efficiency (here due to the many minute operations at the bit level), though in my conviction the efficiency issue does not preclude the usefulness of a scheme in circumstances where lower efficiency is well tolerable. M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 20 Jan 2001 22:47:26 GMT From: "Matt Timmermans" <matt@timmermans.nospam-remove.org> Message-ID: <2Aoa6.127512$f36.5314097@news20.bellglobal.com> References: <3a692576.5916670@news.io.com> Newsgroups: sci.crypt Lines: 141 "Terry Ritter" <ritter@io.com> wrote in message news:3a692576.5916670@news.io.com... > You need to be more explicit. In Dynamic Transposition, each block is > ciphered independently under a running keystream starting from a > random message key. Each block is ciphered independently only if the RNG sequence for multiple blocks is independent. Otherwise, there will be correlations between blocks in the permutations chosen. > Dynamic Transposition is about building an unbreakable cipher > *without* needing an unbreakable RNG. This is done by hiding the > breakable sequence behind multiple levels of haystack, each so massive > that they cannot be searched. So the sequence in question cannot be > revealed, which makes it somewhat difficult to attack. Are you using "unbreakable" in two different senses, here? OTP can be provably unbreakable, because there is as much key as message. No cipher is unbreakable when you have more message than key. > >If you don't know anything about the RNG, then there's no such thing as a > >known-plaintext attack. > > Allow me to teach what a known-plaintext attack is: > > Known-plaintext is nothing more than the situation of the opponent > having one or more ciphertext blocks with the associated plaintext > blocks. It is quite possible to have that situation without knowing > anything of the RNG. A known-plaintext attack is any attack which > capitalizes on this information situation (as opposed, say, to > ciphertext-only). Yes, I know what known plaintext is, and the condescension is unnecessary. I said there were no known plaintext _attacks_ if you know nothing about the RNG. If you know nothing about the RNG, then it could be perfect (i.e., OTP perfect), no matter what bits you have in your known plaintext/ciphertext pair. A perfect RNG is not attackable (i.e., no way to do the _capitalize_ part of your paragraph above), because nothing you know about it can be used to make predictions about its future behaviour. Therefore, a known-plaintext attack against Dynamic Transposition or an XOR stream cipher is impossible when you know nothing about the RNG, because you cannot gather any information that would allow you to distinguish the RNG from a perfect one. > In an additive stream > cipher or conventional OTP, known-plaintext reveals the enciphering > sequence immediately and completely. It reveals part of the sequence, and that part is useful only if you can attack the generator with it. Dynamic Transposition is the same, but the part of the sequence revealed doesn't align nicely on bit boundaries -- you provide more obfuscation, but not the provable security you claim. > Dynamic Transposition is unusual in that knowing the plaintext and the > associated ciphertext does not reveal the enciphering permutation. > The reason for this is that many different bit-permutations will > produce the bit-for-bit exact same transformation between plaintext > and ciphertext. Therefore, having known plaintext does not reveal the > enciphering permutation, and thus cannot be exploited to begin to > expose the RNG sequence. It doesn't reveal the entire enciphering permutation, but it _does_ reveal an amount of information about the sequence that is roughly equivalent to the block size.
> >With 4096-bit blocks, one block of known plaintext gives you over 4000 bits > >of information about the state of the generator > > Not true, as far as I know. Certainly, as a completely unsupported > assertion, it is not believable on faith alone. I left that out, because it's easy. As you said, the generator state is big enough to produce any possible permutation, i.e., it's at least 43000 bits big; 4096! is about 2^43000. There are 2^39000 permutations that will produce the same output (that's (2048!)^2). If output permutations are evenly distributed among possible generator states, that means that only one out of any 2^4000 (2^43000/2^39000) possible generator states could have produced any given output, and so a single known plaintext provides 4000 bits about the state of the generator. > Known plaintext simply does not identify the correct permutation > produced by the confusion sequence. It certainly does not identify > the sequence which produced that permutation. It rules out all but one of every 2^4000 possible sequences. > Each one of every possible permutation is equally probable. This > exposes no information at all. Each possible permutation is equally probable only if your generator is unknown. It's easy for me to devise generators that will produce the same permutation for every block. I can also make generators that produce related permutations for every block. Clearly, Dynamic Transposition would not be secure with generators of these types. If Dynamic Transposition is to be secure, then, its security must rely in some way on the security of the generator. What generator properties are required before this cipher is safe? How do these requirements differ from the requirements for safe XOR ciphers? > >It's probably a good idea for > >amplifying the security of a stream cipher, > > No. Dynamic Transposition is a block cipher. It's a block cipher that relies on a secure keyed RNG. I would call it a block-oriented combiner for stream ciphers. > I suspect it is provably secure, and furthermore does not depend upon > unproven mathematical assumptions. > > Dynamic Transposition gives us a way to achieve believable practical > security beyond that available from a classic OTP, while using keys of > reasonable size. Classic OTP is provably secure. Dynamic Transposition is provably secure when you use an OTP key as your RNG stream. It's not a cipher, though, until you specify an RNG. When you specify an RNG that uses keys of "reasonable size", you will not be able to prove DT secure without taking the security of the RNG as axiomatic. In fact, once you specify a bounded key size K, then it's provably breakable by brute force in randomized O(2^K) with O(K) of known plaintext. That works for all ciphers, of course, and that's why there are no ciphers with bounded key sizes that are provably secure in the same way that OTP is. There is just no getting around the fact that a cipher is breakable in principle as soon as you have more message than key material. Now, even though there's nothing "provably secure" about anything you've said, I do believe that DT is a useful combining mode. It's likely that using DT as a combiner for any standard stream cipher RNG would go a long way towards protecting the resulting cipher from attacks that are found against the XOR version.
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 21 Jan 2001 01:13:32 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A6A37BE.94BA860@earthlink.net> References: <2Aoa6.127512$f36.5314097@news20.bellglobal.com> Newsgroups: sci.crypt Lines: 161 Matt Timmermans wrote: [snip] > > In an additive stream cipher or conventional OTP, known-plaintext > > reveals the enciphering sequence immediately and completely. > > It reveals part of the sequence, and that part is useful only if you > can attack the generator with it. Dynamic substitution is the same, > but the part of the sequence revealed doesn't align nicely on bit > boundaries -- you provide more obfuscation, but not the provable > security you claim. You are forgetting about the effects of double transposition: If I select a permutation p1, from among one of N! possibilities, and another permutation, p2, also from one of N! possibilities, and then take the composition p3 = (p1 o p2), then the resulting permutation is also one of the N! possibilities. Additional transpositions don't change any of the properties of the ciphertext. What they do change, is this: Suppose I have found a way to identify from a known plaintext *precisely* what permutation (p3) was applied to it to get the ciphertext. Now suppose I also have a way to go from a single permutation to the generator which, umm, generated it. There is a large difficulty now: To get the generator output, I need p1 and p2. If their selection was unbiased, then there are N! different combinations of (p1, p2) which compose to p3. > > Dynamic Transposition is unusual in that knowing the plaintext and > > the associated ciphertext does not reveal the enciphering > > permutation. > > The reason for this is that many different bit-permutations will > > produce the bit-for-bit exact same transformation between plaintext > > and ciphertext. Therefore, having known plaintext does not reveal > > the enciphering permutation, and thus cannot be exploited to begin > > to expose the RNG sequence. > > It doesn't reveal the entire enciphering permutation, but it _does_ > reveal an amount of information about the sequence that is roughly > equivalent to the block size. But this information is not exploitable. > > >With 4096-bit blocks, one block of known plaintext gives you over > > >4000 bits of information about the state of the generator > > > > Not true, as far as I know. Certainly, as a completely unsupported > > assertion, it is not believable on faith alone. > > I left that out, because it's easy. > As you said, the generator state is big enough to produce any possible > permutation, i.e., it's at least 43000 bits big, 4096! is about > 2^43000. > > There are 2^39000 permutations that will produce the same output > (that's (2048!)^2). If output permutations are evenly distributed > among possible generator states, that means that only one out of any > 2^4000 (2^43000/2^39000) possible generator states could have produced > any given output, and so a single known plaintext provides 4000 bits > about the state of the generator. Consider: Although there are 2^39000 permutations which produce a given output, the permutation actually used was the composition of two permutations. Using your reasoning, it seems we have (2^43000) / ((2^39000)^2) or 1/2^35000 of a bit of information from a single known plaintext. > > Known plaintext simply does not identify the correct permutation > > produced by the confusion sequence.
It certainly does not identify > > the sequence which produced that permutation. > > It rules out all but one of every 2^4000 possible sequences. Only if you don't do the double transposition. Knowing the correct (composed) permutation doesn't tell you about which two composed it. > > Each one of every possible permutation is equally probable. This > > exposes no information at all. > > Each possible permutation is equally probable only if your generator > is unknown. It's easy for me to devise generators that will produce > the same permutation for every block. I can also make generators that > produce related permutations for every block. Clearly, Dynamic > Substitution would not be secure with generators of these types. True, but those are contrived examples. > If > dynamic substitution is to be secure, then its security must rely in > some way on the security of the generator. Dynamic Substitution relies on the PRNG having a large state, a large period, and giving statistically good output. However, since it's close to impossible for the attacker to see the PRNG output, it doesn't need to be secure against those attacks which require that. In other words, it need not be a CSPRNG. > What generator properties are required before this cipher is safe? > How do these requirements differ from the requirements for safe XOR > ciphers? CSPRNGs need the following things to be true: 1) They need a large state 2) They need a long period 3) They need to be unpredictable The dynamic substitution cipher similarly needs: 1) a large state 2) a long period 3) good statistical properties However, with a CSPRNG, (3) means the following: a) If the attacker knows output, he cannot learn the PRNG state. b) If the attacker knows output, he cannot predict future output. c) If the attacker knows output, he cannot guess prior output. For (b) and (c) without (a), we need good statistical properties. The dynamic substitution cipher DOES NOT need (a), (b), or (c), because the attacker cannot get the generator output. [snip] > > I suspect it is provably secure, and furthermore does not depend > > upon unproven mathematical assumptions. > > > > Dynamic Transposition gives us a way to achieve believable practical > > security beyond that available from a classic OTP, while using keys > > of reasonable size. > > Classic OTP is provably secure. Dynamic Transposition is provably > secure when you use an OTP key as your RNG stream. It's not a cipher, > though, until you specify an RNG. When you specify an RNG that uses > keys of "reasonable size", you will not be able to prove DT secure > without taking the security of the RNG as axiomatic. > > In fact, once you specify a bounded key size K, then it's provably > breakable by brute force in randomized O(2^K) with O(K) of known > plaintext. That works for all ciphers, of course, and that's why > there are no ciphers with bounded key sizes that are provably secure > in the same way that OTP is. Although it is true that any keyed cipher needs O(2^K) work to break, it *might* be possible to prove that O(max(2^K, f(N))) work is needed, where N is the blocksize. If this can be done, we can select N such that Theta(f(N)) = 2^K. This in turn would mean that the amount of work needed is Omega(2^K), or that it cannot be broken in less than 2^K work. > There is just no getting around the fact that a cipher is breakable in > principle as soon as you have more message than key material.
> > Now, even though there's nothing "provably secure" about anything > you've said, I do believe that DT is a useful combining mode. It's > likely that using DT as a combiner for any standard stream cipher RNG > would go a long way towards protecting the resulting cipher from > attacks that are found against the XOR version. -- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
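Goldberg's composition point can be checked exhaustively at toy scale. In the Python sketch below (N = 4 is purely illustrative), every target permutation p3 is produced by exactly N! of the (N!)^2 possible (p1, p2) pairs, so even a fully recovered p3 leaves N! equally plausible generator outputs:

    from itertools import permutations

    N = 4
    perms = list(permutations(range(N)))    # all N! = 24 permutations

    def compose(p1, p2):                    # apply p2 first, then p1
        return tuple(p1[p2[i]] for i in range(N))

    p3 = perms[7]                           # an arbitrary composed target
    pairs = sum(1 for p1 in perms for p2 in perms if compose(p1, p2) == p3)
    print(pairs, len(perms) ** 2)           # 24 576: N! pairs per target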
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 21 Jan 2001 05:33:25 GMT From: "Matt Timmermans" <matt@timmermans.nospam-remove.org> Message-ID: <Fwua6.113862$n%.3804073@news20.bellglobal.com> References: <3A6A37BE.94BA860@earthlink.net> Newsgroups: sci.crypt Lines: 47 "Benjamin Goldberg" <goldbb2@earthlink.net> wrote in message news:3A6A37BE.94BA860@earthlink.net... > Matt Timmermans wrote: > > You are forgetting about the effects of double transposition: Yes, but it doesn't change the argument -- it just throws more generator bits away. log(N!^2/(N!((N/2)!)^2)) = log(N!/((N/2)!)^2) = how many bits of info about the generator state I get from a known plaintext. > > It doesn't reveal the entire enciphering permutation, but it _does_ > > reveal an amount of information about the sequence that is roughly > > equivalent to the block size. > > But this information is not exploitable. That's the question, isn't it? How do you _know_ it's not exploitable? > > What generator properties are required before this cipher is safe? > > How do these requirements differ from the requirements for safe XOR > > ciphers? > > The dynamic substitution cipher similarly needs: > 1) a large state > 2) a long period > 3) good statistical properties When you define "good statistical properties" in some rigorous way, are you sure that I couldn't contrive insecure generators for Dynamic Transposition that meet those criteria? Again, it's not a cipher until you specify the RNG. > Although it is true that any keyed cipher needs O(2^K) work to break, it > *might* be possible to prove that O(max(2^K, f(N))) work is needed, > where N is the blocksize. If this can be done, we can select N such > that Theta(f(N)) = 2^K. This in turn would mean that the amount of work > needed is Omega(2^K), or that it cannot be broken in less than 2^K work. Maybe if Theta(N) = 2^K. Under the reasonable assumption that N is at most polynomial in K, though, it's easy to show that breaking any such cipher is in NP. Proving exponential lower bounds would therefore prove that P!=NP. This is a fine example of an "unproven assumption" that is required for provable security of the cipher.
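With the squared term restored, Matt's formula works out to essentially the block size N at every scale; a quick check via Stirling (Python, with lgamma standing in for the logs of huge factorials):

    from math import lgamma, log

    def log2_fact(m):                       # log2(m!) without forming m!
        return lgamma(m + 1) / log(2)

    for N in (64, 256, 1024, 4096):
        info = log2_fact(N) - 2 * log2_fact(N // 2)  # log2(N!/((N/2)!)^2)
        print(N, round(info))               # a few bits under N, every time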
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 21 Jan 2001 14:19:29 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6aebea.533544@news.powersurfr.com> References: <Fwua6.113862$n%.3804073@news20.bellglobal.com> Newsgroups: sci.crypt Lines: 26 On Sun, 21 Jan 2001 05:33:25 GMT, "Matt Timmermans" <matt@timmermans.nospam-remove.org> wrote, in part: >That's the question, isn't it? How do you _know_ it's not exploitable? Of course you don't. Of course, had the claim simply been that exploitation is likely to be more difficult than in the case where, say, the output of two PRNGs is successively XORed to plaintext, I'd certainly be in full agreement. Not only is some information irrevocably lost with known plaintext in a way that XOR does not achieve, but the structure of permutations as opposed to bit vectors under XOR is more complicated. The main advantage of Dynamic Transposition is that it seems to achieve, over large blocks, properties which a substitution stream cipher can practically achieve only over, say, single bytes. While this isn't really true, as one could use conventional block ciphers rather than one-byte-wide S-boxes between stream cipher layers, Dynamic Transposition achieves this level of complexity with far fewer steps, and as an automatic consequence of the basic cipher design (rather than as an additional precautionary step that could be omitted). John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 21 Jan 2001 22:43:08 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6b65de.5747763@news.io.com> References: <2Aoa6.127512$f36.5314097@news20.bellglobal.com> Newsgroups: sci.crypt Lines: 145 I am trying hard to keep up, but that is very difficult if I am to take time to consider what I am saying. /tfr On Sat, 20 Jan 2001 22:47:26 GMT, in <2Aoa6.127512$f36.5314097@news20.bellglobal.com>, in sci.crypt "Matt Timmermans" <matt@timmermans.nospam-remove.org> wrote: >"Terry Ritter" <ritter@io.com> wrote in message >news:3a692576.5916670@news.io.com... >> You need to be more explicit. In Dynamic Transposition, each block is >> ciphered independently under a running keystream starting from a >> random message key. > >Each block is ciphered independently only if the RNG sequence for multiple >blocks is independent. Otherwise, there will be correlations between >blocks in the permutation chosen. Yes, there will be correlations between *the* *permutations* of successive blocks. But the permutation used is not known; it is hidden by the structure of the bit-balanced data. Many different permutations will take the exact same plaintext to the exact same ciphertext. The particular permutation involved is thus hidden, and so the sequence of permutations is also hidden. >> Dynamic Transposition is about building an unbreakable cipher >> *without* needing an unbreakable RNG. This is done by hiding the >> breakable sequence behind multiple levels of haystack, each so massive >> that they cannot be searched. So the sequence in question cannot be >> revealed, which makes it somewhat difficult to attack. > >Are you using "unbreakable" in two different senses, here? OTP can be >provably >unbreakable, because there is as much key as message. No cipher is >unbreakable when you have more message than key. Yes, I probably am using "unbreakable" in two senses, now that I think of it. I need to meditate on that. On the other hand, in practice, the OTP is not -- and generally cannot be -- proven secure, despite endless old wives' tales to the contrary. The fact that this is accepted can only mean that "proof" has somewhat less meaning in cryptography than one might expect. No OTP can be secure if the keying sequence is predictable. Here, "predictable" does not refer to whether or not *we* can predict the sequence, or whether someone we know can predict the sequence, but rather whether the opponent can predict the sequence. We cannot know what the opponents can do. [ Let me remind everyone that we are talking about *proven*, or *guaranteed* unbreakability. In practice, it is quite likely that a well-engineered random sequence will be sufficiently unpredictable to be very secure. The issue is that we generally cannot *prove* that a sequence is unpredictable, or measure how unpredictable it is -- we have no such test, and there can be no such test. As a matter of fact, new tests keep being developed as additional types of predictability are found. ] So unless the OTP keying sequence is *proven* unpredictable in practice (something which is normally impossible), no practical OTP can be *proven* secure. Only the theoretical OTP can be proven secure, and that is only good for protecting theoretical data (or, perhaps better: "theoretically protecting data.") >> >If you don't know anything about the RNG, then there's no such thing as a >> >known-plaintext attack.
>> >> Allow me to teach what a known-plaintext attack is: >> >> Known-plaintext is nothing more than the situation of the opponent >> having one or more ciphertext blocks with the associated plaintext >> blocks. It is quite possible to have that situation without knowing >> anything of the RNG. A known-plaintext attack is any attack which >> capitalizes on this information situation (as opposed, say, to >> ciphertext-only). > >Yes, I know what known plaintext is, and the condescension is unnecessary. >I >said there were no known plaintext _attacks_ if you know nothing about the >RNG. If you know nothing about the RNG, then it could be perfect (i.e., OTP >perfect), no matter what bits you have in your known plaintext/ciphertext >pair. A perfect RNG is not attackable (i.e., no way to do the _capitalize_ >part of your paragraph above), because nothing you know about it can be used >to make predictions about its future behaviour. Therefore, a >known-plaintext attack against Dynamic substitution or an XOR stream cipher >is impossible when you know nothing about the RNG, because you cannot gather >any information that would allow you to distinguish the RNG from a perfect >one. OK, it becomes apparent that there is serious confusion here. There is a ciphering technology which I created and named "Dynamic Substitution." There is another, different, technology which I also created and named "Dynamic Transposition." Here we discuss Dynamic Transposition. Dynamic Substitution is the idea of enciphering data through a keyed Simple Substitution table, and then changing the contents of that table. When a character is enciphered through a table, that particular table transformation may be exposed. We can prevent that by exchanging the just-used table entry with some entry in the table (even itself), selected at pseudo-random. We thus get a state-based, dynamic combiner of data and RNG confusion, which is nonlinear and yet reversible. Dynamic Substitution is a stream cipher combiner. Dynamic Transposition is mainly the idea of bit-balancing a data block, and then bit-permuting that block from a confusion stream. It is thus also a combiner of data and RNG confusion, but it is definitely a block cipher combiner, and not a stream cipher combiner. The unexpected advantage of Dynamic Transposition is that a plethora of different permutations produce exactly the same ciphering result. This would seem to hide the exact permutation used, and thus also hide any attempt to define the shuffling sequence by moving back from a known permutation. We are discussing block ciphers based on the Dynamic Transposition combiner. It is true that in the "Revisited" article, some links showed details of Dynamic Substitution ciphers. Those links were offered for the description of example components which, frankly, should be considered well-known cryptographic technology. Those include: key hashing, a large-state RNG, nonlinear filtering ("jitterizing"), and permutation via Shuffle, all of which are needed for Dynamic Transposition. The bit-balancing process itself was described in the "Revisited" article. Dynamic Transposition is not a particular cipher; it is instead an entire class of ciphers which is not covered in the crypto texts. A cipher designer must choose what sort of RNG to use, how to key it, how to protect it, and how to shuffle. There are various options, the details of which are not particularly relevant to the Dynamic Transposition technology itself, which is the issue.
--- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
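Since the two technologies keep being conflated in the thread, a minimal sketch of the Dynamic Substitution combiner as just described may help. Everything here is illustrative: the byte-wide table is conventional, and Python's random.Random merely stands in for a serious keyed RNG.

    import random

    class DynSub:
        # Encipher each byte through a keyed table, then exchange the
        # just-used entry with a pseudo-randomly selected one (even itself).
        def __init__(self, key):
            self.rng = random.Random(key)   # stand-in for the keyed RNG
            self.fwd = list(range(256))
            self.rng.shuffle(self.fwd)      # keyed Simple Substitution table
            self.inv = {v: i for i, v in enumerate(self.fwd)}

        def _swap(self, i, j):              # keep both directions in step
            self.fwd[i], self.fwd[j] = self.fwd[j], self.fwd[i]
            self.inv[self.fwd[i]], self.inv[self.fwd[j]] = i, j

        def encipher(self, b):
            c = self.fwd[b]
            self._swap(b, self.rng.randrange(256))
            return c

        def decipher(self, c):
            b = self.inv[c]
            self._swap(b, self.rng.randrange(256))
            return b

    enc, dec = DynSub(42), DynSub(42)       # same key at both ends
    data = b"dynamic transposition"
    ct = bytes(enc.encipher(b) for b in data)
    assert bytes(dec.decipher(c) for c in ct) == data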
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 22 Jan 2001 00:02:04 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6b739d.6203320@news.powersurfr.com> References: <3a6b65de.5747763@news.io.com> Newsgroups: sci.crypt Lines: 86 On Sun, 21 Jan 2001 22:43:08 GMT, ritter@io.com (Terry Ritter) wrote, in part: >No OTP can be secure if the keying sequence is predictable. Here, >"predictable" does not refer to whether or not *we* can predict the >sequence, or whether someone we know can predict the sequence, but >rather whether the opponent can predict the sequence. We cannot know >what the opponents can do. >[ Let me remind everyone that we are talking about *proven*, or >*guaranteed* unbreakability. In practice, it is quite likely that a >well-engineered random sequence will be sufficiently unpredictable to >be very secure. The issue is that we generally cannot *prove* that a >sequence is unpredictable, or measure how unpredictable it is -- we >have no such test, and there can be no such test. As a matter of >fact, new tests keep being developed as additional types of >predictability are found. ] >So unless the OTP keying sequence is *proven* unpredictable in >practice (something which is normally impossible), no practical OTP >can be *proven* secure. >Only the theoretical OTP can be proven secure, and that is only good >for protecting theoretical data (or, perhaps better: "theoretically >protecting data.") I fear that I'm not the only person puzzled by this. In principle, what you are saying is sound: nothing physical is susceptible to mathematical proof. However, while we cannot know if our opponents have, say, an easy crack to Blum-Blum-Shub, to object that we cannot know if our opponents might have some way to predict things like *the flip of a coin* seems to be taking caution to a perverse level. >There is a ciphering technology which I created and named "Dynamic >Substitution." There is another, different, technology which I also >created and named "Dynamic Transposition." Here we discuss Dynamic >Transposition. Yes, the two are basically unrelated. Since you named them, though, you must take part of the responsibility for the resulting confusion. >The unexpected advantage of Dynamic Transposition is that a plethora >of different permutations produce exactly the same ciphering result. >This would seem to hide the exact permutation used, and thus also hide >any attempt to define the shuffling sequence by moving back from a >known permutation. But that's not an advantage that can't be obtained with substitution. Suppose we enciphered a message using DES, except that the subkeys are generated by some sort of stream cipher. Each 48-bit subkey could be replaced with any member (including itself) from a set of 2^16 subkeys that give the same result. Because there are more than two rounds, successive pairs of subkeys can produce the same result in many different ways, just like the two permutations in Dynamic Transposition. To me, therefore, Dynamic Transposition doesn't look _all_ that different from using four round DES with subkeys that are generated anew for every block. It might have the advantage of being faster. It might have the advantage of being more 'idiot proof', in that it couldn't be 'streamlined' so as to get rid of the advantage of multiple equivalent permutations. It might have the advantage that successive permutations are harder to unravel than successive XORs, or even additions alternating with XORs. 
And there is the theoretical interest of showing that, fundamentally, a transposition can be, inherently, just as secure as a substitution. But because it seems to be stuck with a bandwidth problem when taken 'straight', and because its advantages can mostly be matched within the substitution world, while I see it as an interesting and even potentially useful idea, I can't quite see it as a practical alternative to present methods of encipherment, merely as a potential supplement, if it can be somehow shoehorned to 'fit' the world of arbitrary binary sequences. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 22 Jan 2001 07:07:48 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6bdbeb.7409784@news.io.com> References: <3a6b739d.6203320@news.powersurfr.com> Newsgroups: sci.crypt Lines: 229 On Mon, 22 Jan 2001 00:02:04 GMT, in <3a6b739d.6203320@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Sun, 21 Jan 2001 22:43:08 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>No OTP can be secure if the keying sequence is predictable. Here, >>"predictable" does not refer to whether or not *we* can predict the >>sequence, or whether someone we know can predict the sequence, but >>rather whether the opponent can predict the sequence. We cannot know >>what the opponents can do. > >>[ Let me remind everyone that we are talking about *proven*, or >>*guaranteed* unbreakability. In practice, it is quite likely that a >>well-engineered random sequence will be sufficiently unpredictable to >>be very secure. The issue is that we generally cannot *prove* that a >>sequence is unpredictable, or measure how unpredictable it is -- we >>have no such test, and there can be no such test. As a matter of >>fact, new tests keep being developed as additional types of >>predictability are found. ] > >>So unless the OTP keying sequence is *proven* unpredictable in >>practice (something which is normally impossible), no practical OTP >>can be *proven* secure. > >>Only the theoretical OTP can be proven secure, and that is only good >>for protecting theoretical data (or, perhaps better: "theoretically >>protecting data.") > >I fear that I'm not the only person puzzled by this. No, I think you are the only one puzzled. >In principle, what you are saying is sound: nothing physical is >susceptible to mathematical proof. > >However, while we cannot know if our opponents have, say, an easy >crack to Blum-Blum-Shub, to object that we cannot know if our >opponents might have some way to predict things like *the flip of a >coin* seems to be taking caution to a perverse level. So, basically (as far as I can tell), you support my position that, in practice, no OTP can be proven secure. Well, great. But, as much as you would like to trivialize that insight, even here on sci.crypt it is common to think otherwise. In fact, the only reason it came up at this point was as a direct response to a statement -- which you conveniently omitted -- "OTP can be provably unbreakable, because there is as much key as message." If by "OTP" we mean a cipher intended to provide practical security and protect real data, the sentiment that such a system is proven secure is both common and false. The whole point of desiring a security proof which applies in practice is to provide what is not now available: confidence in the strength of a cipher when there are consequences for failure. Nobody cares about the strength of a cipher which cannot and does not protect data. Proofs which only apply in the abstract are "academic" in the worst possible sense. When abstract results are not plainly qualified as inapplicable to reality, they can only be considered deliberately deceptive. Somewhere there is a reference which continues to corrupt the minds of people coming into cryptography. It deludes them into believing the OTP is mathematically proven to be unbreakable in practice. I would love to find exactly what that reference is. Then maybe we could stop all this nonsense before it starts.
>>There is a ciphering technology which I created and named "Dynamic >>Substitution." There is another, different, technology which I also >>created and named "Dynamic Transposition." Here we discuss Dynamic >>Transposition. > >Yes, the two are basically unrelated. Since you named them, though, >you must take part of the responsibility for the resulting confusion. I invented these technologies, so I named them. The names are appropriate, since they make sense and emphasize the distinguishing characteristics. I take no responsibility for the misuse of the terms which are mine to define. The original articles were published in ink-on-paper and are available in some libraries. Approximations to the original articles are on my web pages, as are various detailed technical discussions of these technologies. Most of these terms are in my Glossary. There is not a whole lot more I can do. The real problem is that these fundamentally novel technologies have been uniformly ignored by academics and crypto text authors. In consequence, their readers have not been properly prepared to address these new ideas. Readers who are disappointed by this should contact the publisher of the disappointing reference. >>The unexpected advantage of Dynamic Transposition is that a plethora >>of different permutations produce exactly the same ciphering result. >>This would seem to hide the exact permutation used, and thus also hide >>any attempt to define the shuffling sequence by moving back from a >>known permutation. > >But that's not an advantage that can't be obtained with substitution. The advantage cannot be obtained by substitution. I have seen you say that, but I have absolutely no idea what you could possibly mean by it. >Suppose we enciphered a message using DES, except that the subkeys are >generated by some sort of stream cipher. Each 48-bit subkey could be >replaced with any member (including itself) from a set of 2^16 subkeys >that give the same result. How is it that these different keys give the same result? >Because there are more than two rounds, >successive pairs of subkeys can produce the same result in many >different ways, just like the two permutations in Dynamic >Transposition. There is only one permutation per block in Dynamic Transposition. I do recommend shuffling twice, only to prevent someone who knows the actual permutation from attacking the RNG sequence. But that idea is really getting in the way of comprehension, because that is not the main source of strength in the system. In the end, there is just some permutation. The main source of strength is that a vast number of fundamentally different permutations produce the exact same ciphertext block from the exact same plaintext block. Thus, known-plaintext does not expose the transformation. So it is at least very, very difficult, and perhaps impossible, to even know what the permutation was, since it only occurs once and is only used once. And without the permutation, we cannot even begin to think about finding the shuffling sequence which is what the double-shuffling should prevent. >To me, therefore, Dynamic Transposition doesn't look _all_ that >different from using four round DES with subkeys that are generated >anew for every block. Then you need to look closer. >It might have the advantage of being faster. > >It might have the advantage of being more 'idiot proof', in that it >couldn't be 'streamlined' so as to get rid of the advantage of >multiple equivalent permutations. 
> >It might have the advantage that successive permutations are harder to >unravel than successive XORs, or even additions alternating with XORs. "Might?" >And there is the theoretical interest of showing that, fundamentally, >a transposition can be, inherently, just as secure as a substitution. Dynamic Transposition is vastly more secure than a substitution. You will have to define what you mean by "substitution" though, since you appear to be describing DES as "substitution." Modern block ciphers do attempt to emulate large simple substitutions. They are given "large enough" keyspaces to prevent brute-force attacks on keys. But nobody should have any delusions about the extent to which they actually produce all N! possible cipherings. Their keys cover only an infinitesimal fraction of the potential substitution keyspace, and we have no indication that any of these designs could be expanded to do vastly better. As a consequence, each known-plaintext block massively reduces the number of keys which could have produced that transformation. Normally, only a few known-plaintext pairs are necessary to reduce the set of keys which could have produced those transformations to exactly one. That we do not currently know how to accumulate that information or use it, is not particularly comforting to me. In contrast, the Dynamic Transposition cipher which I have proposed fully covers a vast keyspace. Assuming we have sufficient state in the RNG (and I have presented large-state examples), every possible permutation can be key selected. So we don't have to guess whether the construction can be expanded; its only limitations are the keys. If and when 3,000-bit message keys become necessary, we can drop them in with no problem at all. In fact, if that is all it takes to achieve a more pleasant analytical result, we could do that now. But the real advantage here -- which I keep mentioning but apparently just goes: Swish! right over people's heads -- is that the particular transformation which occurs in Dynamic Transposition is not revealed by known-plaintext. With substitution, when we have known-plaintext, exactly one transformation has occurred, and it is completely known. When we have a Dynamic Transposition enciphered block, any one of a vast number of different permutations could have produced that block. So even if we have known-plaintext, the transformation is not revealed. The opponent does not know which permutation occurred. That is a massive difference. >But because it seems to be stuck with a bandwidth problem In my experience with actually running such a cipher, bit-balancing adds 25 percent to 33 percent to simple ASCII text. The 1/3 value was given both in the "Revisited" article, as well as the original Cryptologia article on my pages. And if the text is compressed first, there will be even less expansion. If you see even a 33 percent expansion as a show-stopping amount, a "bandwidth problem," I think you need to pause for re-calibration. >when taken >'straight', and because its advantages can mostly be matched within >the substitution world, Simply false. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
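The Dynamic Transposition mechanics being argued over are compact enough to sketch end to end. Everything below is illustrative: a toy 8-bit block assumed already bit-balanced, and Python's random.Random in place of a keyed large-state RNG; the double shuffle follows the recommendation in the thread.

    import random

    def shuffle_perm(n, rng):
        # Fisher-Yates shuffle driven by the keyed RNG: one permutation
        p = list(range(n))
        for i in range(n - 1, 0, -1):
            j = rng.randrange(i + 1)
            p[i], p[j] = p[j], p[i]
        return p

    def encipher_block(bits, rng):
        assert bits.count(1) == len(bits) // 2       # bit-balanced block
        for _ in range(2):                           # shuffle twice
            p = shuffle_perm(len(bits), rng)
            bits = [bits[p[i]] for i in range(len(bits))]
        return bits

    def decipher_block(bits, rng):
        n = len(bits)
        perms = [shuffle_perm(n, rng) for _ in range(2)]
        for p in reversed(perms):                    # invert, last first
            out = [0] * n
            for i in range(n):
                out[p[i]] = bits[i]
            bits = out
        return bits

    rng_e, rng_d = random.Random(9), random.Random(9)  # same keyed sequence
    block = [1, 0, 1, 1, 0, 0, 1, 0]
    assert decipher_block(encipher_block(block, rng_e), rng_d) == block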
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 22 Jan 2001 13:05:04 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6c2bff.1045135@news.powersurfr.com> References: <3a6bdbeb.7409784@news.io.com> Newsgroups: sci.crypt Lines: 120 On Mon, 22 Jan 2001 07:07:48 GMT, ritter@io.com (Terry Ritter) wrote, in part: >On Mon, 22 Jan 2001 00:02:04 GMT, in ><3a6b739d.6203320@news.powersurfr.com>, in sci.crypt >jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >Somewhere there is a reference which continues to corrupt the minds of >people coming into cryptography. It deludes them into believing the >OTP is mathematically proven to be unbreakable in practice. I would >love to find exactly what that reference is. Then maybe we could stop >all this nonsense before it starts. The Codebreakers, David Kahn. Chapter 13: "Secrecy for Sale". >>>The unexpected advantage of Dynamic Transposition is that a plethora >>>of different permutations produce exactly the same ciphering result. >>>This would seem to hide the exact permutation used, and thus also hide >>>any attempt to define the shuffling sequence by moving back from a >>>known permutation. >>But that's not an advantage that can't be obtained with substitution. >The advantage cannot be obtained by substitution. >I have seen you say that, but I have absolutely no idea what you could >possibly mean by it. >>Suppose we enciphered a message using DES, except that the subkeys are >>generated by some sort of stream cipher. Each 48-bit subkey could be >>replaced with any member (including itself) from a set of 2^16 subkeys >>that give the same result. >How is it that these different keys give the same result? The same result _for a specific given input block_, just as for a specific input block in Dynamic Transposition, two bits can both be 1s. Essentially, to obtain a given f-function output from a given f-function input in DES, it is sufficient to control the middle four bits of every six in the 48-bit subkey; the other two bits can have any value. 4 bits -> any one of 4 S-boxes -> XOR with an arbitrary value -> any 4 bits you like. >There is only one permutation per block in Dynamic Transposition. I >do recommend shuffling twice, only to prevent someone who knows the >actual permutation from attacking the RNG sequence. But that idea is >really getting in the way of comprehension, because that is not the >main source of strength in the system. In the end, there is just some >permutation. Four rounds of DES with subkeys that change per block are exactly analogous. >>It might have the advantage that successive permutations are harder to >>unravel than successive XORs, or even additions alternating with XORs. >"Might?" I could have said here 'It is likely to have...', but the point is: a) we don't know how well The Opponent understands permutation groups, and b) some analysis of the mathematical properties involved is needed to say much more. >>And there is the theoretical interest of showing that, fundamentally, >>a transposition can be, inherently, just as secure as a substitution. >Dynamic Transposition is vastly more secure than a substitution. >You will have to define what you mean by "substitution" though, since >you appear to be describing DES as "substitution." >Modern block ciphers do attempt to emulate large simple substitutions. >They are given "large enough" keyspaces to prevent brute-force attacks >on keys. But nobody should have any delusions about the extent to >which they actually produce all N! 
possible cipherings. Dynamic transposition may produce all n! possible permutations of the bits involved; it DOES NOT produce all ( n! / ((n/2)!(n/2)!) )! mappings of the set of balanced n-bit strings onto itself any more than DES produces all (2^64)! possible block substitutions. This is a mistake that, frankly, I'm surprised at you for making. But we all slip up, and it's looking like this false assumption is at the root of some of your claims for Dynamic Transposition as against substitution. And with substitution, unlike Dynamic Transposition, instead of being stuck with one set of n! substitutions, one can use steps of different kinds so that instead of just having, say, all 2^n possible mappings obtained by XORing an n-bit block with an n-bit key, one can explore the space of (2^n)! permutations more deeply - depending on how much key we use, and how complicated a structure we give the cipher. >>But because it seems to be stuck with a bandwidth problem >In my experience with actually running such a cipher, bit-balancing >adds 25 percent to 33 percent to simple ASCII text. The 1/3 value was >given both in the "Revisited" article, as well as the original >Cryptologia article on my pages. And if the text is compressed first, >there will be even less expansion. If you see even a 33 percent >expansion as a show-stopping amount, a "bandwidth problem," I think >you need to pause for re-calibration. You would be right, unless >>when taken >>'straight', and because its advantages can mostly be matched within >>the substitution world, >Simply false. I happen to be right _here_. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
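Savard's scale comparison can be made concrete with small numbers (Python sketch; n = 16 keeps the binomial manageable, and lgamma again stands in for the logs of huge factorials):

    from math import comb, lgamma, log

    def log2_fact(m):                   # log2(m!) without forming m!
        return lgamma(m + 1) / log(2)

    n = 16
    balanced = comb(n, n // 2)          # 12870 balanced 16-bit blocks
    print(round(log2_fact(n)))          # ~44 bits: selects among n! permutations
    print(round(log2_fact(balanced)))   # ~157000 bits: all bijections
                                        #   of the balanced set onto itself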
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 04:24:27 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6d0765.2630739@news.io.com> References: <3a6c2bff.1045135@news.powersurfr.com> Newsgroups: sci.crypt Lines: 244 On Mon, 22 Jan 2001 13:05:04 GMT, in <3a6c2bff.1045135@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Mon, 22 Jan 2001 07:07:48 GMT, ritter@io.com (Terry Ritter) wrote, >in part: >>On Mon, 22 Jan 2001 00:02:04 GMT, in >><3a6b739d.6203320@news.powersurfr.com>, in sci.crypt >>jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >>Somewhere there is a reference which continues to corrupt the minds of >>people coming into cryptography. It deludes them into believing the >>OTP is mathematically proven to be unbreakable in practice. I would >>love to find exactly what that reference is. Then maybe we could stop >>all this nonsense before it starts. > >The Codebreakers, David Kahn. Chapter 13: "Secrecy for Sale". Thanks. I suppose that may be it. Ironically, though, having just re-read that section, I find it hard to disagree with any particular statement in it. Essentially the problem is one of ambiguity: the idea that Kahn is talking about a real, practical OTP, when in fact he can only be talking about the theoretical OTP. For example, he starts out saying: "[The one-time system] provides a new and unpredictable key character for each plaintext character . . . ." But since abstract predictability is something which cannot be measured, it can only be asserted, which takes us back to theory, not practice. He says: "the key in a one-time system neither repeats, nor recurs, nor makes sense, nor erects internal frameworks." Again, the only way to "know" that is to assume it, which is theoretical, not practical. And: "a random key has no underlying system -- if it did, it would not be random," which must be theoretical, because in practice it would be circular and also useless. All in all, it would seem to be an interesting lesson about the ability to write truth which is easily taken out of its limited correct context. >>>>The unexpected advantage of Dynamic Transposition is that a plethora >>>>of different permutations produce exactly the same ciphering result. >>>>This would seem to hide the exact permutation used, and thus also hide >>>>any attempt to define the shuffling sequence by moving back from a >>>>known permutation. > >>>But that's not an advantage that can't be obtained with substitution. > >>The advantage cannot be obtained by substitution. > >>I have seen you say that, but I have absolutely no idea what you could >>possibly mean by it. > >>>Suppose we enciphered a message using DES, except that the subkeys are >>>generated by some sort of stream cipher. Each 48-bit subkey could be >>>replaced with any member (including itself) from a set of 2^16 subkeys >>>that give the same result. > >>How is it that these different keys give the same result? > >The same result _for a specific given input block_, just as for a >specific input block in Dynamic Transposition, two bits can both be >1s. > >Essentially, to obtain a given f-function output from a given >f-function input in DES, it is sufficient to control the middle four >bits of every six in the 48-bit subkey; the other two bits can have >any value. 4 bits -> any one of 4 S-boxes -> XOR with an arbitrary >value -> any 4 bits you like. I certainly agree that one can obtain any particular 64-bit ciphertext for any particular 64-bit plaintext.
But that is not an arbitrary large substitution. If we had an arbitrary keyspace for a 64-bit block cipher, we could start with plaintext value 00.......00, and then choose 1 from among 2**64 different values. Then, for plaintext 00.......01, we could choose 1 from among 2**64 - 1 values, and so on. The total keyspace is (2**64)! Nor is that the only issue: A conventional block cipher is not re-keyed on a block-by-block basis; Dynamic Transposition is. In a conventional block cipher, known-plaintext completely reveals one particular transformation (from among 2**64); Dynamic Transposition does not. >>There is only one permutation per block in Dynamic Transposition. I >>do recommend shuffling twice, only to prevent someone who knows the >>actual permutation from attacking the RNG sequence. But that idea is >>really getting in the way of comprehension, because that is not the >>main source of strength in the system. In the end, there is just some >>permutation. > >Four rounds of DES with subkeys that change per block are exactly >analogous. Well, the mapping is 2 to the 64th, so -- assuming that each key bit is effective and 2**64 block values can be selected with those bits -- then 64 bits per block would suffice. However, for any one known-plaintext, a particular transformation would be revealed, on the way to attacking the keying sequence. But that does not happen in Dynamic Transposition. >>>It might have the advantage that successive permutations are harder to >>>unravel than successive XORs, or even additions alternating with XORs. > >>"Might?" > >I could have said here 'It is likely to have...', but the point is: a) >we don't know how well The Opponent understands permutation groups, >and b) some analysis of the mathematical properties involved is needed >to say much more. First, I call the set of permutations which produce the same result "clumps," specifically to distinguish them from mathematical "groups." These clumps are not simply key-dependent, they are also data-dependent. Next, each different (balanced) pattern of bits in the plaintext block has a different clump structure, but that "structure" is only apparent when we know the plaintext, or if we were to re-use the same permutation. But Dynamic Transposition does not re-use permutations. >>>And there is the theoretical interest of showing that, fundamentally, >>>a transposition can be, inherently, just as secure as a substitution. > >>Dynamic Transposition is vastly more secure than a substitution. > >>You will have to define what you mean by "substitution" though, since >>you appear to be describing DES as "substitution." > >>Modern block ciphers do attempt to emulate large simple substitutions. >>They are given "large enough" keyspaces to prevent brute-force attacks >>on keys. But nobody should have any delusions about the extent to >>which they actually produce all N! possible cipherings. > >Dynamic transposition may produce all n! possible permutations of the >bits involved; it DOES NOT produce all >( n! / ((n/2)!(n/2)!) )! >mappings of the set of balanced n-bit strings onto itself any more >than DES produces all (2^64)! possible block substitutions. Well, let's take this apart: A: n! is the number of different permutations in an n-element system. B: (n/2)!(n/2)! is the number of different permutations in a balanced n-bit system which have the same effect (that take a plaintext block to the exact same ciphertext block).
B / A is the probability that a randomly chosen permutation produces the same effect. I frankly have no idea what (A / B)! could possibly be intended to represent. Perhaps the equation should have been (A/B)**S, where S is the number of blocks, the length of a sequence of permutations. >This is a mistake that, frankly, I'm surprised at you for making. But >we all slip up, and it's looking like this false assumption is at the >root of some of your claims for Dynamic Transposition as against >substitution. Every possible permutation can be constructed with approximately equal probability provided only that we have enough state in the RNG, and in practice we can easily build efficient RNG's with that property. What we cannot do is construct any possible sequence of permutations. But this "sequence of permutations" is just not available for analysis, quite unlike the sequence of transformations in a conventional block cipher. It is different, even if we were to key the conventional block cipher on a block-by-block basis. In Dynamic Transposition, each permutation itself is not exposed and, since it is used only once, cannot be traversed for examination. So if we leap to the conclusion that a particular permutation produced the transformation, our chances of that leap being right are: C: 1 / ((n/2)! (n/2)!) And if we want to know an explicit sequence of permutations of length S, our probability of doing that is C**S. When something is difficult to predict as a single unit, it is exponentially difficult to predict as a sequence. Once again I note that "the" permutation clump -- of size (n/2)!(n/2)! -- changes with each block. Additional blocks thus do not refine an estimate of a previous clump, but instead step into a new clump with a new structure. >And with substitution, unlike Dynamic Transposition, instead of being >stuck with one set of n! substitutions, one can use steps of different >kinds so that instead of just having, say, all 2^n possible mappings >obtained by XORing an n-bit block with an n-bit key, one can explore >the space of (2^n)! permutations more deeply - depending on how much >key we use, and how complicated a structure we give the cipher. > >>>But because it seems to be stuck with a bandwidth problem > >>In my experience with actually running such a cipher, bit-balancing >>adds 25 percent to 33 percent to simple ASCII text. The 1/3 value was >>given both in the "Revisited" article, as well as the original >>Cryptologia article on my pages. And if the text is compressed first, >>there will be even less expansion. If you see even a 33 percent >>expansion as a show-stopping amount, a "bandwidth problem," I think >>you need to pause for re-calibration. > >You would be right, unless > >>>when taken >>>'straight', and because its advantages can mostly be matched within >>>the substitution world, > >>Simply false. > >I happen to be right _here_. I'm sorry, but even if you were right "_here_," you would still not be right about the bandwidth "problem," and you have been trumpeting that for a sequence of responses. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
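The clump size Ritter uses above checks out exhaustively at toy scale (Python; the 6-bit block and the two balanced values are illustrative):

    from itertools import permutations
    from math import factorial

    n = 6
    pt = (1, 0, 1, 0, 0, 1)                 # balanced plaintext block
    ct = (0, 1, 1, 0, 1, 0)                 # balanced ciphertext block
    clump = sum(1 for p in permutations(range(n))
                if tuple(pt[p[i]] for i in range(n)) == ct)
    print(clump, factorial(n // 2) ** 2)    # 36 36: clump size (n/2)!(n/2)!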
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 22 Jan 2001 21:26:33 -0800 From: "John A. Malley" <102667.2235@compuserve.com> Message-ID: <3A6D1609.90C2BD34@compuserve.com> References: <3a6d0765.2630739@news.io.com> Newsgroups: sci.crypt Lines: 43 Terry Ritter wrote: [snip] > > Every possible permutation can be constructed with approximately equal > probability provided only that we have enough state in the RNG, and in > practice we can easily build efficient RNG's with that property. > > What we cannot do is construct any possible sequence of permutations. This may be a good place to continue the cryptanalysis of the strength of the DT cipher. A PRNG with N! states to make every permutation of the bits in an N bit block can only generate some of the possible sequences of permutations. There are (N!)! possible sequences of permutations. AFAIK it's safe to say the PRNG generates N! sequences (assuming the set of seed values is equal to the set of possible outputs of the PRNG, both sets are of order N!.) Only N!/ (N!)! of the sequences can ever be seen. There *may* be exploitable relationships between successive permutations due to this, but I can't point to any yet, just a hunch. Permutations with certain characteristics may always follow others with other characteristics - or perhaps permutations with certain characteristics can never follow permutations with other characteristics. Look for dependencies between successive permutations that hold without knowing the exact permutations involved. How about a permutation's cycle decomposition - its type? (If permutation pi() has a_i cycles of length i , where 1 <= i <= N , then the type of permutation pi() is the partition [1^a_1, 2^a_2, 3^a_3, ... N^a_N] of N.) I'd try to find any relationships between the cycle decompositions/types of the successive permutations produced by the PRNG output fed through the shuffling algorithm. I'd look for a way to predict the cycle decomposition/type of the next permutation generated given the current permutation's cycle decomposition/type - and I'd look for any "forbidden" transitions - is it impossible (due to the nature of the shuffling algorithm and the PRNG) for permutations of certain cyclic decompositions/type to be followed by permutations of some other cyclic decomposition/type. John A. Malley 102667.2235@compuserve.com
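For anyone who wants to try the analysis Malley proposes, computing a permutation's cycle type is straightforward (a minimal Python sketch):

    def cycle_type(p):
        # Partition of len(p) given by the lengths of p's disjoint cycles
        seen, lengths = set(), []
        for start in range(len(p)):
            if start in seen:
                continue
            i, length = start, 0
            while i not in seen:            # walk one complete cycle
                seen.add(i)
                i, length = p[i], length + 1
            lengths.append(length)
        return sorted(lengths)

    print(cycle_type([1, 2, 0, 4, 3, 5]))   # [1, 2, 3]: a fixed point,
                                            #   a 2-cycle, and a 3-cycle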
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 08:30:01 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6d40d5.1847395@news.io.com> References: <3A6D1609.90C2BD34@compuserve.com> Newsgroups: sci.crypt Lines: 79 On Mon, 22 Jan 2001 21:26:33 -0800, in <3A6D1609.90C2BD34@compuserve.com>, in sci.crypt "John A. Malley" <102667.2235@compuserve.com> wrote: >Terry Ritter wrote: >[snip] >> >> Every possible permutation can be constructed with approximately equal >> probability provided only that we have enough state in the RNG, and in >> practice we can easily build efficient RNG's with that property. >> >> What we cannot do is construct any possible sequence of permutations. > >This may be a good place to continue the cryptanalysis of the strength >of the DT cipher. A PRNG with N! states to make every permutation of >the bits in an N bit block can only generate some of the possible >sequences of permutations. There are (N!)! possible sequences of >permutations. There are (N!)**S possible sequences of permutations, of sequence length S. >AFAIK it's safe to say the PRNG generates N! sequences >(assuming the set of seed values is equal to the set of possible outputs >of the PRNG, both sets are of order N!.) Only N!/ (N!)! of the sequences >can ever be seen. ?? >There *may* be exploitable relationships between successive permutations >due to this, but I can't point to any yet, just a hunch. Permutations >with certain characteristics may always follow others with other >characteristics - or perhaps permutations with certain characteristics >can never follow permutations with other characteristics. Look for >dependencies between successive permutations that hold without knowing >the exact permutations involved. As I see it, the issue is not so much that there are no exploitable relationships (although I certainly would not expect anything easy from a well-engineered RNG), as it is that one cannot sense those relationships from the ciphertext results. The transformation from plaintext to ciphertext is just one bit-permutation. But when we have bit-balanced plaintext, many different bit-permutations will produce the exact same ciphertext. Furthermore, the permutation is not re-used; we don't have the opportunity to inject data changes and see where they end up. We don't have any ability to know what any particular permutation actually is. >How about a permutation's cycle decomposition - its type? (If >permutation pi() has a_i cycles of length i , where 1 <= i <= N , then >the type of permutation pi() is the partition [1^a_1, 2^a_2, 3^a_3, ... >N^a_N] of N.) Yes, yes, yes, but you can't see the permutation itself, only the result, which is the ciphertext. No 1:1 relationship exists. Known-plaintext and ciphertext does not expose the permutation. >I'd try to find any relationships between the cycle >decompositions/types of the successive permutations produced by the PRNG >output fed through the shuffling algorithm. I'd look for a way to >predict the cycle decomposition/type of the next permutation generated >given the current permutation's cycle decomposition/type - and I'd look >for any "forbidden" transitions - is it impossible (due to the nature of >the shuffling algorithm and the PRNG) for permutations of certain cyclic >decompositions/type to be followed by permutations of some other cyclic >decomposition/type. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 08:29:16 -0800 From: "John A. Malley" <102667.2235@compuserve.com> Message-ID: <3A6DB15C.232B4A88@compuserve.com> References: <3a6d40d5.1847395@news.io.com> Newsgroups: sci.crypt Lines: 73 Terry Ritter wrote: > [snip] > >This may be a good place to continue the cryptanalysis of the strength > >of the DT cipher. A PRNG with N! states to make every permutation of > >the bits in an N bit block can only generate some of the possible > >sequences of permutations. There are (N!)! possible sequences of > >permutations. > > There are (N!)**S possible sequences of permutations, of sequence > length S. Please help - where did I go wrong in calculating the total number of possible sequences of the N! total possible permutations? Here's my reasoning - Given N bits there are N! different, unique ways to permute those bits - the N! unique permutations. They make a set P. I number the permutations in the set from 1 to N!. How many different ways can I sequence the members of the set of permutations? Or in other words, how many different ways can I write down (list) the elements of P? Let the number of elements in P be M, so M = N!. The number of unique listing sequences of the M elements is the number of permutations of the M elements of P, which is M!. Since M = N!, then M! = (N!)!. So that's how I derived the number of ways the individual elements of the set of permutations of an N bit block can be listed out as a sequence. > > >AFAIK it's safe to say the PRNG generates N! sequences > >(assuming the set of seed values is equal to the set of possible outputs > >of the PRNG, both sets are of order N!.) Only N!/ (N!)! of the sequences > >can ever be seen. > > ?? There are M! ways to list the M values from 1 - M. A PRNG outputs lists (sequences) of the values between 1-M. The PRNG starts from a seed value s and makes a list of the M values. Each list is different. The PRNG can only make as many unique lists of the M values as there are unique seeds s. Let the order of the set S of seed values be K. Then the PRNG can only make K out of M! listings (sequences) of the M values from 1 - M. So the PRNG only produces a fraction K / M! of the total possible sequences of the M values. Let the set S be the set of M output values of the PRNG. The order of the set S is M. Then the PRNG only produces a fraction M / M! of the total possible sequences of the M values. So let M = N! (the number of possible permutations of N bits). The set of seed values S for this PRNG is the set of numbers from 1 through M! The PRNG generates a number from 1 through M! to choose the permutation to apply to the ith bit block. (This is the most abstract view of the PRNG's role in the DTC.) This PRNG can produce only M = N! possible output sequences of permutations. That's why I state the PRNG produces a fraction M! / (M!)! of the total possible listings of the M! permutations of the N bits. Why is this derivation wrong? John A. Malley 102667.2235@compuserve.com
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 20:13:12 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6de525.7305780@news.io.com> References: <3A6DB15C.232B4A88@compuserve.com> Newsgroups: sci.crypt Lines: 136 On Tue, 23 Jan 2001 08:29:16 -0800, in <3A6DB15C.232B4A88@compuserve.com>, in sci.crypt "John A. Malley" <102667.2235@compuserve.com> wrote: >Terry Ritter wrote: >> >[snip] > >> >This may be a good place to continue the cryptanalysis of the strength >> >of the DT cipher. A PRNG with N! states to make every permutation of >> >the bits in an N bit block can only generate some of the possible >> >sequences of permutations. There are (N!)! possible sequences of >> >permutations. >> >> There are (N!)**S possible sequences of permutations, of sequence >> length S. > >Please help - where did I go wrong in calculating the total number of >possible sequences of the N! total possible permutations? > >Here's my reasoning - > >Given N bits there are N! different, unique ways to permute those bits - >the N! unique permutations. They make a set P. > >I number the permutations in the set from 1 to N!. How many different >ways can I sequence the members of the set of permutations? Or in other >words, how many different ways can I write down (list) the elements of >P? Let the number of elements in P be M, so >M = N!. The number of unique listing sequences of the M elements is the >number of permutations of the M elements of P, which is M!. Since M = >N!, then M = (N!)!. > >So that's how I derived the number of ways the individual elements of >the set of permutations of an N bit block can be listed out as a >sequence. OK, I had no idea what you were doing. Of course, I still have no idea where you are going. Do you have any idea how big (N!)! is? Even 128! is 3.85620482e+215, and the factorial of that is some number which is about 2.75610295e+218 bits long. (From http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM#Factorials ). Surely, there is no reason to imagine that permutations must all occur before repeating. In fact, that would be a weakness. The design goal is to allow the very same permutation to occur on the next block, and then rely on the almost infinitesimal probability of any particular permutation occurring to be assured that it will almost never happen. The goal is to make the permutation selection for each and every block independent, with equal probabilities. We can see the selected permutation as a "value," in a sequence of values, exactly the same way we get random values from an RNG, or the way we think of sequences as some number of symbols, each one chosen from a set. It is a weakness for a random generator to produce a value which will not then re-occur until the generator recycles. >> >AFAIK it's safe to say the PRNG generates N! sequences >> >(assuming the set of seed values is equal to the set of possible outputs >> >of the PRNG, both sets are of order N!.) Only N!/ (N!)! of the sequences >> >can ever be seen. >> >> ?? > >There are M! ways to list the M values from 1 - M. These are called permutations. >A PRNG outputs lists >(sequences) of the values between 1-M. Some RNG's are like that. Don't do that. >The PRNG starts from a seed >value s and makes a list of the M values. Each list is different. The >PRNG can only make as many unique lists of the M values are there are >unique seeds s. Let the order of the set S of seed values be K. Then >the PRNG can only make K out of M! listings (sequences) of the M values >from 1 - M. 
So the PRNG only produces a fraction K / M! of the total >possible sequences of the M values. Internally, there is some concept of a huge cycle which is shuffled by an RNG -- the internal state -- but that concept is not necessarily the output value. Surely, when we have a huge internal state, we do not imagine that we must take the whole amount of that state as the RNG result. When we do not, any particular value can re-occur on the very next RNG step. The Additive RNG is discussed in Knuth II. And even though those are tiny, it should be clear that, in an Additive RNG, values can and do repeat. The intent is to make the probability of that immeasurably close to independent selection. >Let the set S be the set of M output values of the PRNG. The order of >the set M is M. Then the PRNG only produces a fraction M / M! of the >total possible sequences of the M values. > >So let M = N! (the number of possible permutations of N bits). The set >of seed values S for this PRNG is the set of numbers from 1 through M! >The PRNG generates a number from 1 through M! to choose the permutation >to apply to the ith bit block. (This is the most abstract view of the >PRNG's role in the DTC.) > >This PRNG can produce only M = N! possible output sequences of >permutations. That's why I state the PRNG produces a fraction M! / (M!)! >of the total possible listings of the M! permutations of the N bits. > >Why is this derivation wrong? Well, it is only wrong if it isn't a step toward your goal. But I don't see where you are going. It certainly is an upper limit!!! There is some truth to it, but the implication I get is that once a particular permutation occurs, it cannot re-occur until the RNG has cycled, which is false. The sequence of permutations will of course start to repeat after the RNG has cycled enough to produce the exact same internal state at the start point of the shuffling process (which may well take many, many full RNG cycles). But a single RNG cycle length is, of course, far beyond what could be traversed in practice. The problem appears to be based on a fundamental misunderstanding of how RNG's produce output values. Presumably there is a generalization from simple RNG's to strong ones. It either does not apply, or is trying to compute something to which I assign little significance. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
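A toy additive (lagged-Fibonacci) generator of the kind Knuth II discusses makes the point visible: output values repeat long before the generator's cycle ends. A sketch with hypothetical tiny parameters (recurrence x[i] = x[i-2] + x[i-5], from the primitive trinomial x^5 + x^2 + 1; nothing here is from Ritter's actual implementation):

    # Toy additive RNG: x[i] = (x[i-2] + x[i-5]) mod 2**8.
    state = [1, 2, 3, 4, 5]          # seed values, oldest first
    out = []
    for _ in range(300):
        nxt = (state[-2] + state[-5]) % 2**8
        state = state[1:] + [nxt]
        out.append(nxt)

    # 300 outputs but at most 256 distinct byte values: repeats are
    # guaranteed, although the internal state has not yet cycled.
    print(len(out), len(set(out)))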
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 23:57:25 -0800 From: "John A. Malley" <102667.2235@compuserve.com> Message-ID: <3A6E8AE5.A0E0CF23@compuserve.com> References: <3a6de525.7305780@news.io.com> Newsgroups: sci.crypt Lines: 124 Terry Ritter wrote: > [snip] > > Surely, there is no reason to imagine that permutations must all occur > before repeating. In fact, that would be a weakness. Yes, and this is where I was going in the examination of the strengths of the DTC : What are the effects on the "strength" of the DTC if the PRNG selecting the permutations (via a shuffling algorithm or some equivalent) must cycle through every possible permutation once before any particular permutation appears again? Can the statistics of permutation types (what type follow what type, how many of each type can occur, what types can never follow what types) be exploited in concert with known plaintext to predict with sufficient probability the likely permutations to follow? How much plaintext would be needed to get predictions better than 50/50? IMO the answers to these questions gauge the strength of the DTC and allow quantitative comparison to other ciphers. > > The design goal is to allow the very same permutation to occur on the > next block, and then rely on the almost infinitesimal probability of > any particular permutation occurring to be assured that it will almost > never happen. The goal is to make the permutation selection for each > and every block independent, with equal probabilities. This is a very important 'engineering' constraint on the PRNG driving the permutation selection mechanism in the DTC. And AFAIK there is no PRNG that satisfies this constraint. AFAIK (and I readily admit what I do know about cryptology is less than what I DON'T know about cryptology): A PRNG = ( S, s_0, T, U, G) where S is a finite set of states, s_0 is the initial state of the PRNG and an element of S, T is the state transition function T: S -> S mapping the elements of S to the elements of S, U is a finite set of output symbols and G is the output function G: S -> U mapping the elements of the set S to the elements of the set U. The current state is a function of the previous state. The current output of the PRNG is a function of the current state. Now the order of U cannot exceed the order of S. If the |S| = |U| then there's a one-to-one correspondence between the states and the outputs of the PRNG through the function G. If the order of U is less than the order of S, then multiple states map to the same element in the output set U and the function G is a surjection. A subset of S maps to the same element u_i in U. We see multiple occurrences of the same output symbol u_i in the output sequence from the PRNG. The function T takes the current state and maps it to the next state - i.e. state feedback. Now I *think* the longest possible output sequence of states ever possible when the previous state determines the next state occurs if and only if T : S -> S is bijective (and thus a permutation on S) AND that permutation of S (which T is) has a cycle decomposition into only one cycle, an |S|-cycle, where |S| is the order of the set S. There are |S| start states for the PRNG so there are only |S| maximal-length sequences of state the PRNG can produce. Each unique seed value s_i makes a unique state sequence. If the mapping from S to U is surjective then elements of U (the u_i ) can occur multiple times in the output sequence from the PRNG as described here. 
(All of this description is from the tutorial by Pierre L'Ecuyer in his paper "Uniform Random Number Generation", Annals of Operations Research, 1994 at his web site.) So the current output of the PRNG is a function of the current state of the PRNG. And the current state of the PRNG is a function of its past state. Let the current output of the PRNG determine the permutation to use on the current N-bit block in the DTC. Now we want the permutation selection on the current N-bit block to be independent of the permutation selections on past blocks. Independence by definition means the outcome of this trial (selecting a permutation to use on this block) does not depend on the outcome of any previous trials (selections of permutations used on past blocks.) But, the permutation selection for the current N-bit block is a function of the current output of the PRNG. The current output of the PRNG is a function of the current state of the PRNG. And the current state of the PRNG is a function of its previous state. These events are NOT independent. What happened in the past does affect the current state. And since states map to PRNG outputs, and outputs map to permutations, the permutation outcomes in the past will affect the current permutation outcome - some outcomes will be more likely than others depending on what states in the PRNG already occurred. > > We can see the selected permutation as a "value," in a sequence of > values, exactly the same way we get random values from an RNG, or the > way we think of sequences as some number of symbols, each one chosen > from a set. It is a weakness for a random generator to produce a > value which will not then re-occur until the generator recycles. Yes, I agree, and the function G described above can map a subset K of the states of S to a single output value u_i, so the number of times u_i appears in the output sequence of the PRNG is |K|. So it is possible for the same permutation of the N-bit bit-balanced block to occur multiple times in the DTC. Thanks for pointing this out to me. I disagree with the assertion that we can ever "make the permutation selection for each and every block independent, with equal probabilities." AFAIK this cannot be done with a PRNG no matter how many states over and beyond the N! required to get multiple occurrences of the same permutations for each of the N! permutations of the N-bit block. The current permutation is always dependent on the past permutations since the current state of the PRNG is always dependent on the past state. I do find this cipher interesting. I've never seen anything like this before - the combination of the bit-balanced block with the permutation. It's like a new kind of one-way function. It's easy to get the output ciphertext block given the input plaintext and the permutation, but it's "hard" to determine the permutation given the plaintext and the ciphertext. I've been mulling that over today...how to relate this to other 'one-way functions'. I use the quotes only because the existence of one-way functions has never been proved :-) John A. Malley 102667.2235@compuserve.com
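Malley's (S, s_0, T, U, G) model is easy to instantiate. In this sketch (toy sizes, chosen only for illustration) T is a full-period map on 64 states and G is a surjective 8-to-1 output map, so output symbols recur long before any state recurs:

    # Toy state-machine PRNG (S, s_0, T, U, G) with |S| = 64, |U| = 8.
    def T(s):
        return (5 * s + 1) % 64   # full-period state transition on S

    def G(s):
        return s % 8              # reducing 8:1 output map onto U

    s, outputs = 7, []            # s_0 = 7, an arbitrary seed
    for _ in range(16):
        s = T(s)
        outputs.append(G(s))

    print(outputs)                # symbols repeat; no state has repeated yet

This is the situation described for Dynamic Transposition: an output value (and hence a permutation) can recur on the very next step while the huge state marches on.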
Subject: Re: Dynamic Transposition Revisited (long) Date: Thu, 25 Jan 2001 07:11:39 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6fd076.8332438@news.io.com> References: <3A6E8AE5.A0E0CF23@compuserve.com> Newsgroups: sci.crypt Lines: 276 On Tue, 23 Jan 2001 23:57:25 -0800, in <3A6E8AE5.A0E0CF23@compuserve.com>, in sci.crypt "John A. Malley" <102667.2235@compuserve.com> wrote: >Terry Ritter wrote: >> >[snip] >> >> Surely, there is no reason to imagine that permutations must all occur >> before repeating. In fact, that would be a weakness. > >Yes, and this is where I was going in the examination of the strengths >of the DTC : > >What are the effects on the "strength" of the DTC if the PRNG selecting >the permutations (via a shuffling algorithm or some equivalent) must >cycle through every possible permutation once before any particular >permutation appears again? Fine, but note that your assumed situation does not represent the Dynamic Transposition design in the "Revisited" article. Please be careful not to open "Red Herring" arguments. If the only thing we look at is the permutations, there is no "permutation of permutations" cycle. No permutation is prevented from re-occurring adjacent to itself. And, while the RNG does of course have some cycle length, that value is far beyond what can occur. >Can the statistics of permutation types >(what type follow what type, how many of each type can occur, what types >can never follow what types) be exploited in concert with known >plaintext to predict with sufficient probability the likely permutations >to follow? Clearly, the design cannot put out every possible sequence of permutations. But that does not mean that the various permutations are not equally probable to the extent that one could measure them. In particular, one must ask how they could be measured at all. The permutations are not exposed by ciphering, or by known-plaintext, so how could any imbalance be exploited? >How much plaintext would be needed to get predictions better >than 50/50? IMO the answers to these questions gauge the strength of the >DTC and allow quantitative comparison to other ciphers. >> The design goal is to allow the very same permutation to occur on the >> next block, and then rely on the almost infinitesimal probability of >> any particular permutation occurring to be assured that it will almost >> never happen. The goal is to make the permutation selection for each >> and every block independent, with equal probabilities. > >This is a very important 'engineering' constraint on the PRNG driving >the permutation selection mechanism in the DTC. And AFAIK there is no >PRNG that satisfies this constraint. You need to read more carefully: A "design goal" is not a constraint; it is a goal. The desire to have a design in which each block is an independent selection among all possible permutations obviously cannot be achieved to perfection, but it can be approximated. To the extent that the approximation exceeds what can be measured or exploited, there would seem to be little point in doing better. To have a mechanism in which deviations from optimality are hidden is essentially to have a cipher; that is what we do: we hide imperfection behind curtains and doors. Having a structure in which deviations from perfection approach zero as we increase the size is a conventional proof technique. One way we approach the production of independent permutations is by having a far-larger RNG output value than will be used in shuffling. 
The result is that a vast number of different RNG output values will produce exactly the same shuffling value, and a vast number of different sequences will produce the same permutation. >AFAIK (and I readily admit what I >do know about cryptology is less than what I DON'T know about >cryptology): That is true for us all. >A PRNG = ( S, s_0, T, U, G) where S is a finite set of states, s_0 is >the initial state of the PRNG and an element of S, T is the state >transition function T: S -> S mapping the elements of S to the elements >of S, U is a finite set of output symbols and G is the output function >G: S -> U mapping the elements of the set S to the elements of the set >U. First of all, this is a standard state-machine model. Unfortunately, this model does not completely fit an Additive RNG, at least in the sense of providing insight as to which RNG values are going to be correlated. In an Additive RNG, adjacent values are not likely to be correlated (other than in the most abstract sense), but values which are separated by the distance between recursion polynomial feedback terms will be equally influenced by some other value. Eventually this fact can be used to expose the state of the linear RNG, provided we have sufficient data which corresponds to the linear output. In this particular case, we also have: a nonlinear filter ("jitterizer"), double-shuffling, and range reduction both to and inside the shuffling; all of these stand in the way of collecting that information. S is contrived to be extremely large, 310k bits. So even though in theory the content of S must repeat, sometime; in practice that sometime cannot be reached. Any issues which require cycle traversal would seem to be irrelevant. If we do the best we can with the model, probably the item of most interest is G. In an Additive RNG, this is a severe reducing function. It takes S, the full state of the RNG (310k bits) into the 12 bits we need to send to the shuffling of a 4k-element array. Consequently, about 2**310,036 different RNG states will produce the exact same 12-bit shuffling value. (Here we ignore the nonlinear filter.) >The current state is a function of the previous state. The current >output of the PRNG is a function of the current state. Now the order of >U cannot exceed the order of S. If the |S| = |U| then there's a >one-to-one correspondence between the states and the outputs of the PRNG >through the function G. Well, the correspondence is 1:1 to the *input* of G, but G is a reducing x:1 function. (In these terms G is actually quite severe: The internal state S is 9689 * 32 = 310,048 bits. Of this we will eventually use at most 12 in a single shuffling selection. So G is 2**310,048 : 2**12. G is a sort of hash.) >If the order of U is less than the order of S, >then multiple states map to the same element in the output set U and the >function G is a surjection. A subset of S maps to the same element u_i >in U. We see multiple occurrences of the same output symbol u_i in the >output sequence from the PRNG. Right. >The function T takes the current state and maps it to the next state - >i.e. state feedback. Now I *think* the longest possible output sequence >of states ever possible when the previous state determines the next >state occurs if and only if T : S -> S is bijective (and thus a >permutation on S) AND that permutation of S (which T is) has a cycle >decomposition into only one cycle, an |S|-cycle, where |S| is the order >of the set S. While abstractly true, that is not the case here. 
An Additive RNG does not create a maximal-length sequence in terms of its state. Instead, an Additive RNG creates a huge number of distinct cycles, each of which is "long enough." It is not necessary to have "the longest possible output sequence" when the sequence we do get vastly exceeds whatever we need. How long is the cycle? For the Additive RNG, Marsaglia proves period ((2**r)-1)*(2**(n-1)) for degree r and width n. Here we have r as 9689 and n as 32. So we have the cycle length as (2**9689 - 1) * (2**31) ~= 2**9720, which of course is some binary value of about 9,720 bits. We will not traverse 2**9720 steps in an afternoon. >There are |S| start states for the PRNG so there are only |S| >maximal-length sequences of state the PRNG can produce. Each unique seed >value s_i makes a unique state sequence. If the mapping from S to U is >surjective then elements of U (the u_i ) can occur multiple times in >the output sequence from the PRNG as described here. > >(All of this description is from the tutorial by Pierre L'Ecuyer in his >paper "Uniform Random Number Generation", Annals of Operations Research, >1994 at his web site.) It's a classic state-machine description. >So the current output of the PRNG is a function of the current state of >the PRNG. And the current state of the PRNG is a function of its past >state. Let the current output of the PRNG determine the permutation to >use on the current N-bit block in the DTC. > >Now we want the permutation selection on the current N-bit block to be >independent of the permutation selections on past blocks. >Independence by definition means the outcome of this trial (selecting a >permutation to use on this block) does not depend on the outcome of any >previous trials (selections of permutations used on past blocks.) But, >the permutation selection for the current N-bit block is a function of >the current output of the PRNG. The current output of the PRNG is a >function of the current state of the PRNG. And the current state of the >PRNG is a function of its previous state. These events are NOT >independent. What happened in the past does affect the current state. >And since states map to PRNG outputs, and outputs map to permutations, >the permutation outcomes in the past will affect the current permutation >outcome - some outcomes will be more likely than others depending on >what states in the PRNG already occurred. Actually, successive Additive RNG output values will tend to be "fairly" independent, because of the structure of the generator. Even if that were not so, RNG value independence is *approached* by the action of G which is 2**310,036 : 1. Such correlation as does occur between values is thus massively reduced and hidden by the severe hash action of G. Similarly, when we shuffle, there is also a substantial reduction (from about 10k 12-bit values or 120k bits to a 4k-element shuffle representing about 43k bits). Again, there is no 1:1 relationship, but instead a 3:1 reduction or hash. Nor can the issue be about whether the produced permutations are independent in the abstract. The whole point of a cipher design is to take the imperfections of a real design and hide them in such a way that they cannot be exploited. >> We can see the selected permutation as a "value," in a sequence of >> values, exactly the same way we get random values from an RNG, or the >> way we think of sequences as some number of symbols, each one chosen >> from a set. 
It is a weakness for a random generator to produce a >> value which will not then re-occur until the generator recycles. > >Yes, I agree, and the function G described above can map a subset K of >the states of S to a single output value u_i, so the number of times u_i >appears in the output sequence of the PRNG is |K|. So it is possible >for the same permutation of the N-bit bit-balanced block to occur >multiple times in the DTC. Thanks for pointing this out to me. > >I disagree with the assertion that we can ever "make the permutation >selection for each and every block independent, with equal >probabilities." As far as I know, I made no such assertion. I think this is it: >>The goal is to make the permutation selection for each >>and every block independent, with equal probabilities. So what I really said was that such was *the goal*. In real cryptography, we often do not achieve such goals, but may well approach them sufficiently so the difference cannot be distinguished. I think this is such a case. It is, however, disturbing to see an argument made which deliberately takes a statement out of the original context so that it can be easily dismissed. Now, I know this is a common academic disease, but I will not continue a discussion in which that is the mode of argument. 'nuff said. >AFAIK this cannot be done with a PRNG no matter how >many states over and beyond the N! required to get multiple occurrences >of the same permutations for each of the N! permutations of the N-bit >block. The current permutation is always dependent on the past >permutations since the current state of the PRNG is always dependent on >the past state. > >I do find this cipher interesting. I've never seen anything like this >before - the combination of the bit-balanced block with the >permutation. It's like a new kind of one-way function. It's easy to get >the output ciphertext block given the input plaintext and the >permutation, but its "hard" to determine the permutation given the >plaintext and the ciphertext. I've been mulling that over today...how >to relate this to other 'one-way functions'. I use the quotes only >because the existence of one-way functions has never been proved :-) Indeed. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
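The magnitudes quoted in this post are easy to confirm with exact integer arithmetic, using only the figures given there (degree 9689, width 32, 12-bit shuffling selections):

    r, n = 9689, 32
    state_bits = r * n                         # 310,048 bits of RNG state
    period = (2**r - 1) * 2**(n - 1)           # Marsaglia's period
    states_per_value = 2**(state_bits - 12)    # states behind one 12-bit output

    print(state_bits)                          # 310048
    print(period.bit_length())                 # 9720, i.e. period ~ 2**9720
    print(states_per_value.bit_length() - 1)   # 310036, i.e. ~ 2**310,036 : 1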
Subject: Re: Dynamic Transposition Revisited (long) Date: Wed, 24 Jan 2001 23:47:34 -0800 From: "John A. Malley" <102667.2235@compuserve.com> Message-ID: <3A6FDA16.DFE6ACEE@compuserve.com> References: <3a6fd076.8332438@news.io.com> Newsgroups: sci.crypt Lines: 52 Terry Ritter wrote: > [snip some good stuff] > > > >I disagree with the assertion that we can ever "make the permutation > >selection for each and every block independent, with equal > >probabilities." > > As far as I know, I made no such assertion. I think this is it: > > >>The goal is to make the permutation selection for each > >>and every block independent, with equal probabilities. > > So what I really said was that such was *the goal*. In real > cryptography, we often do not achieve such goals, but may well > approach them sufficiently so the difference cannot be distinguished. > I think this is such a case. > > It is, however, disturbing to see an argument made which deliberately > takes a statement out of the original context so that it can be easily > dismissed. Now, I know this is a common academic disease, but I will > not continue a discussion in which that is the mode of argument. > 'nuff said. I meant no disrespect and apologize if I offended you. I did not deliberately take the statement out of the original context in an attempt to easily dismiss it. I am not attempting to dismiss the DTC. I am only attempting to explore ways to describe its "strength". Your replies to my posts have helped me better understand the strengths of the DTC. I appreciate the difference between "goal" and actual achievement as you explained in your response. I thought you were asserting DTC achieved permutation selection for each and every block independent from other blocks, and with equal probabilities, due to your previous post in this thread, where it's stated > There are (N!)**S possible sequences of permutations, of sequence > length S. I understood this statement to mean that each N-bit block gets one of N! possible permutations applied to it, independent of any preceding or following block, for S blocks that make up the message. Thank you for replying to my posts, thanks for the dialogue. I do hope we will continue to correspond. John A. Malley 102667.2235@compuserve.com
Subject: Re: Dynamic Transposition Revisited (long) Date: Thu, 25 Jan 2001 09:05:10 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6febf9.15375599@news.io.com> References: <3A6FDA16.DFE6ACEE@compuserve.com> Newsgroups: sci.crypt Lines: 35 On Wed, 24 Jan 2001 23:47:34 -0800, in <3A6FDA16.DFE6ACEE@compuserve.com>, in sci.crypt "John A. Malley" <102667.2235@compuserve.com> wrote: >[...] >I appreciate the difference between "goal" and actual achievement as you >explained in your response. I thought you were asserting DTC achieved >permutation selection for each and every block independent from other >blocks, and with equal probabilities, due to your previous post in this >thread, where it's stated > >> There are (N!)**S possible sequences of permutations, of sequence >> length S. > >I understood this statement to mean that each N-bit block gets one of N! >possible permutations applied to it, independent of any preceding or >following block, for S blocks that make up the message. Basically, that response was trying to present a correct count for permutation strings, instead of (as I recall) the (N!)! thing, since there is no such permutation of permutations. It is an expectation and a goal. In practice, I do think we would have a very difficult time distinguishing reality from that ideal. >Thank you for replying to my posts, thanks for the dialogue. I do hope >we will continue to correspond. Great. 'nuff said. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 19:33:06 GMT From: AllanW <allan_w@my-deja.com> Message-ID: <94sjdi$8ti$1@nnrp1.deja.com> References: <3a6de525.7305780@news.io.com> Newsgroups: sci.crypt Lines: 169 ritter@io.com (Terry Ritter) wrote: > > On Tue, 23 Jan 2001 08:29:16 -0800, in > <3A6DB15C.232B4A88@compuserve.com>, in sci.crypt "John A. Malley" > <102667.2235@compuserve.com> wrote: > > >Terry Ritter wrote: > >> > >[snip] > > > >> >This may be a good place to continue the cryptanalysis of the strength > >> >of the DT cipher. A PRNG with N! states to make every permutation of > >> >the bits in an N bit block can only generate some of the possible > >> >sequences of permutations. There are (N!)! possible sequences of > >> >permutations. > >> > >> There are (N!)**S possible sequences of permutations, of sequence > >> length S. > > > >Please help - where did I go wrong in calculating the total number of > >possible sequences of the N! total possible permutations? > > > >Here's my reasoning - > > > >Given N bits there are N! different, unique ways to permute those bits - > >the N! unique permutations. They make a set P. > > > >I number the permutations in the set from 1 to N!. How many different > >ways can I sequence the members of the set of permutations? Or in other > >words, how many different ways can I write down (list) the elements of > >P? Let the number of elements in P be M, so > >M = N!. The number of unique listing sequences of the M elements is the > >number of permutations of the M elements of P, which is M!. Since M = > >N!, then M = (N!)!. > > > >So that's how I derived the number of ways the individual elements of > >the set of permutations of an N bit block can be listed out as a > >sequence. > > OK, I had no idea what you were doing. Of course, I still have no > idea where you are going. Do you have any idea how big (N!)! is? > Even 128! is 3.85620482e+215, and the factorial of that is some number > which is about 2.75610295e+218 bits long. (From > > http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM#Factorials > > ). > > Surely, there is no reason to imagine that permutations must all occur > before repeating. In fact, that would be a weakness. > > The design goal is to allow the very same permutation to occur on the > next block, and then rely on the almost infinitesimal probability of > any particular permutation occurring to be assured that it will almost > never happen. The goal is to make the permutation selection for each > and every block independent, with equal probabilities. > > We can see the selected permutation as a "value," in a sequence of > values, exactly the same way we get random values from an RNG, or the > way we think of sequences as some number of symbols, each one chosen > from a set. It is a weakness for a random generator to produce a > value which will not then re-occur until the generator recycles. > > >> >AFAIK it's safe to say the PRNG generates N! sequences > >> >(assuming the set of seed values is equal to the set of possible outputs > >> >of the PRNG, both sets are of order N!.) Only N!/ (N!)! of the sequences > >> >can ever be seen. > >> > >> ?? > > > >There are M! ways to list the M values from 1 - M. > > These are called permutations. > > >A PRNG outputs lists > >(sequences) of the values between 1-M. > > Some RNG's are like that. Don't do that. > > >The PRNG starts from a seed > >value s and makes a list of the M values. Each list is different. 
The > >PRNG can only make as many unique lists of the M values are there are > >unique seeds s. Let the order of the set S of seed values be K. Then > >the PRNG can only make K out of M! listings (sequences) of the M values > >from 1 - M. So the PRNG only produces a fraction K / M! of the total > >possible sequences of the M values. > > Internally, there is some concept of a huge cycle which is shuffled by > an RNG -- the internal state -- but that concept is not necessarily > the output value. Surely, when we have a huge internal state, we do > not imagine that we must take the whole amount of that state as the > RNG result. When we do not, any particular value can re-occur on the > very next RNG step. > > The Additive RNG is discussed in Knuth II. And even though those are > tiny, it should be clear that, in an Additive RNG, values can and do > repeat. The intent is to make the probability of that immeasurably > close to independent selection. > > >Let the set S be the set of M output values of the PRNG. The order of > >the set M is M. Then the PRNG only produces a fraction M / M! of the > >total possible sequences of the M values. > > > >So let M = N! (the number of possible permutations of N bits). The set > >of seed values S for this PRNG is the set of numbers from 1 through M! > >The PRNG generates a number from 1 through M! to choose the permutation > >to apply to the ith bit block. (This is the most abstract view of the > >PRNG's role in the DTC.) > > > >This PRNG can produce only M = N! possible output sequences of > >permutations. That's why I state the PRNG produces a fraction M! / (M!)! > >of the total possible listings of the M! permutations of the N bits. > > > >Why is this derivation wrong? > > Well, it is only wrong if it isn't a step toward your goal. But I > don't see where you are going. It certainly is an upper limit!!! > > There is some truth to it, but the implication I get is that once a > particular permutation occurs, it cannot re-occur until the RNG has > cycled, which is false. The sequence of permutations will of course > start to repeat after the RNG has cycled enough to produce the exact > same internal state at the start point of the shuffling process (which > may well take many, many full RNG cycles). But a single RNG cycle > length is, of course, far beyond what could be traversed in practice. > > The problem appears to be based on a fundamental misunderstanding of > how RNG's produce output values. Presumably there is a generalization > from simple RNG's to strong ones. It either does not apply, or is > trying to compute something to which I assign little significance. I think John A. Malley was trying to find the number of possible outputs of the PRNG and compare it to the number of states of the bit-balanced plaintext block. There is more than one way to do the shuffle, but if you're selecting N numbers of value 0 to (N-1) then there are N**N possible outputs for the PRNG, and N!/((N/2)!(N/2)!) possible states for the plaintext. Which illustrates one of Terry Ritter's original points: There are many more possible outputs from the PRNG than there are states for the plaintext, so that for any one plaintext/ciphertext combination there is no way to tell what the PRNG outputs were. -- Allan_W@my-deja.com is a "Spam Magnet," never read. Please reply in newsgroups only, sorry. Sent via Deja.com http://www.deja.com/
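AllanW's comparison can be computed directly. With a hypothetical N = 8 for readability: a shuffle driven by N selections, each in 0..N-1, has N**N possible control sequences, while only N!/((N/2)!(N/2)!) balanced blocks exist:

    from math import factorial

    N = 8
    prng_outputs = N ** N                              # 16,777,216
    balanced = factorial(N) // factorial(N // 2) ** 2  # 70 balanced blocks

    print(prng_outputs, balanced)
    print(prng_outputs // balanced)   # ~240,000 control sequences per block

So vastly more control sequences exist than visible block pairs, which is the stated reason the ciphertext cannot identify the PRNG output.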
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 23:08:54 -0800 From: "John A. Malley" <102667.2235@compuserve.com> Message-ID: <3A727406.970A466@compuserve.com> References: <94sjdi$8ti$1@nnrp1.deja.com> Newsgroups: sci.crypt Lines: 86 AllanW wrote: > I think John A. Malley was trying to find the number of > possible outputs of the PRNG and compare it to the number of > states of the bit-balanced plaintext block. There is more than > one way to do the shuffle, but if you're selecting N numbers > of value 0 to (N-1) then there are N**N possible outputs for > the PRNG, and N!/((N/2)!(N/2)!) possible states for the > plaintext. > Yes, that's true, I wanted to understand the relationship between the number of possible outputs of the PRNG compared to the number of states of the bit-balanced blocks, and, since a predictable (not cryptographically secure) PRNG is used, whether there were statistical vulnerabilities in the permutation selections. I started with the high-level description of the Dynamic Transposition cipher: (1) Collect plaintext data in bit-balanced (or almost bit-balanced) blocks. (2) Shuffle the bits in those blocks under the control of a keyed pseudorandom sequence. Three errors on my part propagated through the thread of discourse with Mr. Ritter: 1. I did not understand the requirement that one out of the possible N! permutations of the N-bit bit-balanced block be 'randomly' applied to the current N-bit block with independence from the permutations applied to previous blocks (or as independently as possible). So a random permutation out of N! is applied to each block (as a goal.) 2. I immediately grabbed my copy of Knuth's "The Art of Computer Programming, Seminumerical Algorithms", Vol. 2 upon a quick read of the original "Dynamic Transposition Revisited" post to read about the Shuffling Algorithm P, and there in the text I read the cautionary paragraph that Algorithm P cannot generate more than m distinct permutations when driven by a recursive (output feedback) generator that takes on only m distinct output values. Here my imagination fixed on PRNGs using generators on finite fields. Such a PRNG based on Z*_p where p is prime, for example, makes an output sequence containing every value from 1 to p-1, once, in some order, before repeating. Such a PRNG could generate every value from 1 to N! but each value (permutation) would occur only once and would never reappear until the PRNG cycled. This was consistent with misunderstanding 1) above. All statements I made about the number of outputs and number of possible sequences of permutations were predicated on the use of a PRNG that outputs each possible permutation once and only once before its output cycle repeats. I thought this would be a weakness with an implementation of a DTC, but Mr. Ritter already knew that and never had any intention to use such a PRNG. 3. I overlooked the fact that a PRNG can generate multiple instances of the same output value if it maps more than one internal state to the same output value. (The canonical model cited in the thread, from a paper by Prof. Pierre L'Ecuyer, "Uniform Random Number Generation", in the Annals of Operations Research, 1994, shows this with the function G that maps internal states of the PRNG to output symbols.) So in fact a PRNG could get close to (but never exactly match) the goal behavior of generating one random permutation out of N!, independently for each block of plaintext, only if it had >> N! states. I understand these points now. 
Thanks for your help, too :-) Now I'm spending time trying to understand Mr. Ritter's statement about the additive PRNG and the canonical PRNG model in L'Ecuyer's paper, "Unfortunately, this model does not completely fit an Additive RNG, at least in the sense of providing insight as to which RNG values are going to be correlated," which threw me, since L'Ecuyer addresses the additive PRNG in that paper but never says anything about it differing in any sense from the canonical model in the paper. It's best I go off and read some more :-) John A. Malley 102667.2235@compuserve.com
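The finite-field generator Malley describes behaves exactly as stated: successive powers of a generator g of Z*_p visit every value in 1..p-1 once per cycle. A toy check (hypothetical p = 11, g = 2):

    p, g = 11, 2
    x, seen = 1, []
    for _ in range(p - 1):
        x = (x * g) % p
        seen.append(x)

    print(seen)                                # [2, 4, 8, 5, 10, 9, 7, 3, 6, 1]
    assert sorted(seen) == list(range(1, p))   # each value exactly once

A permutation selector built on such a generator would never repeat a selection until the whole cycle completed -- the behavior rejected above as a weakness.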
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 22:42:09 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a734eaa.9284035@news.io.com> References: <3A727406.970A466@compuserve.com> Newsgroups: sci.crypt Lines: 48 On Fri, 26 Jan 2001 23:08:54 -0800, in <3A727406.970A466@compuserve.com>, in sci.crypt "John A. Malley" <102667.2235@compuserve.com> wrote: >[...] >Now I'm spending time trying to understand Mr. Ritter's statement about >the additive PRNG and the canonical PRNG model in L'Ecuyer's paper, > >"Unfortunately, this model does not completely fit an Additive RNG, at >least in the sense of providing insight as to which RNG values are >going to be correlated," > >which threw me, since L'Ecuyer addresses the additive PRNG in that paper >but never says anything about it differing in any sense from the >canonical model in the paper. As far as I can recall (after so many words in so many directions), I had two things in mind with that comment. First, it seems to me that a mapping from state S through G to output symbols does not really capture the action of an Additive RNG. The feedback polynomial has to be primitive mod 2, and most designers use a trinomial, so that only 2 element values need be selected and added. The distinction is that only 2 elements from S are involved at any one time, out of perhaps 10,000. Any information one has about that result is useless until we encounter one of those elements again, some time later. Next, the issue of correlated values is related to the particular feedback polynomial used. Each time the RNG takes another step, "new" values are selected. These may be independent, both with respect to each other, and with respect to all previous values, until we reach the point where we have used one of the values before. At that time we get a result which is still unknown, because the other value in the addition is unknown, but certainly is "correlated" to a previous output. This is the correlation with respect to a in a + b = c, and in a + d = e. This is the beginning of the linear relationship which eventually becomes solvable in any RNG. That is why I use a "jitterizer" stage to isolate RNG output. The point is that if there is to be "correlation" in the selection of Shuffle elements, that is how the correlation will occur. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
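The linear relationship sketched here is what makes a raw trinomial generator unsafe to expose: once an opponent captures as many consecutive outputs as the longer lag, the recurrence predicts everything that follows. A toy demonstration (hypothetical lags 2 and 5, width 16; not the posted design, which hides the raw output):

    P, Q, MOD = 2, 5, 2**16
    x = [1, 2, 3, 4, 5]                       # seed state
    for i in range(5, 200):
        x.append((x[i - P] + x[i - Q]) % MOD)

    # An opponent who sees any Q consecutive raw outputs...
    pred = x[100:105]
    for i in range(5, 25):
        pred.append((pred[i - P] + pred[i - Q]) % MOD)

    assert pred[5:] == x[105:125]             # ...predicts all later outputs

Hence the jitterizer and the severe reduction in G: they deny an opponent the raw linear outputs this attack needs.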
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 19:09:04 GMT From: AllanW <allan_w@my-deja.com> Message-ID: <94si09$7j5$1@nnrp1.deja.com> References: <3A6DB15C.232B4A88@compuserve.com> Newsgroups: sci.crypt Lines: 184 "John A. Malley" <102667.2235@compuserve.com> wrote: > > >This may be a good place to continue the cryptanalysis of the strength > > >of the DT cipher. A PRNG with N! states to make every permutation of > > >the bits in an N bit block can only generate some of the possible > > >sequences of permutations. There are (N!)! possible sequences of > > >permutations. > > Terry Ritter wrote: > > There are (N!)**S possible sequences of permutations, of sequence > > length S. If S is the sequence length, what is N? > Please help - where did I go wrong in calculating the total number of > possible sequences of the N! total possible permutations? > > Here's my reasoning - [snip] To calculate the number of permutations for N values, remember that there are N values for the first number times (N-1) for the second, and so on. This is the definition of N!. So for the values 0,1,2,3,4,5, there are 6!=720 permutations, and here they are: 012345 012354 012435 012453 012534 012543 013245 013254 013425 013452 013524 013542 014235 014253 014325 014352 014523 014532 015234 015243 015324 015342 015423 015432 021345 021354 021435 021453 021534 021543 023145 023154 023415 023451 023514 023541 024135 024153 024315 024351 024513 024531 025134 025143 025314 025341 025413 025431 031245 031254 031425 031452 031524 031542 032145 032154 032415 032451 032514 032541 034125 034152 034215 034251 034512 034521 035124 035142 035214 035241 035412 035421 041235 041253 041325 041352 041523 041532 042135 042153 042315 042351 042513 042531 043125 043152 043215 043251 043512 043521 045123 045132 045213 045231 045312 045321 051234 051243 051324 051342 051423 051432 052134 052143 052314 052341 052413 052431 053124 053142 053214 053241 053412 053421 054123 054132 054213 054231 054312 054321 102345 102354 102435 102453 102534 102543 103245 103254 103425 103452 103524 103542 104235 104253 104325 104352 104523 104532 105234 105243 105324 105342 105423 105432 120345 120354 120435 120453 120534 120543 123045 123054 123405 123450 123504 123540 124035 124053 124305 124350 124503 124530 125034 125043 125304 125340 125403 125430 130245 130254 130425 130452 130524 130542 132045 132054 132405 132450 132504 132540 134025 134052 134205 134250 134502 134520 135024 135042 135204 135240 135402 135420 140235 140253 140325 140352 140523 140532 142035 142053 142305 142350 142503 142530 143025 143052 143205 143250 143502 143520 145023 145032 145203 145230 145302 145320 150234 150243 150324 150342 150423 150432 152034 152043 152304 152340 152403 152430 153024 153042 153204 153240 153402 153420 154023 154032 154203 154230 154302 154320 201345 201354 201435 201453 201534 201543 203145 203154 203415 203451 203514 203541 204135 204153 204315 204351 204513 204531 205134 205143 205314 205341 205413 205431 210345 210354 210435 210453 210534 210543 213045 213054 213405 213450 213504 213540 214035 214053 214305 214350 214503 214530 215034 215043 215304 215340 215403 215430 230145 230154 230415 230451 230514 230541 231045 231054 231405 231450 231504 231540 234015 234051 234105 234150 234501 234510 235014 235041 235104 235140 235401 235410 240135 240153 240315 240351 240513 240531 241035 241053 241305 241350 241503 241530 243015 243051 243105 243150 243501 243510 245013 245031 245103 245130 245301 245310 250134 250143 250314 
250341 250413 250431 251034 251043 251304 251340 251403 251430 253014 253041 253104 253140 253401 253410 254013 254031 254103 254130 254301 254310 301245 301254 301425 301452 301524 301542 302145 302154 302415 302451 302514 302541 304125 304152 304215 304251 304512 304521 305124 305142 305214 305241 305412 305421 310245 310254 310425 310452 310524 310542 312045 312054 312405 312450 312504 312540 314025 314052 314205 314250 314502 314520 315024 315042 315204 315240 315402 315420 320145 320154 320415 320451 320514 320541 321045 321054 321405 321450 321504 321540 324015 324051 324105 324150 324501 324510 325014 325041 325104 325140 325401 325410 340125 340152 340215 340251 340512 340521 341025 341052 341205 341250 341502 341520 342015 342051 342105 342150 342501 342510 345012 345021 345102 345120 345201 345210 350124 350142 350214 350241 350412 350421 351024 351042 351204 351240 351402 351420 352014 352041 352104 352140 352401 352410 354012 354021 354102 354120 354201 354210 401235 401253 401325 401352 401523 401532 402135 402153 402315 402351 402513 402531 403125 403152 403215 403251 403512 403521 405123 405132 405213 405231 405312 405321 410235 410253 410325 410352 410523 410532 412035 412053 412305 412350 412503 412530 413025 413052 413205 413250 413502 413520 415023 415032 415203 415230 415302 415320 420135 420153 420315 420351 420513 420531 421035 421053 421305 421350 421503 421530 423015 423051 423105 423150 423501 423510 425013 425031 425103 425130 425301 425310 430125 430152 430215 430251 430512 430521 431025 431052 431205 431250 431502 431520 432015 432051 432105 432150 432501 432510 435012 435021 435102 435120 435201 435210 450123 450132 450213 450231 450312 450321 451023 451032 451203 451230 451302 451320 452013 452031 452103 452130 452301 452310 453012 453021 453102 453120 453201 453210 501234 501243 501324 501342 501423 501432 502134 502143 502314 502341 502413 502431 503124 503142 503214 503241 503412 503421 504123 504132 504213 504231 504312 504321 510234 510243 510324 510342 510423 510432 512034 512043 512304 512340 512403 512430 513024 513042 513204 513240 513402 513420 514023 514032 514203 514230 514302 514320 520134 520143 520314 520341 520413 520431 521034 521043 521304 521340 521403 521430 523014 523041 523104 523140 523401 523410 524013 524031 524103 524130 524301 524310 530124 530142 530214 530241 530412 530421 531024 531042 531204 531240 531402 531420 532014 532041 532104 532140 532401 532410 534012 534021 534102 534120 534201 534210 540123 540132 540213 540231 540312 540321 541023 541032 541203 541230 541302 541320 542013 542031 542103 542130 542301 542310 543012 543021 543102 543120 543201 543210 If some of the values repeat, then N! is too high. For instance, if we change the 5 to be 1, you can see that 012345 and 052341 both change into 012341 and we have a duplicate. In general, when there are A identical values then we must divide the number of permutations by A! to remove the duplicates. In the case we're considering, there are N/2 0's and N/2 1's, so we must divide N! by (N/2)! and then divide that by (N/2)! again. So for 3 0's and 3 1's, we get 6! / (3!)(3!) = 720/6*6 = 6*5*4/3*2 = 20 permutations, and here they are: 000111 001011 001101 001110 010011 010101 010110 011001 011010 011100 100011 100101 100110 101001 101010 101100 110001 110010 110100 111000 This works for much larger values of N, too. So the correct formula for the number of permutations of length N bits, balanced with N/2 0-bits and N/2 1-bits is: N! ----------- (N/2)! 
** 2 (that is, N! / ((N/2)! * (N/2)!)). -- Allan_W@my-deja.com is a "Spam Magnet," never read. Please reply in newsgroups only, sorry. Sent via Deja.com http://www.deja.com/
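The count is easy to verify by enumeration. For N = 6, the formula predicts the same 20 balanced blocks listed above:

    from itertools import combinations
    from math import factorial

    N = 6
    # Build each balanced block by choosing positions for its 1-bits.
    blocks = [''.join('1' if i in ones else '0' for i in range(N))
              for ones in combinations(range(N), N // 2)]

    print(len(blocks))                                       # 20
    assert len(blocks) == factorial(N) // factorial(N // 2) ** 2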
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 19:25:06 GMT From: AllanW <allan_w@my-deja.com> Message-ID: <94siu8$8bo$1@nnrp1.deja.com> References: <3A6DB15C.232B4A88@compuserve.com> Newsgroups: sci.crypt Lines: 36 "John A. Malley" <102667.2235@compuserve.com> wrote: > Please help - where did I go wrong in calculating the total number of > possible sequences of the N! total possible permutations? I just posted a formula for the number of balanced permutations in an N-bit block, but I should have added a formula for all permutations (including non-balanced ones). That formula is much easier. Each bit can be either 0 or 1. Since 01 is not the same as 10, there are no duplicates. So multiply 2 for the first bit times 2 for the second, and so on. The formula is 2^N (or 2**N) where ^ and ** are two alternative ways of expressing powers. In other words, 2 to the power of N. As an example that's probably familiar, when N is 8 then there are 2**8=256 possible values. An 8-bit unsigned byte can take on 256 different values (0 to 255), and this is not a coincidence. By the same token, an unsigned 16-bit word can contain any number from 0 to 65535, and 2**16=65536. When estimating the number for large N, use the approximation: 2**N is slightly more than 10**(0.3*N) So 2**512 is slightly more than 10**(153.6), meaning that the number of permutations of 512 bits is about 1e154. According to my calculator the actual value is about 1.34078e154, so the approximation is pretty close. -- Allan_W@my-deja.com is a "Spam Magnet," never read. Please reply in newsgroups only, sorry. Sent via Deja.com http://www.deja.com/
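The rule of thumb works because log10(2) ~ 0.30103, so 2**N ~ 10**(0.30103*N); using 0.3 slightly underestimates the exponent. Checking N = 512 exactly:

    N = 512
    print(0.3 * N)              # 153.6, the quick estimate
    print(len(str(2**N)) - 1)   # 154, the true decimal exponent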
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 06:54:04 GMT From: "Matt Timmermans" <matt@timmermans.nospam-remove.org> Message-ID: <gU9b6.849$M63.76888@news20.bellglobal.com> References: <3a6d0765.2630739@news.io.com> Newsgroups: sci.crypt Lines: 40 "Terry Ritter" <ritter@io.com> wrote in message news:3a6d0765.2630739@news.io.com... > Essentially the problem is one of ambiguity: the idea that Kahn is > talking about a real, practical OTP, when in fact he can only be > talking about the theoretical OTP. For example, he starts out saying: Real practical OTP's do exist, and have been used to good effect. It's not that hard to generate effective keying material -- you could use dice or radioactive decay, for example. Getting the key to the intended recipient of messages is a problem, but it's almost the same problem you have with regular symmetric ciphers. The only differences here are that the key is big, and it doesn't last forever. If you can meet someone personally every month or so and exchange CDRs, and if you have a sound card, you can have your regular communications protected by a perfectly viable, practical, and provably secure OTP. The problem newbies have understanding OTP is not in assuming that it's provably secure (because it is), but in mistaking keyed PRNG output for an actual random message key. The newbie cryptographer likes the idea of provable security, but hates those nasty key requirements, so he invents a marvelous new version of "OTP" that is practical because only a short key is required, and the long message key can be generated from this seed. Pretty much all of the cryptosystems that pop up here claiming to be OTP are not -- OTP is well defined, and it requires a _random_ key as long as all of the messages you're ever going to send with it, and it requires that no part of that key ever be divulged or reused. Oh well, newbies invent broken stuff all the time, but it certainly _is_ possible to make real one-time pads in practice, so the provable security of OTP does not apply to theoretical systems only. And any "OTP" that satisfies the definition of the term is, in fact, provably secure. OTP is also the simplest provably secure cipher we can have until we prove that P!=NP, unless you're prepared to define polynomial-time lower bounds for known-plaintext attacks as "security".
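For reference, the well-defined OTP described here is only a few lines of code; all the difficulty is in the pad. In this sketch Python's secrets module merely stands in for a true physical source (dice, radioactive decay); substituting a keyed PRNG at that point is exactly the "newbie" move the post warns against:

    import secrets

    msg = b"attack at dawn"
    pad = secrets.token_bytes(len(msg))           # pad as long as the message, used once
    ct = bytes(m ^ k for m, k in zip(msg, pad))   # encrypt: XOR with the pad
    pt = bytes(c ^ k for c, k in zip(ct, pad))    # decrypt: the same XOR
    assert pt == msg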
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 08:20:25 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6d3eb8.1306509@news.io.com> References: <gU9b6.849$M63.76888@news20.bellglobal.com> Newsgroups: sci.crypt Lines: 85 On Tue, 23 Jan 2001 06:54:04 GMT, in <gU9b6.849$M63.76888@news20.bellglobal.com>, in sci.crypt "Matt Timmermans" <matt@timmermans.nospam-remove.org> wrote: >"Terry Ritter" <ritter@io.com> wrote in message >news:3a6d0765.2630739@news.io.com... >> Essentially the problem is one of ambiguity: the idea that Kahn is >> talking about a real, practical OTP, when in fact he can only be >> talking about the theoretical OTP. For example, he starts out saying: > >Real practical OTP's do exist, and have been used to good effect. It's not >that hard to generate effective keying material -- you could use dice or >radioactive decay, for example. Getting the key to the intended recipient >of messages is a problem, but it's almost the same problem you have with >regular symmetric ciphers. The only differences here are that the key is >big, and it doesn't last forever. If you can meet someone personally every >month or so and exchange CDRs, and if you have a sound card, you can >have your regular communications protected by a perfectly viable, practical, >and provably secure OTP. > >The problem newbies have understanding OTP is not in assuming that it's >provably secure (because it is), No, it is not, at least not when used in practice. Normally, only the theoretical model of an OTP can be proven secure. >but in mistaking keyed PRNG output for an >actual random message key. The newbie cryptographer likes the idea of >provable security, but hates those nasty key requirements, so he invents a >marvelous new version of "OTP" that is practical because only a short key is >required, and the long message key can be generated from this seed. Pretty >much all of the cryptosystems that pop up here claiming to be OTP are not -- >OTP is well defined, and it requires a _random_ key as long as all of the >messages you're ever going to send with it, and it requires that no part of >that key ever be divulged or reused. > >Oh well, newbies invent broken stuff all the time, but it certainly _is_ >possible to make real one-time pads in practice, so the provable security of >OTP does not apply to theoretical systems only. The provable security of the OTP *does* apply to theoretical systems only. Assume the supposedly "random" pad you have is in fact predictable. Surely you will not argue that an "OTP" with a completely predictable sequence is secure. But wait! Isn't the OTP *proven* secure? For proven OTP security it is not sufficient simply to use the sequence only once. It is *also* necessary for the sequence to be "random," or more explicitly, "unpredictable." The problem with this requirement is that we cannot measure an arbitrary sequence and thus know how unpredictable it is. Nor can we assemble all sorts of measurements of radioactive events or other random sources with the guaranteed assurance of unpredictability that proof requires. That does not mean that most sequences we develop will not be very strong; what it means is that we cannot prove it. No practical OTP can be proven secure unless the sequence it uses can be proven to be unpredictable, and that is generally impossible. >And any "OTP" that >satisfies the definition of the term is, in fact, provably secure. 
Now we are getting into the worst sort of circular definition: If we try to build an OTP, we are expecting to in fact get an OTP. If later we find that our information has been extracted from the cipher -- if we find it was not, after all, "provably secure" -- it is a bit cute to be told: "Oh, I guess it must not have been a real OTP after all." If we cannot tell whether or not a system is an OTP before it is found weak, there is no point in having the concept of OTP. >OTP is >also the simplest provably secure cipher we can have until we prove that >P!=NP, unless you're prepared to define polynomial-time lower bounds for >known-plaintext attacks as "security". --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Wed, 24 Jan 2001 00:22:44 GMT From: William Hugh Murray <whmurray@optonline.net> Message-ID: <3A6E2024.6B7BC1E5@optonline.net> References: <3a6d3eb8.1306509@news.io.com> Newsgroups: sci.crypt Lines: 18 Terry Ritter wrote: > On Tue, 23 Jan 2001 06:54:04 GMT, in > <gU9b6.849$M63.76888@news20.bellglobal.com>, in sci.crypt "Matt > Timmermans" <matt@timmermans.nospam-remove.org> wrote: > > >"Terry Ritter" <ritter@io.com> wrote in message > >news:3a6d0765.2630739@news.io.com... > >> Essentially the problem is one of ambiguity: the idea that Kahn is > >> talking about a real, practical OTP, when in fact he can only be > >> talking about the theoretical OTP. For example, he starts out saying: > > > >Real practical OTP's do exist, and have been used to good effect. I agree with Terry on this. It can be nearly practical or nearly provable but not both.
Subject: Re: Dynamic Transposition Revisited (long)
Date: Wed, 24 Jan 2001 03:31:28 GMT
From: "Matt Timmermans" <matt@timmermans.nospam-remove.org>
Message-ID: <k0sb6.111177$JT5.4022661@news20.bellglobal.com>
References: <3a6d3eb8.1306509@news.io.com>
Newsgroups: sci.crypt
Lines: 39

"Terry Ritter" <ritter@io.com> wrote in message
news:3a6d3eb8.1306509@news.io.com...
> Assume the supposedly "random" pad you have is in fact predictable.
> Surely you will not argue that an "OTP" with a completely predictable
> sequence is secure.  But wait!  Isn't the OTP *proven* secure?

The pad is defined to be random.  If it's not random, then you're not using OTP.  Whatever cipher you _are_ using may very well be insecure.

> For proven OTP security it is not sufficient simply to use the
> sequence only once.  It is *also* necessary for the sequence to be
> "random," or more explicitly, "unpredictable."

Yes.  That's how OTP is defined.

> The problem with this requirement is that we cannot measure an
> arbitrary sequence and thus know how unpredictable it is.

You also cannot measure a Rijndael key to find out whether or not it is known to other parties -- you don't use arbitrary sequences as keys.

> Nor can we
> assemble all sorts of measurements of radioactive events or other
> random sources with the guaranteed assurance of unpredictability that
> proof requires.  That does not mean that most sequences we develop
> will not be very strong; what it means is that we cannot prove it.

Sure you can.  You can collect a large amount of biased random data by sampling natural phenomena, use IDA to divide it into many small chunks, take one, and throw the rest away.  Repeat until you have enough.  The only difficult part is making sure you underestimate the entropy of your source -- and even that's not too hard.  With certain types of natural phenomena, you can even base a proof of unpredictability on the uncertainty principle.
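Timmermans's distillation step can be sketched without implementing IDA itself; as a simpler stand-in with the same intent (condense many biased samples into fewer, nearly uniform bits under a deliberate entropy underestimate), an XOR fold works.  The 16:1 ratio below is an illustrative assumption, not a figure from the post:

    # XOR-fold condenser: credit each raw byte with, say, at least
    # 1 bit of entropy -- a deliberate underestimate -- and fold 16
    # raw bytes into each output byte for a wide safety margin.
    def xor_fold(raw: bytes, ratio: int = 16) -> bytes:
        out = bytearray()
        for i in range(0, len(raw) - ratio + 1, ratio):
            b = 0
            for x in raw[i:i + ratio]:
                b ^= x
            out.append(b)
        return bytes(out)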
Subject: Re: Dynamic Transposition Revisited (long)
Date: Wed, 24 Jan 2001 06:37:55 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3a6e7838.11212399@news.io.com>
References: <k0sb6.111177$JT5.4022661@news20.bellglobal.com>
Newsgroups: sci.crypt
Lines: 81

On Wed, 24 Jan 2001 03:31:28 GMT, in <k0sb6.111177$JT5.4022661@news20.bellglobal.com>, in sci.crypt "Matt Timmermans" <matt@timmermans.nospam-remove.org> wrote:

>"Terry Ritter" <ritter@io.com> wrote in message
>news:3a6d3eb8.1306509@news.io.com...
>> Assume the supposedly "random" pad you have is in fact predictable.
>> Surely you will not argue that an "OTP" with a completely predictable
>> sequence is secure.  But wait!  Isn't the OTP *proven* secure?
>
>The pad is defined to be random.  If it's not random, then you're not using
>OTP.  Whatever cipher you _are_ using may very well be insecure.

That's the part that is too cute for me: You can say you have an OTP, so users think they have "mathematically proven" security, and then, later, if we find out that the pad really is predictable, you announce that the damage really was not due to the OTP after all.  So the damage is not accounted to the OTP, despite the fact that it failed in use.  That seems like slimy cryptography to me.  The whole point of having a name for a cipher is to be able to associate a design with an outcome, not wiggle out of responsibility.

It may be that absolutely random pads do exist.  But since we cannot measure the predictability of those, if we cannot somehow prove they are not predictable, we can only assert security, not prove it.

>> For proven OTP security it is not sufficient simply to use the
>> sequence only once.  It is *also* necessary for the sequence to be
>> "random," or more explicitly, "unpredictable."
>
>Yes.  That's how OTP is defined.
>
>> The problem with this requirement is that we cannot measure an
>> arbitrary sequence and thus know how unpredictable it is.
>
>You also cannot measure a Rijndael key to find out whether or not it is
>known to other parties -- you don't use arbitrary sequences as keys.

We are discussing a security proof.  If you want a security proof, you need to prove the assumptions.  If OTP assumes a random pad, then you need to be able to prove that pad is random.  In reality, we cannot measure such a thing, and probably cannot prove it.

>> Nor can we
>> assemble all sorts of measurements of radioactive events or other
>> random sources with the guaranteed assurance of unpredictability that
>> proof requires.  That does not mean that most sequences we develop
>> will not be very strong; what it means is that we cannot prove it.
>
>Sure you can.  You can collect a large amount of biased random data by
>sampling natural phenomena, use IDA to divide it into many small chunks,
>take one, and throw the rest away.  Repeat until you have enough.  The only
>difficult part is making sure you underestimate the entropy of your
>source -- and even that's not too hard.  With certain types of natural
>phenomena, you can even base a proof of unpredictability on the uncertainty
>principle.

It may be possible to have equipment which has a pretty decent proof of strength.  In reality, quantum events are fairly small, and sensing them typically requires some electronics.  That means the electronics can go bad, or oscillate, or have a noisy connection, or work intermittently, or whatever.  Your task is to prove absolutely beyond any shadow of a doubt that nothing like that happened. 
I am very familiar with electrical noise, and I have promoted sampling a source which has a known non-flat distribution. Then, if we get the expected distribution, we may leap to the conclusion that everything has worked OK. However, since there can be no test for abstract randomness, it is extremely difficult to make the leap to cryptographic levels of assurance. Any number of subtle things might happen which might not be detected, and yet would still influence the outcome. We can be very sure, but is that really mathematical proof? --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
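The health check Ritter describes -- comparing samples against the source's known, non-flat distribution -- can be sketched as a chi-square statistic.  The bin layout and threshold are assumptions here, and, as he notes, a pass supports rather than proves correct operation:

    # Chi-square goodness-of-fit against the source's known,
    # non-flat expected counts; flag the source if the statistic
    # is too large for the distribution's degrees of freedom.
    def chi_square(observed: dict, expected: dict) -> float:
        return sum((observed.get(k, 0) - e) ** 2 / e
                   for k, e in expected.items())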
Subject: Re: Dynamic Transposition Revisited (long)
Date: Thu, 25 Jan 2001 04:10:52 GMT
From: "Matt Timmermans" <matt@timmermans.nospam-remove.org>
Message-ID: <gHNb6.5111$M63.371652@news20.bellglobal.com>
References: <3a6e7838.11212399@news.io.com>
Newsgroups: sci.crypt
Lines: 55

"Terry Ritter" <ritter@io.com> wrote in message
news:3a6e7838.11212399@news.io.com...
> That's the part that is too cute for me: You can say you have an OTP,
> so users think they have "mathematically proven" security, and then,
> later, if we find out that the pad really is predictable, you announce
> that the damage really was not due to the OTP after all.

It's like saying you have Rijndael, but you left out the S-boxes.

> We are discussing a security proof.  If you want a security proof, you
> need to prove the assumptions.  If OTP assumes a random pad, then you
> need to be able to prove that pad is random.  In reality, we cannot
> measure such a thing, and probably cannot prove it.

You don't need to prove it to anyone but yourself, so you can base the proof on the way the key was generated, rather than the statistical properties of the key itself.  Note -- the same thing is true with any cipher.  If you use some black-box program to generate the key, you just have to trust that the key is unpredictable.  If the key is predictable, brute-force attacks might suddenly become quite feasible.  We have seen examples of this as well, but you don't use those examples to say that the cipher is insecure.

> It may be possible to have equipment which has a pretty decent proof
> of strength.  In reality, quantum events are fairly small, and sensing
> them typically requires some electronics.  That means the electronics
> can go bad, or oscillate, or have a noisy connection, or work
> intermittently, or whatever.  Your task is to prove absolutely beyond
> any shadow of a doubt that nothing like that happened.
>
> I am very familiar with electrical noise, and I have promoted sampling
> a source which has a known non-flat distribution.  Then, if we get the
> expected distribution, we may leap to the conclusion that everything
> has worked OK.  However, since there can be no test for abstract
> randomness, it is extremely difficult to make the leap to
> cryptographic levels of assurance.  Any number of subtle things might
> happen which might not be detected, and yet would still influence the
> outcome.  We can be very sure, but is that really mathematical proof?

In all likelihood, that would be a very practical generator for OTP keys, and it would be reasonably easy to purposely underestimate the amount of entropy you're getting.  If you want proof, though, you should do something different.  For instance:

Generate a photon, and polarize it vertically.  Then measure its polarization at 45 degrees from the vertical.  Repeat.

By measuring the transparency of your optics, the sensitivity of your photomultipliers, and the orientation of your polarizers, you can place a very confident lower bound on the rate of real randomness.
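A hedged version of the arithmetic behind Timmermans's claim: if measurement of the apparatus bounds the bias of any one trial's outcome at eps, each trial still carries -log2(1/2 + eps) bits of min-entropy.  The eps value below is illustrative, not a measured figure:

    import math

    # Min-entropy per polarization trial when the probability of
    # either outcome is bounded by 1/2 + eps (eps from apparatus
    # measurement, assumed here).
    def min_entropy_per_trial(eps: float) -> float:
        return -math.log2(0.5 + eps)

    print(min_entropy_per_trial(0.01))   # about 0.97 bits per trial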
Subject: Re: Dynamic Transposition Revisited (long) Date: Thu, 25 Jan 2001 06:25:05 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6fc682.5783390@news.io.com> References: <gHNb6.5111$M63.371652@news20.bellglobal.com> Newsgroups: sci.crypt Lines: 21 On Thu, 25 Jan 2001 04:10:52 GMT, in <gHNb6.5111$M63.371652@news20.bellglobal.com>, in sci.crypt "Matt Timmermans" <matt@timmermans.nospam-remove.org> wrote: >[...] >Generate a photon, and polarize it vertically. Then measure its >polarization at 45 degrees from the vertical. Repeat. > >By measuring the transparency of your optics, the sensitivity of your >photomultipliers, and the orientation of your polarizers, you can place a >very confident lower bound on the rate of real randomness. In the language of statistics, a confidence level is some *probability* of being correct -- which also implies some probability of error. That is never absolute certainty, and so is never proof. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 20:40:00 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a71e05b.25794019@news.powersurfr.com> References: <94sl80$alu$1@nnrp1.deja.com> <gHNb6.5111$M63.371652@news20.bellglobal.com> Newsgroups: sci.crypt Lines: 18 On Fri, 26 Jan 2001 20:04:24 GMT, AllanW <allan_w@my-deja.com> wrote, in part: >I think I missed one of my classes when I learned programming. >Could you please show me the code corresponding to "generate a >photon?" heh I think we can safely assume that special hardware is required for this particular method of producing randomness. In fact, that's true of *any* physical means of generating random numbers, even if you use what is built in to the support chips of the Pentium III. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long)
Date: Sat, 27 Jan 2001 11:06:30 GMT
From: Bryan Olson <bryanolson@my-deja.com>
Message-ID: <94ua3l$jqb$1@nnrp1.deja.com>
References: <gHNb6.5111$M63.371652@news20.bellglobal.com>
Newsgroups: sci.crypt
Lines: 40

Matt Timmermans wrote:
> "Terry Ritter" wrote :
> > That's the part that is too cute for me: You can say you have
> > an OTP, so users think they have "mathematically proven"
> > security, and then, later, if we find out that the pad really
> > is predictable, you announce that the damage really was not
> > due to the OTP after all.

Ritter simply mis-states the result there.

> It's like saying you have Rijndael, but you left out the S-boxes.
>
> > We are discussing a security proof.  If you want a security
> > proof, you need to prove the assumptions.  If OTP assumes a
> > random pad, then you need to be able to prove that pad is
> > random.  In reality, we cannot measure such a thing, and
> > probably cannot prove it.
>
> You don't need to prove it to anyone but yourself, so you can
> base the proof on the way the key was generated, rather than
> the statistical properties of the key itself.  Note -- the
> same thing is true with any cipher.  If you use some black-box
> program to generate the key, you just have to trust that the
> key is unpredictable.  If the key is predictable, brute-force
> attacks might suddenly become quite feasible.  We have seen
> examples of this as well, but you don't use those examples to
> say that the cipher is insecure.

A good point.  One of the advantages of a proof is that it shows what conditions we must meet.  Low-entropy keys break every cryptosystem there is, but this objection is most often applied to the one-time pad.  That's because the OTP theorem makes the "given" of the proof explicit.

--Bryan

Sent via Deja.com
http://www.deja.com/
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 12:27:59 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6d7729.335617@news.powersurfr.com> References: <3a6d0765.2630739@news.io.com> Newsgroups: sci.crypt Lines: 39 On Tue, 23 Jan 2001 04:24:27 GMT, ritter@io.com (Terry Ritter) wrote, in part: >I frankly have no idea what (A / B)! could possibly be intended to >represent. Perhaps the equation should have been (A/B)**S, where S is >the number of blocks, the length of a sequence of permutations. Well, since you criticized DES because it doesn't provide all (2^64)! possible mappings of 64-bit blocks to 64-bit blocks, whereas you claim that Dynamic Transposition is, in a sense, complete because it provides all n! transpositions of an n-bit block, I am trying to point out that you're comparing apples and oranges. For both one transposition such as used in Dynamic Transposition, and two rounds of DES, we get: number of possible blocks < number of mappings provided by cipher < number of mappings of the set of blocks to itself For two rounds of DES, that is: 2^64 < 2^96 < (2^64)! For arbitrary bit transposition applied to an n-bit balanced block, we get n!/((n/2)!(n/2)!) < n! < (n!/((n/2)!(n/2)!))! Now, you might react and say that I'm unfairly criticizing Dynamic Transposition because it is a _transposition_, but I think it should be clear that I am looking at this at the right level of description: the cipher is ultimately a mapping from the set of input blocks to the set of output blocks. And transposition of bits is one kind of restricted mapping between those two sets, just as XOR is another restricted mapping, and substitution on small sub-blocks is a restricted mapping. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
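Savard's inequality is easy to check numerically for a small block; here n = 8, so there are C(8,4) = 70 balanced blocks, 8! = 40320 bit transpositions, and 70! mappings of the balanced blocks to themselves:

    from math import comb, factorial

    n = 8
    balanced = comb(n, n // 2)         # n!/((n/2)!(n/2)!) = 70
    transpositions = factorial(n)      # n! = 40320
    mappings = factorial(balanced)     # 70!, roughly 1.2e100
    print(balanced < transpositions < mappings)   # True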
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 20:14:04 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6de5da.7486629@news.io.com> References: <3a6d7729.335617@news.powersurfr.com> Newsgroups: sci.crypt Lines: 95 On Tue, 23 Jan 2001 12:27:59 GMT, in <3a6d7729.335617@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Tue, 23 Jan 2001 04:24:27 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>I frankly have no idea what (A / B)! could possibly be intended to >>represent. Perhaps the equation should have been (A/B)**S, where S is >>the number of blocks, the length of a sequence of permutations. > >Well, since you criticized DES because it doesn't provide all (2^64)! >possible mappings of 64-bit blocks to 64-bit blocks, whereas you claim >that Dynamic Transposition is, in a sense, complete because it >provides all n! transpositions of an n-bit block, I am trying to point >out that you're comparing apples and oranges. First of all, these two ciphers are so fundamentally different that no analogy is likely to be very good. The statement that conventional block ciphers do not implement more than a tiny fraction of the natural keyspace of the (emulated) huge substitution is correct. The consequences of this have been described many times, in many ways, by many people. In essence, just a few known-plaintext blocks should suffice to identify the actual key out of all possible keys. The open literature does not show how to do that, but obviously the possibility of such an attack is inherent in the use of this conventional block cipher structure. The idea that Dynamic Transposition does or should produce a "permutation of permutations," such that no permutation can follow itself, is just wrong. More to the point, it is not relevant. The issue is how -- from the outside of the cipher -- anyone could know what the sequence is, or even what any particular permutation was. Even though there is a single correct permutation which took the plaintext to ciphertext, many, many different permutations could have done that same thing. Since a particular permutation cannot be identified, probably the most accurate description would be to accumulate a string of permutation-subsets. These will be huge, however, so the uncertainty would seem to rise exponentially. And then where does one go? These are different types of ciphering. >For both one transposition such as used in Dynamic Transposition, and >two rounds of DES, we get: > >number of possible blocks < number of mappings provided by cipher < >number of mappings of the set of blocks to itself > >For two rounds of DES, that is: > >2^64 < 2^96 < (2^64)! > >For arbitrary bit transposition applied to an n-bit balanced block, we >get > >n!/((n/2)!(n/2)!) < n! < (n!/((n/2)!(n/2)!))! > >Now, you might react and say that I'm unfairly criticizing Dynamic >Transposition because it is a _transposition_, but I think it should >be clear that I am looking at this at the right level of description: >the cipher is ultimately a mapping from the set of input blocks to the >set of output blocks. And that is n! >And transposition of bits is one kind of >restricted mapping between those two sets, just as XOR is another >restricted mapping, Those are fundamentally different: When we have an additive combiner (e.g., XOR), and known-plaintext, the confusion value is identified precisely. 
When we have a Dynamic Transposition combiner, and known-plaintext, the ciphering transposition is *not* identified precisely, but only as a possibility in a huge set of "false" permutations. >and substitution on small sub-blocks is a >restricted mapping. You mean, that if something is "a restricted mapping," it is, in some sense, similar to anything else which is "a restricted mapping?" That doesn't sound very meaningful to me. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
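The asymmetry Ritter points to can be shown on a toy 4-bit balanced block: known plaintext determines an XOR pad uniquely, while several distinct bit permutations explain the same plaintext/ciphertext pair (in general (n/2)! * (n/2)! of them):

    from itertools import permutations

    pt, ct = (0, 0, 1, 1), (1, 0, 1, 0)              # toy balanced blocks
    xor_pad = tuple(p ^ c for p, c in zip(pt, ct))   # unique: (1, 0, 0, 1)
    candidates = [p for p in permutations(range(4))
                  if tuple(pt[i] for i in p) == ct]
    print(xor_pad, len(candidates))   # one XOR pad, 4 candidate permutations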
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 12:36:33 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6d79d5.1019820@news.powersurfr.com> References: <3a6d0765.2630739@news.io.com> Newsgroups: sci.crypt Lines: 26 On Tue, 23 Jan 2001 04:24:27 GMT, ritter@io.com (Terry Ritter) wrote, in part: >I'm sorry, but even if you were right "_here_," you would still not be >right about the bandwidth "problem," and you have been trumpeting that >for a sequence of responses. You are correct that a 2.5% bandwidth cost isn't exactly a crushing burden of expense. The reason I say it's a problem has more to do with current fashions in cryptography; it has to do with the same reason that block ciphers are so popular instead of stream ciphers, especially stream ciphers with autokey properties (such as your Dynamic Transposition). Essentially, without a _compelling_ reason, people are unwilling to make any sacrifice of convenience for encryption, and so they insist on properties like good error recovery, and zero expansion of the plaintext (although they allow a fixed, _initial_, overhead, but I fear only because they have been told that that is *essential* to avoid the recognition of identical messages). I regret that I have failed to make that sufficiently clear. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 19:13:48 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6dd745.218251@news.powersurfr.com> References: <3a6c2bff.1045135@news.powersurfr.com> Newsgroups: sci.crypt Lines: 20 On Mon, 22 Jan 2001 13:05:04 GMT, jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote, in part: >And with substitution, unlike Dynamic Transposition, instead of being >stuck with one set of n! substitutions, one can use steps of different >kinds so that instead of just having, say, all 2^n possible mappings >obtained by XORing an n-bit block with an n-bit key, one can explore >the space of (2^n)! permutations more deeply - depending on how much >key we use, and how complicated a structure we give the cipher. Well, one can supplement Dynamic Transposition to increase the exploration of the mapping space as well. In addition to transposing the bits of a balanced block, one could also subject them to substitutions that preserve the number of 1 bits: i.e., one could have an 8-bit wide S-box that mapped 00000000 to itself, that scrambled the 8 combinations with a single 1 bit among themselves, that scrambled the 28 combinations with two 1 bits among themselves, and so on. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
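Savard's balance-preserving substitution is straightforward to construct: shuffle the values within each Hamming-weight class, so the box never changes a block's 1-bit count.  The keyed seeding below is an assumption; the post does not specify how such a box would be keyed:

    import random

    # Build an 8-bit S-box that permutes values only within their
    # Hamming-weight class, preserving bit balance through the box.
    def balanced_sbox(seed: int) -> list:
        rng = random.Random(seed)            # placeholder keying
        classes = {}
        for v in range(256):
            classes.setdefault(bin(v).count("1"), []).append(v)
        sbox = [0] * 256
        for members in classes.values():
            shuffled = members[:]
            rng.shuffle(shuffled)
            for src, dst in zip(members, shuffled):
                sbox[src] = dst
        return sbox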
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 20:19:18 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6de738.7836635@news.io.com> References: <3a6dd745.218251@news.powersurfr.com> Newsgroups: sci.crypt Lines: 32 On Tue, 23 Jan 2001 19:13:48 GMT, in <3a6dd745.218251@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Mon, 22 Jan 2001 13:05:04 GMT, jsavard@ecn.ab.SBLOK.ca.nowhere >(John Savard) wrote, in part: > >>And with substitution, unlike Dynamic Transposition, instead of being >>stuck with one set of n! substitutions, one can use steps of different >>kinds so that instead of just having, say, all 2^n possible mappings >>obtained by XORing an n-bit block with an n-bit key, one can explore >>the space of (2^n)! permutations more deeply - depending on how much >>key we use, and how complicated a structure we give the cipher. > >Well, one can supplement Dynamic Transposition to increase the >exploration of the mapping space as well. In addition to transposing >the bits of a balanced block, one could also subject them to >substitutions that preserve the number of 1 bits: i.e., one could have >an 8-bit wide S-box that mapped 00000000 to itself, that scrambled the >8 combinations with a single 1 bit among themselves, that scrambled >the 28 combinations with two 1 bits among themselves, and so on. I think you should first plainly describe what you see as the weakness, before rushing on to try to fix it. No such operation is needed for strength in Dynamic Transposition. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long)
Date: Tue, 23 Jan 2001 20:52:52 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3a6dec9a.5680867@news.powersurfr.com>
References: <3a6de738.7836635@news.io.com>
Newsgroups: sci.crypt
Lines: 47

On Tue, 23 Jan 2001 20:19:18 GMT, ritter@io.com (Terry Ritter) wrote, in part:

>I think you should first plainly describe what you see as the
>weakness, before rushing on to try to fix it.

I will do so.  In fact, however, I already did so, I thought, in another post to which you have just replied, but it looks as though my point didn't get across.

The "weakness" is:

The set of permutations of n bits

considered as a subset of the set of one-to-one and onto mappings from the set of bit-balanced blocks of n bits to itself

is a subgroup of those mappings, and therefore is not a generator of the entire set of mappings.

My "fix", therefore, is to propose another operation that can be applied to a bit-balanced block to yield a bit-balanced block, so as to allow a cipher acting on these blocks to produce a wider assortment of mappings.

Essentially, I quite agree that transposition as applied to bit-balanced blocks is *better* than XOR.  But since there already are substitutions that are considerably better than XOR, the fact that DT is a fixed amount better than XOR is not terribly attractive, considering that substitutions can be made as complex as desired.

Essentially, therefore, I see DT as similar to, say, DES with subkeys that are generated by a stream cipher.  Yes, DES, even with arbitrary subkeys, can't produce all (2^64)! mappings of blocks to blocks; but transposing bits can't produce all mappings of the entire set of bit-balanced blocks to itself _either_.

So my point is: DT is not as bad as XOR, but it is not as good as what people can, and do, do with substitution.  Although saying they really do make such use of substitution is perhaps an exaggeration; except for some of your advanced designs, there isn't that much out there which approaches my "large-key brainstorm" - which, I think, in the substitution world, strongly resembles what Dynamic Transposition achieves.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 22:12:41 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6e0193.1086054@news.io.com> References: <3a6dec9a.5680867@news.powersurfr.com> Newsgroups: sci.crypt Lines: 95 On Tue, 23 Jan 2001 20:52:52 GMT, in <3a6dec9a.5680867@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Tue, 23 Jan 2001 20:19:18 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>I think you should first plainly describe what you see as the >>weakness, before rushing on to try to fix it. > >I will do so. In fact, however, I already did so, I thought, in >another post to which you have just replied, but it looks as though my >point didn't get across. > >The "weakness" is: > >The set of permutations of n bits > >considered as a subset of the set of one-to-one and onto mappings from >the set of bit-balanced blocks of n bits to itself > >is a subgroup of those mappings, and therefore is not a generator of >the entire set of mappings. All of which is not a problem, because the actual permutation which encrypted the data is hidden in that clump. There is no information to distinguish the correct one. You might as well say there is a problem because one could read the RNG state and thus know the keying of the cipher. We assume that state is kept secure. There is no way to distinguish the correct permutation from the huge group which can generate the same transformation. And it is only the correct permutation which leads back (eventually) to the shuffling sequence. A weakness which cannot be exploited is no weakness at all. >My "fix", therefore, is to propose another operation that can be >applied to a bit-balanced block to yield a bit-balanced block, so as >to allow a cipher acting on these blocks to produce a wider assortment >of mappings. > >Essentially, I quite agree that transposition as applied to >bit-balanced blocks is *better* than XOR. But since there already are >substitutions that are considerably better than XOR, the fact that DT >is a fixed amount better than XOR is not terribly attractive, >considering that substitutions can be made as complex as desired. Substitution complexity is limited by the size of the substitution, and the size of the substitution is limited to the size of the table one wants to use. That is why conventional block ciphers can only *emulate* a huge substitution. >Essentially, therefore, I see DT as similar to, say, DES with subkeys >that are generated by a stream cipher. Yes, DES, even with arbitrary >subkeys, can't produce all (2^64)! mappings of blocks to blocks; but >transposing bits can't produce all mappings of the entire set of >bit-balanced blocks to itself _either_. All of which is of no significance whatsoever. >So my point is: DT is not as bad as XOR, but it is not as good as what >people can, and do, do with substitution. Well, that would seem to be hard to dispute, since I know no limit to what "can be done." However, there is ample reason to believe that the Dynamic Transposition cipher, as described, is substantially stronger than any conventional block cipher, for example. Moreover, this strength is not dependent upon some complex internal structure of small substitutions which essentially has no basis in arguable strength. 
>Although perhaps saying they >really do make such use of substitution is perhaps an exaggeration; >except for some of your advanced designs, there isn't that much out >there which approaches my "large-key brainstorm" - which, I think, in >the substitution world, strongly resembles what Dynamic Transposition >achieves. I can see no basis for such belief. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 23:37:26 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6e1394.1718960@news.powersurfr.com> References: <3a6e0193.1086054@news.io.com> Newsgroups: sci.crypt Lines: 47 On Tue, 23 Jan 2001 22:12:41 GMT, ritter@io.com (Terry Ritter) wrote, in part: >All of which is not a problem, because the actual permutation which >encrypted the data is hidden in that clump. There is no information >to distinguish the correct one. >You might as well say there is a problem because one could read the >RNG state and thus know the keying of the cipher. We assume that >state is kept secure. >There is no way to distinguish the correct permutation from the huge >group which can generate the same transformation. And it is only the >correct permutation which leads back (eventually) to the shuffling >sequence. >A weakness which cannot be exploited is no weakness at all. But in what way does that differ from me saying that - if I propose a cipher which consists of four rounds of DES, with subkeys supplied by some form of stream cipher generator - that given a corresponding plaintext and ciphertext after two rounds, there are 2^32 different subkey pairs which could have produced that particular output from the given input? (because of the expansion permutation and the way the S-boxes in DES work) If _you_ are going to complain that DES isn't a good starting point to work from, because it can't produce all (2^64)! mappings between inputs and outputs, so it's not a "true general substitution", then I don't see why I shouldn't point out that Dynamic Transposition is also not a true general substitution of bit-balanced blocks. Ah, but it's a transposition, and it is a "true general transposition", you seem to have said. I'm pointing out that this is something that really doesn't matter; if it's a limitation for one case, it's a limitation for the other case as well, that the total overall relation of possible inputs to possible outputs is limited. Essentially, the fact that "transposition" is used as the name for a class of ciphers considered distinct from substitution - while "XOR", or "monalphabetic substitution" are considered forms of substitution - is leading you into a fallacy based on regarding accidental historical linguistic habits as somehow fundamental. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Wed, 24 Jan 2001 06:38:10 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6e7848.11228154@news.io.com> References: <3a6e1394.1718960@news.powersurfr.com> Newsgroups: sci.crypt Lines: 106 On Tue, 23 Jan 2001 23:37:26 GMT, in <3a6e1394.1718960@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Tue, 23 Jan 2001 22:12:41 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>All of which is not a problem, because the actual permutation which >>encrypted the data is hidden in that clump. There is no information >>to distinguish the correct one. > >>You might as well say there is a problem because one could read the >>RNG state and thus know the keying of the cipher. We assume that >>state is kept secure. > >>There is no way to distinguish the correct permutation from the huge >>group which can generate the same transformation. And it is only the >>correct permutation which leads back (eventually) to the shuffling >>sequence. > >>A weakness which cannot be exploited is no weakness at all. > >But in what way does that differ from me saying that - if I propose a >cipher which consists of four rounds of DES, with subkeys supplied by >some form of stream cipher generator - that given a corresponding >plaintext and ciphertext after two rounds, I am not psychic. I can't address whatever vision of a design you have, unless you describe it in detail. If you want me to address how these ciphers differ, you will simply have to describe your design in at least as much detail as I did in my "Revisited" article. Only then can one hope to draw out distinctions and conclusions. >there are 2^32 different subkey pairs which could have produced that >particular output from the given input? (because of the expansion >permutation and the way the S-boxes in DES work) > >If _you_ are going to complain that DES isn't a good starting point to >work from, because it can't produce all (2^64)! mappings between >inputs and outputs, so it's not a "true general substitution", then I >don't see why I shouldn't point out that Dynamic Transposition is also >not a true general substitution of bit-balanced blocks. Because the comparison is incorrect. The ciphering part of DES is an emulated huge substitution. Yet keying can create only a tiny fraction of the possible substitutions of that size. The ciphering part of Dynamic Transposition is actual permutation. Any possible permutation can be selected by the keying. The distinction you have been pointing out is at the next level, beyond the permutation, beyond the ciphering, and so beyond the direct comparison with DES. >Ah, but it's a transposition, and it is a "true general >transposition", you seem to have said. I'm pointing out that this is >something that really doesn't matter; It does matter. To be comparable, you would need to describe a "Dynamic DES" that not only changed keying on a block-by-block basis, but also allowed keying to select among every possible huge substitution. And I don't think we know how to do that. With that Dynamic DES design, it would no more be possible for Dynamic DES to select every possible sequence of substitutions than for Dynamic Transposition to select every possible sequence of permutations. But even with that Dynamic DES design, known plaintext would completely reveal which transformation was used for that block. But with Dynamic Transposition, known plaintext would not reveal even one bit movement, let alone the entire permutation. 
>if it's a limitation for one >case, it's a limitation for the other case as well, that the total >overall relation of possible inputs to possible outputs is limited. I have no problem making a fair comparison, but you insist on fastening on a "weakness" which is already two steps away from DES, and beyond anything I can imagine creating from DES. I don't think we know how to emulate any possible permutation with flat probability. In contrast, we do know how to create any possible permutation with flat probability. That is part of the advantage. >Essentially, the fact that "transposition" is used as the name for a >class of ciphers considered distinct from substitution - while "XOR", >or "monalphabetic substitution" are considered forms of substitution - >is leading you into a fallacy based on regarding accidental historical >linguistic habits as somehow fundamental. The argument is not semantic. If you think it is, you have missed it. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long)
Date: Wed, 24 Jan 2001 10:30:39 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3a6ea4e7.16898757@news.powersurfr.com>
References: <3a6e7848.11228154@news.io.com>
Newsgroups: sci.crypt
Lines: 70

On Wed, 24 Jan 2001 06:38:10 GMT, ritter@io.com (Terry Ritter) wrote, in part:

>The argument is not semantic.  If you think it is, you have missed it.

Since I happen to think that Dynamic Transposition _is_ a good idea, but just not quite as great as claimed, and (because of the extremely tiny bandwidth problem) facing certain obstacles to acceptance because of the same fashions that led to the AES being a block cipher, I'm reluctant to belabor this point more than I already have.

You are right that my "Dynamic DES" idea is _not_ intended to emulate every possible huge substitution.  But Dynamic Transposition doesn't perform every possible huge _substitution_ of balanced blocks to balanced blocks.  It performs every possible bit transposition.  This is better than performing every possible XOR, because any possible individual substitution of block A to block B can be performed in several ways.

But that is also true of the transformation performed by two rounds of DES.  It too will transform block A to block B in several _different_ ways that will transform block C to D, D', D'', and so on.  So the 96 bits that make up the two subkeys used can't be determined uniquely.

Thus, the same arguments - the same precise arguments - that you make for Dynamic Transposition with two stages can be made for Dynamic 4-round DES.

One difference, of course, is that I'm working with 64-bit blocks instead of 4,096-bit blocks, and that will improve the strength of DT.  And DT is simple and elegant.  It can't be further simplified to remove its desirable features as easily as "Dynamic DES".

But you appear to be claiming that the availability of every possible bit transposition provides some important fundamental property, and that it challenges the OTP.  These are indeed the kinds of claims that, if true, might well induce people to accept a 2.5% bandwidth penalty.  But I haven't seen these claims justified in a way that will win general acknowledgement.  Worse than that, I still believe these claims to be mistaken, and I feel that others will do so as well - to the extent they even deign to investigate.

As a result, while I still find DT to be interesting and useful - particularly when combined with more conventional encryption by the use of an _invertible_ conversion of a subset of all possible n-bit binary blocks to slightly larger balanced blocks and back again - my view of its fundamental significance appears to be considerably different from yours.

It is because I expect "my view" to be shared - or even replaced with a less sympathetic view, which does not include interest in what I feel are the interesting features of Dynamic Transposition (the use of transposition as a secure combiner, the use of an algebra with significantly different properties than commonly used) - by the bulk of the cryptographic community, specifically by its more influential members, that I raise these points, in the hope that they will be resolved.

It is precisely because I believe that your contributions are too valuable to be obscured by a settled prejudice against you as "yet another snake-oil salesman" (a prejudice I believe has been allowed to arise) that I have pursued so intensely and in such detail the issue of claims which appear to me to be insupportable. 
John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long)
Date: Wed, 24 Jan 2001 13:53:13 +0100
From: Mok-Kong Shen <mok-kong.shen@t-online.de>
Message-ID: <3A6ED039.57B75006@t-online.de>
References: <3a6ea4e7.16898757@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 29

John Savard wrote:
>
> ritter@io.com (Terry Ritter) wrote, in part:
[snip]
> But you appear to be claiming that the availability of every possible
> bit transposition provides some important fundamental property, and
> that it challenges the OTP.  These are indeed the kinds of claims that,
> if true, might well induce people to accept a 2.5% bandwidth penalty.
> But I haven't seen these claims justified in a way that will win
> general acknowledgement.  Worse than that, I still believe these claims
> to be mistaken, and I feel that others will do so as well - to the
> extent they even deign to investigate.

Elsewhere I said that there cannot exist any magic that turns something predictable (the best PRNG is not 'absolutely' unpredictable) into something (absolutely) unpredictable.  This suffices to convince one that, while DT might be extremely good, so as to be practically very, very secure, it, like all other 'real' algorithms, can't attain the goal of 'perfect' security, which is neither possible nor truly necessary to attain (at any price) in practice.

M. K. Shen
---------------------------------
http://home.t-online.de/home/mok-kong.shen
Subject: Re: Dynamic Transposition Revisited (long)
Date: Wed, 24 Jan 2001 16:25:02 GMT
From: Benjamin Goldberg <goldbb2@earthlink.net>
Message-ID: <3A6F01E2.80604AFC@earthlink.net>
References: <3A6ED039.57B75006@t-online.de>
Newsgroups: sci.crypt
Lines: 37

Mok-Kong Shen wrote:
>
> John Savard wrote:
> >
> > ritter@io.com (Terry Ritter) wrote, in part:
> [snip]
>
> > But you appear to be claiming that the availability of every
> > possible bit transposition provides some important fundamental
> > property, and that it challenges the OTP.  These are indeed the kinds
> > of claims that, if true, might well induce people to accept a 2.5%
> > bandwidth penalty.
> > But I haven't seen these claims justified in a way that will win
> > general acknowledgement.  Worse than that, I still believe these
> > claims to be mistaken, and I feel that others will do so as well -
> > to the extent they even deign to investigate.
>
> Elsewhere I said that there cannot exist any magic that
> turns something predictable (the best PRNG is not 'absolutely'
> unpredictable) into something (absolutely) unpredictable.
> This suffices to convince one that, while DT might be
> extremely good, so as to be practically very, very secure, it,
> like all other 'real' algorithms, can't attain the goal of
> 'perfect' security, which is neither possible nor truly
> necessary to attain (at any price) in practice.

This is true of any system with a finite-sized key.  Since we can't have "perfect" security, perhaps we should define some sort of "as close to perfect as practically possible" security -- specifically, that only by brute-forcing the key can the system be broken.

Can DT satisfy this goal?

--
Most scientific innovations do not begin with "Eureka!"  They begin with "That's odd.  I wonder why that happened?"
Subject: Re: Dynamic Transposition Revisited (long)
Date: Wed, 24 Jan 2001 18:19:54 +0100
From: Mok-Kong Shen <mok-kong.shen@t-online.de>
Message-ID: <3A6F0EBA.F0BA0778@t-online.de>
References: <3A6F01E2.80604AFC@earthlink.net>
Newsgroups: sci.crypt
Lines: 55

Benjamin Goldberg wrote:
>
> Mok-Kong Shen wrote:
> >
> > John Savard wrote:
> > >
> > > ritter@io.com (Terry Ritter) wrote, in part:
> > [snip]
> >
> > > But you appear to be claiming that the availability of every
> > > possible bit transposition provides some important fundamental
> > > property, and that it challenges the OTP.  These are indeed the kinds
> > > of claims that, if true, might well induce people to accept a 2.5%
> > > bandwidth penalty.
> > > But I haven't seen these claims justified in a way that will win
> > > general acknowledgement.  Worse than that, I still believe these
> > > claims to be mistaken, and I feel that others will do so as well -
> > > to the extent they even deign to investigate.
> >
> > Elsewhere I said that there cannot exist any magic that
> > turns something predictable (the best PRNG is not 'absolutely'
> > unpredictable) into something (absolutely) unpredictable.
> > This suffices to convince one that, while DT might be
> > extremely good, so as to be practically very, very secure, it,
> > like all other 'real' algorithms, can't attain the goal of
> > 'perfect' security, which is neither possible nor truly
> > necessary to attain (at any price) in practice.
>
> This is true of any system with a finite-sized key.  Since we can't have
> "perfect" security, perhaps we should define some sort of "as close to
> perfect as practically possible" security -- specifically, that only by
> brute-forcing the key can the system be broken.
>
> Can DT satisfy this goal?

With my very limited knowledge I would say that DT is an interesting approach, especially in its property of covering up the frequency distribution of the original bit sequence, and as such could well be a viable, valuable component for combination with other components, e.g. those using substitutions, to further complicate the job of the opponent to a degree of 'practical' extreme infeasibility.  (It has been my personal opinion that it's a good idea to have a PRNG-controlled encryption employing techniques of both stream and block ciphers.)  The design of such systems is in my humble view akin to that of drinks (cf. Coca Cola), and the optimum is likely to depend on many factors (that are partly external at least).

M. K. Shen
---------------------------
http://home.t-online.de/home/mok-kong.shen
Subject: Re: Dynamic Transposition Revisited (long)
Date: Wed, 24 Jan 2001 15:00:42 GMT
From: Benjamin Goldberg <goldbb2@earthlink.net>
Message-ID: <3A6E4A75.EB20C059@earthlink.net>
References: <3a6e1394.1718960@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 39

John Savard wrote:
[snip]
> I
> don't see why I shouldn't point out that Dynamic Transposition is also
> not a true general substitution of bit-balanced blocks.
>
> Ah, but it's a transposition, and it is a "true general
> transposition", you seem to have said.

Might I ask, what do you mean by "true general substitution" and "true general transposition?"

With sufficiently large state in the PRNG, one might expect that DT is capable of producing any of the N! possible permutations as its first output.  Is this what is meant by "true general transposition?"

Now suppose that we use the PRNG to generate round keys for some well-known block cipher [2-round DES, as you suggested] with a similarly large state.  Perhaps one might hope that the first output of this can be any of the (2^64)! different permutations, as with DT, but it is not.  I suppose that this could probably be said to NOT be a "true general substitution," whatever that might be.  What would be a "true general substitution," then?

Another way of looking at things:

DT creates a permutation of N items to N items.
DES creates a permutation of 2^64 items to 2^64 items.

It is easy to pick N and the PRNG so that DT produces any of the N! permutations for its first output.  It is [nearly?] impossible to pick a PRNG [and some number of rounds] to use with DES to produce any of the (2^64)! permutations as its first output.

--
Most scientific innovations do not begin with "Eureka!"  They begin with "That's odd.  I wonder why that happened?"
Subject: Re: Dynamic Transposition Revisited (long)
Date: Wed, 24 Jan 2001 16:55:14 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3a6f07e3.8670271@news.powersurfr.com>
References: <3A6E4A75.EB20C059@earthlink.net>
Newsgroups: sci.crypt
Lines: 33

On Wed, 24 Jan 2001 15:00:42 GMT, Benjamin Goldberg <goldbb2@earthlink.net> wrote, in part:

>Another way of looking at things:

>DT creates a permutation of N items to N items.
>DES creates a permutation of 2^64 items to 2^64 items.

>It is easy to pick N and the PRNG so that DT produces any of the N!
>permutations for its first output.  It is [nearly?] impossible to pick
>a PRNG [and some number of rounds] to use with DES to produce any of the
>(2^64)! permutations as its first output.

This is true enough, but essentially I am saying that it's no good using that comparison to claim that DT does some fantastic thing that a cipher based on DES, but with different subkeys for each block, couldn't possibly do.  Because it's ultimately comparing apples to oranges.

From the viewpoint of the overall security of the system against cryptanalysis, what matters is that a substitution cipher takes blocks from a set of (2^64) to blocks from a set of (2^64), and n-bit Dynamic Transposition takes input blocks from a set of n!/((n/2)!(n/2)!) to a set of n!/((n/2)!(n/2)!).

Simply because "transposition" is usually thought of as a class of ciphers in itself, distinct from "substitution", doesn't mean that a cryptanalyst is going to concede it any special privileges on that account.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long)
Date: Wed, 24 Jan 2001 11:10:01 +0100
From: Mok-Kong Shen <mok-kong.shen@t-online.de>
Message-ID: <3A6EA9F9.8D61E73E@t-online.de>
References: <3a6e0193.1086054@news.io.com>
Newsgroups: sci.crypt
Lines: 65

Terry Ritter wrote:
>
> jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote:
>
[snip]
> >The "weakness" is:
> >
> >The set of permutations of n bits
> >
> >considered as a subset of the set of one-to-one and onto mappings from
> >the set of bit-balanced blocks of n bits to itself
> >
> >is a subgroup of those mappings, and therefore is not a generator of
> >the entire set of mappings.
>
> All of which is not a problem, because the actual permutation which
> encrypted the data is hidden in that clump.  There is no information
> to distinguish the correct one.
>
> You might as well say there is a problem because one could read the
> RNG state and thus know the keying of the cipher.  We assume that
> state is kept secure.
>
> There is no way to distinguish the correct permutation from the huge
> group which can generate the same transformation.  And it is only the
> correct permutation which leads back (eventually) to the shuffling
> sequence.
>
> A weakness which cannot be exploited is no weakness at all.

Having read the ongoing discussions quite a bit, I am afraid I am yet very confused.  As I wrote previously, since the individual operations done in performing the permutation do not reveal the exact values of the PRNG output used to do that, this 'indirectness' helps to a large extent to shield the PRNG sequence from being determined by the opponent.  This is certainly beneficial.  But the necessity of bit balancing, which seems to be considered a major factor leading to the strength of DT, remains fairly unclear to me.  Consider we have two bit strings 0011 and 0111, the one balanced, the other not.  I show someone the sequences 0101 and 1011 and tell him that these are the results of certain permutations of certain original sequences with the help of a PRNG; how can he obtain more information about the PRNG in one case than the other, if he doesn't know the original sequences?  The situation doesn't change, I suppose, even if he knows the original sequences.  To make my point clear, consider the extreme case 1111.  If I perform any permutation, then the result remains 1111, from which others certainly can't know which permutation operations I have done.  So the point of bit balancing seems to boil down in my conjecture to not letting the opponent know the frequency distribution of the original sequence, namely the proportion of 0-bits to 1-bits in it, and is inherently dissociated from the issue of predictability of the PRNG.  Am I right or am I on an entirely wrong track of thought?  Many thanks in advance.

M. K. Shen
-------------------------
http://home.t-online.de/home/mok-kong.shen
Subject: Re: Dynamic Transposition Revisited (long)
Date: Thu, 25 Jan 2001 07:13:31 GMT
From: ritter@io.com (Terry Ritter)
Message-ID: <3a6fd1b0.8646377@news.io.com>
References: <3A6EA9F9.8D61E73E@t-online.de>
Newsgroups: sci.crypt
Lines: 88

On Wed, 24 Jan 2001 11:10:01 +0100, in <3A6EA9F9.8D61E73E@t-online.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@t-online.de> wrote:

>Terry Ritter wrote:
>>
>> jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote:
>>
>[snip]
>> >The "weakness" is:
>> >
>> >The set of permutations of n bits
>> >
>> >considered as a subset of the set of one-to-one and onto mappings from
>> >the set of bit-balanced blocks of n bits to itself
>> >
>> >is a subgroup of those mappings, and therefore is not a generator of
>> >the entire set of mappings.
>>
>> All of which is not a problem, because the actual permutation which
>> encrypted the data is hidden in that clump.  There is no information
>> to distinguish the correct one.
>>
>> You might as well say there is a problem because one could read the
>> RNG state and thus know the keying of the cipher.  We assume that
>> state is kept secure.
>>
>> There is no way to distinguish the correct permutation from the huge
>> group which can generate the same transformation.  And it is only the
>> correct permutation which leads back (eventually) to the shuffling
>> sequence.
>>
>> A weakness which cannot be exploited is no weakness at all.
>
>Having read the ongoing discussions quite a bit, I am
>afraid I am yet very confused.  As I wrote previously,
>since the individual operations done in performing the
>permutation do not reveal the exact values of the PRNG
>output used to do that, this 'indirectness' helps to
>a large extent to shield the PRNG sequence from being
>determined by the opponent.  This is certainly beneficial.
>But the necessity of bit balancing, which seems to be
>considered a major factor leading to the strength
>of DT, remains fairly unclear to me.  Consider we have
>two bit strings 0011 and 0111, the one balanced, the
>other not.  I show someone the sequences 0101 and 1011
>and tell him that these are the results of certain
>permutations of certain original sequences with the
>help of a PRNG; how can he obtain more information
>about the PRNG in one case than the other, if he doesn't
>know the original sequences?  The situation doesn't
>change, I suppose, even if he knows the original
>sequences.  To make my point clear, consider the extreme
>case 1111.  If I perform any permutation, then the result
>remains 1111, from which others certainly can't know
>which permutation operations I have done.  So the point
>of bit balancing seems to boil down in my conjecture to
>not letting the opponent know the frequency distribution
>of the original sequence, namely the proportion of 0-bits
>to 1-bits in it, and is inherently dissociated from the
>issue of predictability of the PRNG.  Am I right or am I
>on an entirely wrong track of thought?  Many thanks in
>advance.

But suppose the block is not 1111 but instead 1000.  In that case, one complete bit-move would be fully exposed.  So if we had only a single shuffle, that would expose one shuffling value.  (Of course, the shuffling value is only a portion of the value from the RNG, and the rest is unknown.  And if we double-shuffle, a known move doesn't expose anything.)

The goal was to close every possible door to weakness at every level.

In a sense, I guess the question is asking: "Is bit-balancing worth doing?"  As I recall, the bit-balancing part is not a major overhead. 
The major cost is the shuffling -- for each bit-element we have to step the RNG, do nonlinear processing, fold and mask for the shuffling routine, which itself has to handle range reduction, and finally do a bit-exchange. So there would seem to be little advantage in leaving the bit-balancing out. The only reason for using this cipher is if we can believe that it is effectively unbreakable. We don't want to scrimp on strength. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
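The per-element pipeline Ritter lists -- step the RNG, fold, mask, range-reduce, exchange -- can be sketched as follows. This is an illustrative reading, not Ritter's actual routine: the particular fold and mask choices are assumptions, the "nonlinear processing" step is omitted, and Python's library RNG merely stands in for the large-state keyed generator.

    import random

    def next_shuffle_value(rng, bound):
        # produce one shuffling value in [0, bound)
        mask = 1
        while mask < bound:        # smallest power-of-2 mask covering bound
            mask <<= 1
        mask -= 1
        while True:
            raw = rng.getrandbits(32)              # step the RNG
            folded = (raw >> 16) ^ (raw & 0xFFFF)  # fold the halves together
            value = folded & mask                  # mask: power-of-2 modulo
            if value < bound:                      # range reduction: overflow
                return value                       # values are simply deleted

    def shuffle_block(bits, rng):
        # one exchange per element, from the top of the block down
        for i in range(len(bits) - 1, 0, -1):
            j = next_shuffle_value(rng, i + 1)
            bits[i], bits[j] = bits[j], bits[i]    # the bit-exchange
        return bits

    rng = random.Random(2001)       # stand-in for the keyed generator
    block = list('00110101')        # a bit-balanced block
    shuffle_block(block, rng)
    shuffle_block(block, rng)       # shuffle twice, per the design
    print(''.join(block))

Note that deleting out-of-range values in the rejection step is itself one of the information-reducing operations Ritter appeals to later in the thread: many different raw RNG outputs collapse onto the same shuffling value.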
Subject: Re: Dynamic Transposition Revisited (long) Date: Thu, 25 Jan 2001 10:57:56 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A6FF8A4.727C743F@t-online.de> References: <3a6fd1b0.8646377@news.io.com> Newsgroups: sci.crypt Lines: 42

Terry Ritter wrote:
>
> In a sense, I guess the question is asking: "Is bit-balancing worth
> doing?" As I recall, the bit-balancing part is not a major overhead.
> The major cost is the shuffling -- for each bit-element we have to
> step the RNG, do nonlinear processing, fold and mask for the shuffling
> routine, which itself has to handle range reduction, and finally do a
> bit-exchange. So there would seem to be little advantage in leaving
> the bit-balancing out.
>
> The only reason for using this cipher is if we can believe that it is
> effectively unbreakable. We don't want to scrimp on strength.

As I also mentioned previously, the cost of permutation at the bit level (as against using larger units) could be a (practical) issue, but that doesn't in my view preclude its usefulness, for there are (many) situations where one is not heavily restricted by processing cost, I believe. If one permutes bits, then balancing shields the information about the proportion of 0/1 bits and, since the expansion is small, as you have argued, should be advantageous. I would like to repeat my point, though, about the 'connection' between balancing and the predictability of the PRNG. Note, for example, that one need not even apply an integral number of Durstenfeld passes. One could do, e.g., one and one half permutations, i.e. stop the second permutation in its middle. (One could, for example, use a parameter to determine that, so that the length of the PRNG sequence used is also unknown to the opponent.) So deducing the sequence of PRNG outputs from the result of the permutation can be fairly difficult, quite independent of the bit pattern being permuted.

Finally, I would like to say that part of the discussions on your DT is not about its 'practical' security (i.e. whether it is effectively unbreakable) but about whether it could offer 'perfect' security (this is one of the points that John Savard raised and doubted, if I understand correctly).

M. K. Shen
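Shen's "one and one half permutations" idea can also be sketched. The pass-count parameter below is hypothetical, and this is an illustration of the suggestion, not an implementation from the thread; decryption would regenerate the same exchanges and undo them in reverse order, so both ends must agree on the (keyed) pass count.

    import random

    def fractional_shuffle(bits, rng, passes=1.5):
        # run a non-integral number of Durstenfeld passes, so that the
        # amount of PRNG output consumed is itself unknown to an opponent
        n = len(bits)
        steps = int(passes * (n - 1))      # total exchanges to perform
        done = 0
        while done < steps:
            for i in range(n - 1, 0, -1):  # one pass, high index to low
                if done == steps:
                    break                  # stop mid-pass, e.g. halfway
                j = rng.randrange(i + 1)   # PRNG picks the exchange partner
                bits[i], bits[j] = bits[j], bits[i]
                done += 1
        return bits

    block = list('00110101')
    print(''.join(fractional_shuffle(block, random.Random(7), passes=1.5)))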
Subject: Re: Dynamic Transposition Revisited (long) Date: Thu, 25 Jan 2001 22:27:38 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a70a835.16091262@news.io.com> References: <3A6FF8A4.727C743F@t-online.de> Newsgroups: sci.crypt Lines: 76

On Thu, 25 Jan 2001 10:57:56 +0100, in <3A6FF8A4.727C743F@t-online.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@t-online.de> wrote:

>[...]
>Finally, I would like to say that part of the discussions
>on your DT is not about its 'practical' security (i.e.
>whether it is effectively unbreakable) but about whether
>it could offer 'perfect' security (this is one of the
>points that John Savard raised and doubted, if I understand
>correctly).

"Perfect secrecy" is the technical condition where every possible plaintext is equally likely to represent the ciphertext. This is the cipher as a Latin square. A "perfect" cipher is not vulnerable to brute force because traversing each possible plaintext will just produce every possible message.

The bit-balancing, bit-permutation, and one-time use of each permutation in Dynamic Transposition allows one to argue that each block does in fact have the "perfect security" property in that sense. Moreover, since the permutation is not exposed -- even by known-plaintext -- the best attack would appear to be some form of brute force.

In addition I would like to think that the tremendous reduction of information from stage to stage hides the imperfections which must exist at earlier stages.

In a previous message, somebody suggested that I might be using two different definitions of "proof." I think that is the case, and I suspect it is common.

In the first case, I want "proven secure" to be some sort of absolute mathematical statement about a cipher. I accept that the implementation would have to be validated to the theory in some way, which probably would require experimentation to complete the "proof" in practice. Still, such a "proof" would be an absolute statement about the security of the cipher per se. This is the type of proof we seek, but may never find.

The other case is cryptographic proof as it functions today: Assumptions are made, and from those, conclusions drawn, all of which may bear on cryptographic strength. But usually there is no universal statement of strength which applies against any possible attack whatsoever. These are "security proofs" in the sense that they are mathematical proofs pertaining to security, but they are not a guarantee of overall strength.

I see no reason why a "mechanistic" cipher like Dynamic Transposition cannot have numerous security proofs; which is to say, mathematical statements about various aspects of the cipher. These should fit right into the body of cryptography as it has grown over the past couple of decades.

We might hope to express the difference between an ideal goal and achieved practical reality, and then show that the difference tends toward zero as the size of some component is increased.

We might be able to follow the reduction of information from stage to stage, and describe that as increasing uncertainty with respect to the original value. In this way, we might be able to show that any information exposed by the cipher is simply insufficient to attack an earlier stage.

Because the main strength of Dynamic Transposition lies in a single "arbitrary" permutation, it may be one of the cleanest possible mechanistic ciphers about which one might make mathematical statements.
--- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
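The per-block "perfect security" argument above is easy to verify exhaustively at toy scale. The following small Python check (illustrative only) confirms that every bit-balanced 4-bit block can be permuted into every other, so any ciphertext block is compatible with every possible plaintext block:

    from itertools import combinations, permutations

    n = 4
    # all bit-balanced blocks of n bits: C(4, 2) = 6 of them
    balanced = [''.join('1' if i in ones else '0' for i in range(n))
                for ones in combinations(range(n), n // 2)]

    def reachable(src, dst):
        # is there some bit-permutation taking src to dst?
        return any(''.join(src[p] for p in perm) == dst
                   for perm in permutations(range(n)))

    # every balanced block can be permuted into every other balanced
    # block -- the Latin-square property argued for in the post
    assert all(reachable(p, c) for p in balanced for c in balanced)
    print(len(balanced), "balanced blocks, each reachable from all others")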
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 00:46:56 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A70BAF0.D4066BCB@t-online.de> References: <3a70a835.16091262@news.io.com> Newsgroups: sci.crypt Lines: 83 Terry Ritter wrote: > > Mok-Kong Shen<mok-kong.shen@t-online.de> wrote: > > >[...] > >Finally, I would like to say that part of the discussions > >on your DT is not about its 'practical' security (i.e. > >whether it is effectively unbreakable) but about whether > >it could offer 'perfect' security (this is one of the > >points that John Savard raised and doubted, if I understand > >correctly). > > "Perfect secrecy" is the technical condition where every possible > plaintext is equally likely to represent the ciphertext. This is the > cipher as a Latin square. A "perfect" cipher is not vulnerable to > brute force because traversing each possible plaintext will just > produce every possible message. > > The bit-balancing, bit-permutation, and one-time use of each > permutation in Dynamic Transposition allows one to argue that each > block does in fact have the "perfect security" property in that sense. > Moreover, since the permutation is not exposed -- even by > known-plaintext -- the best attack would appear to be some form of > brute force. > > In addition I would like to think that the tremendous reduction of > information from stage to stage hides the imperfections which must > exist at earlier stages. > > In a previous message, somebody suggested that I might be using two > different definitions of "proof." I think that is the case, and I > suspect it is common. > > In the first case, I want "proven secure" to be some sort of absolute > mathematical statement about a cipher. I accept that the > implementation would have to be validated to the theory in some way, > which probably would require experimentation to complete the "proof" > in practice. Still, such a "proof" would be an absolute statement > about the security of the cipher per se. This is the type of proof we > seek, but may never find. > > The other case is cryptographic proof as it functions today: > Assumptions are made, and from those, conclusions drawn, all of which > may bear on cryptographic strength. But usually there is no universal > statement of strength which applies against any possible attack > whatsoever. These are "security proofs" in the sense that they are > mathematical proofs pertaining to security, but they are not a > guarantee of overall strength. > > I see no reason why a "mechanistic" cipher like Dynamic Transposition > cannot have numerous security proofs; which is to say, mathematical > statements about various aspects of the cipher. These should fit > right into the body of cryptography as it has grown over the past > couple of decades. > > We might hope to express the difference between an ideal goal, and > achieved practical reality, and then show that the difference tends > toward zero as the size of some component is increased. > > We might be to follow the reduction of information from stage to > stage, and describe that as increasing uncertainty with respect to the > original value. In this way, we might be able to show that any > information exposed by the cipher is simply insufficient to attack an > earlier stage. > > Because the main strength of Dynamic Transposition lies in a single > "arbitrary" permutation, it may be one of the cleanest possible > mechanistic ciphers about which one might make mathematical > statements. 
My knowledge is too meager to enable me to offer more points than I have already done. However, it is my impression that your arguments to date somehow (I have no definite idea of that) need strengthening, if DT is to be accepted as on a par with the theoretical OTP (which seems to be your goal), in particular by the academics, assuming DT does in fact have that property. (My apologies if my frank words are inappropriate in some way.)

M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 13:56:11 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A7173EB.C6EA816E@t-online.de> References: <3A70BAF0.D4066BCB@t-online.de> Newsgroups: sci.crypt Lines: 48

Mok-Kong Shen wrote:
>
> Terry Ritter wrote:
> >
[snip]
> My knowledge is too meager to enable me to offer more
> points than I have already done. However, it is my
> impression that your arguments to date somehow (I have
> no definite idea of that) need strengthening, if DT is
> to be accepted as on a par with the theoretical OTP
> (which seems to be your goal), in particular by the
> academics, assuming DT does in fact have that property.
> (My apologies if my frank words are inappropriate in some
> way.)

After some re-thinking, I would like to elaborate a little on a previous point of mine concerning the question of the perfectness of DT.

Suppose we have a block size of n and we agree not to use the non-balanced groups of bits but only the balanced ones to transmit information (in other words, we have an 'alphabet' of a certain size m that is less than n). This serves to separate out the issue of bit balancing in order to simplify the argumentation below.

Now what we have is the following: Given an information block, we do permutations of its bits to produce an enciphered block with the help of a PRNG. A PRNG never provides a perfect bit sequence in the sense used in the context of the theoretical OTP. How can it be that this imperfectness does not manifest itself in some form (possibly in certain very much reduced, practically entirely negligible, intensity) AT ALL in our encryption result? Let's order all the distinct balanced bit groups into a sequence S and feed repetitions of S into our algorithm. This input is evidently perfectly predictable. Can the output be perfectly unpredictable? It certainly cannot. For the PRNG has a finite period and hence the ciphertext must cycle ultimately. This shows that, while DT could be practically very secure (an issue that certainly merits careful study before its being used in practice), it cannot offer perfect security that pertains to the theoretical OTP.

M. K. Shen
---------------------------
http://home.t-online.de/home/mok-kong.shen
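Shen's finite-period argument can be demonstrated directly with a deliberately tiny generator. The sketch below is illustrative only: a real design would use a generator whose period dwarfs any possible traffic, which is the substance of Ritter's reply further on.

    def tiny_lcg(state):
        # a deliberately tiny 16-bit generator so the cycle is visible;
        # a = 25173, c = 13849 give the full period of 2**16
        while True:
            state = (state * 25173 + 13849) & 0xFFFF
            yield state

    def encrypt_block(block, rng):
        # one shuffling pass over the block, driven by the generator
        bits = list(block)
        for i in range(len(bits) - 1, 0, -1):
            j = next(rng) % (i + 1)
            bits[i], bits[j] = bits[j], bits[i]
        return ''.join(bits)

    # feed repetitions of one balanced block, as the post suggests
    rng = tiny_lcg(1)
    stream = [encrypt_block('00001111', rng) for _ in range(70000)]

    # each block consumes 7 generator steps; since gcd(7, 2**16) = 1 the
    # ciphertext stream must repeat with period 2**16 blocks
    print(stream[:5] == stream[65536:65541])   # True: the stream has cycled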
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 14:06:48 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A717668.2E689391@t-online.de> References: <3A7173EB.C6EA816E@t-online.de> Newsgroups: sci.crypt Lines: 10

Erratum to my previous follow-up:

''an 'alphabet' of a certain size m that is less than n''

should read

''an 'alphabet' of a certain size m that is less than 2^n''

M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 21:52:58 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a71f0ba.6153413@news.io.com> References: <3A7173EB.C6EA816E@t-online.de> Newsgroups: sci.crypt Lines: 157 On Fri, 26 Jan 2001 13:56:11 +0100, in <3A7173EB.C6EA816E@t-online.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@t-online.de> wrote: >Mok-Kong Shen wrote: >> >> Terry Ritter wrote: >> > >[snip] >> My knowledge is too meager to enable me to offer more >> points than I have already done. However, it is my >> impression that your arguments todate somehow (I have >> no definite idea of that) need strenthening, if DT is >> to be accepted as on a par with the theoretical OTP >> (which seems to be your goal), in particular by the >> academics, assuming DT has in fact that property. >> (Apology, if my open words are inappropriate in some >> way.) > >After some re-thinking, I do like to elaborate a little >a previous point of mine concerning the question of >perfectness of DT. > >Suppose we have block size of n and we agree not to use >the non-balanced groups of bits but only the balanced >ones to transmit informations (in other words, we have >an 'alphabet' of a certain size m that is less than n). This >serves to separate out the issue of bit balancing in order to >simpify the argumentation below. > >Now what we have is the following: Given an information block, >we do permutations of its bits to produce an enciphered block >with the help of a PRNG. A PRNG never provides a perfect bit >sequence in the sense used in the context of the theoretical >OTP. First of all, "perfect secrecy" is a technical term from Shannon for a type of balanced ciphering. I have also used "perfect" in my articles to describe a random number generator (RNG) which is a permutation generator (with a single cycle). But, in general, I am unaware of any technical meaning for "perfect bit sequence." There can be no perfect sequence. There is only a universe of sequences, which we can sample, and with the sampled properties we can establish statistically. It is hardly a surprise that something we build is not perfect in an absolute sense. At issue is whether the difference between perfect and what we have is noticeable or exploitable. The statement "A PRNG never provides a perfect bit sequence" is technically false: Suppose as an RNG we had a simple shift-register (SR), that simply repeated its' contents cyclically: Surely we can see that any possible sequence *shorter* than the SR is easily produced. Of course, for longer sequences, only some are possible, and those sequences are internally correlated, which is not good. Since we normally use short RNG's, that is the behavior we are taught to expect. The point of using a large-state RNG in Dynamic Transposition is to support a full permutation computation using independent state. The problem occurs when we talk about RNG output beyond the internal state size. Here the problem would arise in subsequent permutations, where perhaps not all possible sequences of permutations are available. That issue would be clearer if we just used the raw output from the RNG, but we don't: we deliberately process values so that less information is available. We fold the output value and mask it (a power-of-2 modulo), and delete values from the sequence (thus creating completely different shuffling sequences). These operations act to hide correlation and improve statistical qualities. 
Suppose we had a numbered 64-card deck, which had only the values 0 through 62 and 0 again. We assume we can only sample a small part of the deck at a time, and can have only so many samples. Eventually we may be able to establish that 0 has twice the expected probability, and somehow exploit that situation.

Now suppose we can only see the lower 2 bits of each card value. Here no value is missing, and 0 is only slightly more probable than 1 or 2. There is still a statistical difference, and we would still have difficulty finding it. But we could not say that any particular value would not occur, or that a related Shuffle move would not occur.

I think the worst we could expect is that some permutations might be less probable than others. But I don't think we could catch the effect in practice, because there are too many permutations and our sample space is too small. Moreover, the actual defect is likely to be more like Y-permutation is less probable if X-permutation just occurred, which makes the statistical problem even worse.

>How can it be that this imperfectness does not manifest
>itself in some form (possibly in certain very much reduced,
>practically entirely negligible, intensity) AT ALL in our
>encryption result?

There is no perfect sequence. We describe perfection statistically, across our sample of results. In a statistical sense, it may well be possible to approach statistical perfection -- over a limited use -- to the extent that statistical differences are not apparent.

This is done first by having a large-state RNG, and then by folding and masking the RNG value, so that many different RNG values produce exactly the same shuffling value. This is information reduction or loss, and in this way deviations are hidden.

>Let's order all the distinct balanced bit
>groups into a sequence S and feed repetitions of S into our
>algorithm.

I don't understand "feeding repetitions of S." An RNG algorithm steps or produces a sequence in some way, but does not take input.

>This input is evidently perfectly predictable. Can
>the output be perfectly unpredictable? It certainly cannot.

An RNG is "perfectly unpredictable" as it starts, and continues so until the output exceeds the internal state. Here, that should support a full double-shuffle. An RNG just can't be perfectly unpredictable indefinitely.

>For the PRNG has a finite period and hence the ciphertext
>must cycle ultimately.

If the RNG cannot cycle while our Sun still shines, can we really say that it does, in practice, cycle? Knowing that, can we use even the most gross property of cycling? I think not.

>This shows that, while DT could be
>practically very secure (an issue that certainly merits
>careful study before its being used in practice), it cannot
>offer perfect security that pertains to the theoretical OTP.

Again, "perfect security" is a technical term. It refers to a cipher where any possible plaintext may be ciphered into exactly the same ciphertext with equal probability. That arguably does occur in Dynamic Transposition on a block basis.

It is when we consider multiple blocks that the arguments become weaker. But since the ciphering does not expose the permutation, this imperfection may not be detectable. If it is not detectable, can we say, in practice, that it even exists?

That is not a semantic argument, or a philosophical question -- it is the real situation. For example, perhaps we could show that a certain amount of information is necessary to distinguish between our real RNG and the theoretical ideal.
Then if we could show that the information is not available, we could have provable "perfection," much like other cryptographic proofs. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
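Ritter's 64-card-deck figures can be checked with a short tally (again illustrative): the duplicated 0 is twice as probable as expected among the raw values, but after masking to the low 2 bits the worst deviation shrinks to 17/64 against an ideal 16/64, with no value missing at all.

    from collections import Counter

    deck = list(range(63)) + [0]     # values 0..62, with 0 appearing twice

    full = Counter(deck)
    low2 = Counter(v & 0b11 for v in deck)   # observer sees only 2 low bits

    print(full[0])                       # 2 -> twice the expected count
    print(dict(sorted(low2.items())))    # {0: 17, 1: 16, 2: 16, 3: 15}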
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 23:26:55 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A71F9AF.6415A4E2@t-online.de> References: <3a71f0ba.6153413@news.io.com> Newsgroups: sci.crypt Lines: 150 Terry Ritter wrote: > > Mok-Kong Shen<mok-kong.shen@t-online.de> wrote: > >After some re-thinking, I do like to elaborate a little > >a previous point of mine concerning the question of > >perfectness of DT. > > > >Suppose we have block size of n and we agree not to use > >the non-balanced groups of bits but only the balanced > >ones to transmit informations (in other words, we have > >an 'alphabet' of a certain size m that is less than n). This > >serves to separate out the issue of bit balancing in order to > >simpify the argumentation below. > > > >Now what we have is the following: Given an information block, > >we do permutations of its bits to produce an enciphered block > >with the help of a PRNG. A PRNG never provides a perfect bit > >sequence in the sense used in the context of the theoretical > >OTP. > > First of all, "perfect secrecy" is a technical term from Shannon for a > type of balanced ciphering. I have also used "perfect" in my articles > to describe a random number generator (RNG) which is a permutation > generator (with a single cycle). But, in general, I am unaware of any > technical meaning for "perfect bit sequence." There can be no perfect > sequence. There is only a universe of sequences, which we can sample, > and with the sampled properties we can establish statistically. > > It is hardly a surprise that something we build is not perfect in an > absolute sense. At issue is whether the difference between perfect > and what we have is noticeable or exploitable. > > The statement "A PRNG never provides a perfect bit sequence" is > technically false: > > Suppose as an RNG we had a simple shift-register (SR), that simply > repeated its' contents cyclically: Surely we can see that any > possible sequence *shorter* than the SR is easily produced. Of > course, for longer sequences, only some are possible, and those > sequences are internally correlated, which is not good. Since we > normally use short RNG's, that is the behavior we are taught to > expect. But the point is whether your DT is on a par with the theoretical OTP or perhaps better than it. So it is a 'theoretical' question, not a technical question. Thus a perfect bit sequence, as I said, is what is understood in the context of OTP and has, in particular, no finite period. > > The point of using a large-state RNG in Dynamic Transposition is to > support a full permutation computation using independent state. > > The problem occurs when we talk about RNG output beyond the internal > state size. Here the problem would arise in subsequent permutations, > where perhaps not all possible sequences of permutations are > available. That issue would be clearer if we just used the raw output > from the RNG, but we don't: we deliberately process values so that > less information is available. We fold the output value and mask it > (a power-of-2 modulo), and delete values from the sequence (thus > creating completely different shuffling sequences). These operations > act to hide correlation and improve statistical qualities. > > Suppose we had a numbered 64-card deck, which had only the values 0 > through 62 and 0 again. We assume we can only sample a small part of > the deck at a time, and can have only so many samples. 
Eventually we > may be able to establish that there 0 has twice the expected > probability, and somehow exploit that situation. > > Now suppose we can only see the lower 2 bits of each card value. Here > no value is missing, and 0 is only slightly more probable than 1 or 2. > There is still a statistical difference, and we would still have > difficulty finding it. But we could not say that any particular value > would not occur, or that a related Shuffle move would not occur. > > I think the worst we could expect is that some permutations might be > less probable than others. But I don't think we could catch the > effect in practice, because there are too many permutations and our > sample space is too small. Moreover, the actual defect is likely to > be more like Y-permutation is less probable if X-permutation just > occurred, which makes the statistical problem even worse. > > >How can it be that this imperfectness does not manifest > >itself in some form (possibly in certain very much reduced, > >practically entirely negligible, intensity) AT ALL in our > >encryption result? > > There is no perfect sequence. We describe perfection statistically, > across our sample of results. In a statistical sense, it may well be > possible to approach statistical perfection -- over a limited use -- > to the extent that statistical differences are not apparent. > > This done first by having a large state RNG, and then by folding and > masking the RNG value, so that many different RNG values produce > exactly the same shuffling value. This is information reduction or > loss, and in this way deviations are hidden. > > >Let's order all the distinct balanced bit > >groups into a sequence S and feed repetitions of S into our > >algorithm. > > I don't understand "feeding repetitions of S." An RNG algorithm steps > or produces a sequence in some way, but does not take input. > > >This input is evidently perfectly predictable. Can > >the output be perfectly unpredictable? It certainly cannot. > > An RNG is "perfectly unpredictable" as it starts and continues so > until the output exceeds the internal state. Here, that should > support a full double-shuffle. An RNG just can't be perfectly > unpredictable indefinitely. > > >For the PRNG has a finite period and hence the ciphertext > >must cycle ultimately. > > If the RNG cannot cycle while our Sun still shines, can we really say > that it does, in practice, cycle? Knowing that, can we use even the > most gross property of cycling? I think not. > > >This shows that, while DT could be > >practically very secure (an issue that certainly merits > >careful study before its being used in practice), it cannot > >offer perfect security that pertains to the theoretical OTP. > > Again, "perfect security" is a technical term. It refers to a cipher > where any possible plaintext may be ciphered into exactly the same > ciphertext with equal probability. That arguably does occur in > Dynamic Transposition on a block basis. > > It is when we consider multiple blocks that the arguments become > weaker. But since the ciphering does not expose the permutation, this > imperfection may not be detectable. If it is not detectable, can we > say, in practice, that it even exists? > > That is not a semantic argument, or a philosophical question -- it is > the real situation. For example, perhaps we could show that a certain > amount of information is necessary to distinguish between our real RNG > and the theoretical ideal. 
> Then if we could show that the information
> is not available, we could have provable "perfection," much like other
> cryptographic proofs.

Then one has to await your 'showing'. Before that, one certainly can't legitimately support your previous claim of the superiority of DT over OTP, I believe.

M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 23:29:03 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a72083b.12171705@news.io.com> References: <3A71F9AF.6415A4E2@t-online.de> Newsgroups: sci.crypt Lines: 28

On Fri, 26 Jan 2001 23:26:55 +0100, in <3A71F9AF.6415A4E2@t-online.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@t-online.de> wrote:

>[...]
>Then one has to await your 'showing'. Before that, one
>certainly can't legitimately support your previous claim
>of the superiority of DT over OTP, I believe.

Then you believe incorrectly.

We already know that OTP is weak if the sequence it uses is predictable. We also know that there is no test which can guarantee that a sequence we use is not predictable. The OTP thus has an inherent potential weakness: the possibility of using a sequence with predictable structure.

In contrast, Dynamic Transposition hides any structure in its sequence behind permutations which do not reveal the sequence. This is an obvious superiority.

Now we have a contest. OTP is not the obvious winner, I believe.

--- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 10:41:46 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A7297DA.7ED0DA09@t-online.de> References: <3a72083b.12171705@news.io.com> Newsgroups: sci.crypt Lines: 51

Terry Ritter wrote:
>
> Mok-Kong Shen<mok-kong.shen@t-online.de> wrote:
>
> >[...]
> >Then one has to await your 'showing'. Before that, one
> >certainly can't legitimately support your previous claim
> >of the superiority of DT over OTP, I believe.
>
> Then you believe incorrectly.
>
> We already know that OTP is weak if the sequence it uses is
> predictable. We also know that there is no test which can guarantee
> that a sequence we use is not predictable. The OTP thus has an
> inherent potential weakness: the possibility of using a sequence with
> predictable structure.
>
> In contrast, Dynamic Transposition hides any structure in its
> sequence behind permutations which do not reveal the sequence. This
> is an obvious superiority.
>
> Now we have a contest. OTP is not the obvious winner, I believe.

I suppose you have a different and problematical concept of the (THEORETICAL) OTP. The bit sequence of OTP is by definition/assumption unpredictable. If a 'claimed' OTP uses a predictable bit sequence and consequently is weak as you said, then it is by definition NOT an OTP, though snake-oil peddlers used to call that OTP. Some people in crypto groups even object to use the term pseudo-OTP to designate that kind of stuff. (Once I got flames for having employed the term pseudo-OTP.) We should take care not to be contaminated in our terminology by the slang of snake-oil peddlers. (Of course they could complain, because anything used 'one-time' is OT, but that's evidently outside our present concern.)

BTW, my argumentation in the previous follow-up could be simplified a bit. One does not have to use the big sequence S. It suffices to pick one arbitrary balanced block and feed repetitions of it to the algorithm. Basically, the argument boils down to the trivial fact that a PRNG has a finite period, while the theoretical OTP has, by definition, an infinite period. Hence there is no chance that the former can compete with the latter.

M. K. Shen
-----------------------------
http://home.t-online.de/home/mok-kong.shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 13:18:29 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a72c7d4.404608@news.powersurfr.com> References: <3A7297DA.7ED0DA09@t-online.de> Newsgroups: sci.crypt Lines: 42

On Sat, 27 Jan 2001 10:41:46 +0100, Mok-Kong Shen <mok-kong.shen@t-online.de> wrote, in part:

>I suppose you have a different and problematical concept
>of the (THEORETICAL) OTP. The bit sequence of OTP is by
>definition/assumption unpredictable. If a 'claimed' OTP
>uses a predictable bit sequence and consequently is weak
>as you said, then it is by definition NOT an OTP, though
>snake-oil peddlers used to call that OTP.

This is true.

But Terry Ritter isn't talking about fake OTPs based on algorithmic PRNGs, as far as I understand it.

He is saying that even what people acknowledge as "real" OTPs, where the key has been generated by physical randomness, aren't provably the 'theoretical OTP', because you can't prove a particular physical random noise generator to be perfect.

That is not, in itself, untrue. Physical random number generators can have bias, for example.

However, it is his insistence that this is a major concern, and more specifically the implication that this makes the proof that the theoretical OTP is unbreakable _irrelevant_ to physically realizable OTPs, that I fear strikes many as simply bizarre. Because, whether or not that is his intention, it makes it sound as if he is worried about the NSA having a cryptanalytic attack which enables them to predict the roll of a die or the flip of a coin.

In VENONA, not only did the NSA exploit pads used twice, but they even made use of the bias of numbers generated by hand by typists 'at random', so they did come closer to doing that than anyone might have expected.

While precautions are needed in using the raw output of a simple physical RNG, there are still limits to what constitutes reasonable concern.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 18:08:27 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A73008B.E4938CEC@t-online.de> References: <3a72c7d4.404608@news.powersurfr.com> Newsgroups: sci.crypt Lines: 57 John Savard wrote: > > Mok-Kong Shen<mok-kong.shen@t-online.de> wrote, in part: > > >I suppose you have a different and problematical concept > >of the (THEORETICAL) OTP. The bit sequence of OTP is by > >definition/assumption unpredictable. If a 'claimed' OTP > >uses a predictable bit sequence and consequently is weak > >as you said, then it is by definition NOT an OTP, though > >snake-oil peddlers used to call that OTP. > > This is true. > > But Terry Ritter isn't talking about fake OTPs based on algorithmic > PRNGs, as far as I understand it. > > He is saying that even what people acknowledge as "real" OTPs, where > the key has been generated by physical randomness, aren't provably the > 'theoretical OTP', because you can't prove a particular physical > random noise generator to be perfect. > > That is not, in itself, untrue. Physical random number generators can > have bias, for example. > > However, it his his insistence that this is a major concern, and more > specifically the implication that this makes the proof that the > theoretical OTP is unbreakable _irrelevant_ to physically realizable > OTPs, that I fear strikes many as simply bizarre. Because, whether or > not that is his intention, it makes it sound as if he is worried about > the NSA having a cryptanalytic attack which enables them to predict > the roll of a die or the flip of a coin. > > In VENONA, not only did the NSA exploit pads used twice, but they even > made use of the bias of numbers generated by hand by typists 'at > random', so they did come closer to doing that than anyone might have > expected. > > While precautions are needed in using the raw output of a simple > physical RNG, there are still limits to what constitutes reasonable > concern. In this connection I suppose the following analogy could also be helpful. There is a well-known convergent series that has the number e as its limit. No matter how many (finite numbers) of terms one uses, one never gets EXACTLY to the number e. Yet one can, by taking more terms in the sum, get ever increasingly better approximations of it. I believe nobody would consider that the number e, which is thus never actually attained in this process, is useless. Note in particular that there is no 'boundary' separating the approximations from the exact e, i.e. one can always get nearer and nearer to it, there being no single 'best' approximation that cannot be further improved. M. K. Shen
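For concreteness, the series Shen presumably intends (the post does not name one) is the factorial series:

    e = \sum_{n=0}^{\infty} 1/n! = 1 + 1 + 1/2 + 1/6 + 1/24 + ...

Every partial sum s_k = \sum_{n=0}^{k} 1/n! falls strictly short of e, yet each added term strictly improves the approximation, so there is no single best finite approximation -- exactly the structure the analogy requires of realizable pads versus the theoretical OTP.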
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 22:42:46 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a734ed6.9327151@news.io.com> References: <3a72c7d4.404608@news.powersurfr.com> Newsgroups: sci.crypt Lines: 90 On Sat, 27 Jan 2001 13:18:29 GMT, in <3a72c7d4.404608@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Sat, 27 Jan 2001 10:41:46 +0100, Mok-Kong Shen ><mok-kong.shen@t-online.de> wrote, in part: > >>I suppose you have a different and problematical concept >>of the (THEORETICAL) OTP. The bit sequence of OTP is by >>definition/assumption unpredictable. If a 'claimed' OTP >>uses a predictable bit sequence and consequently is weak >>as you said, then it is by definition NOT an OTP, though >>snake-oil peddlers used to call that OTP. > >This is true. > >But Terry Ritter isn't talking about fake OTPs based on algorithmic >PRNGs, as far as I understand it. > >He is saying that even what people acknowledge as "real" OTPs, where >the key has been generated by physical randomness, aren't provably the >'theoretical OTP', because you can't prove a particular physical >random noise generator to be perfect. Right. >That is not, in itself, untrue. Physical random number generators can >have bias, for example. Hmmm, "not, in itself, untrue." My, what an interesting way to say that I was, in fact, absolutely right. I suppose that would be a "condemnation by faint praise." More concisely, my position is simply "correct." >However, it his his insistence that this is a major concern, and more >specifically the implication that this makes the proof that the >theoretical OTP is unbreakable _irrelevant_ to physically realizable >OTPs, that I fear strikes many as simply bizarre. First of all, I am getting pretty tired of you deciding what strikes "others." Who put you in charge of tracking the body public or knowing what others think? Next, the issue is not "bizarre," the issue is "proof of strength." As far as I can see, there is no advantage for anyone in messing with the various problems of an OTP in practice unless the goal is to have a real cipher with a mathematical proof of strength. That is why people want to use the OTP. The fact that the proof they seek does not apply unless they can also prove unpredictability -- which is generally unprovable -- is not "irrelevant," but is instead exactly on point. >Because, whether or >not that is his intention, it makes it sound as if he is worried about >the NSA having a cryptanalytic attack which enables them to predict >the roll of a die or the flip of a coin. If that's what it sounds like to you, then you aren't listening. In fact, it seems to me that there is a lot of not listening on purpose. The OTP issue is not security in practice but "proof of strength." Perhaps you can even bring yourself to recall that I have said that various techniques could be used to build pads which could be very strong in practice, but there could be no *proof* of that. Does that ring a bell, or shall I quote past messages in elaborate depth? >In VENONA, not only did the NSA exploit pads used twice, but they even >made use of the bias of numbers generated by hand by typists 'at >random', so they did come closer to doing that than anyone might have >expected. > >While precautions are needed in using the raw output of a simple >physical RNG, there are still limits to what constitutes reasonable >concern. 
I think there are limits to what constitutes reasonable discussion, and you are close to the edge. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 29 Jan 2001 05:36:22 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a74fea3.314696@news.powersurfr.com> References: <3a734ed6.9327151@news.io.com> Newsgroups: sci.crypt Lines: 46

On Sat, 27 Jan 2001 22:42:46 GMT, ritter@io.com (Terry Ritter) wrote, in part:

>On Sat, 27 Jan 2001 13:18:29 GMT, in
><3a72c7d4.404608@news.powersurfr.com>, in sci.crypt
>jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote:

>>Because, whether or
>>not that is his intention, it makes it sound as if he is worried about
>>the NSA having a cryptanalytic attack which enables them to predict
>>the roll of a die or the flip of a coin.

>If that's what it sounds like to you, then you aren't listening. In
>fact, it seems to me that there is a lot of not listening on purpose.
>The OTP issue is not security in practice but "proof of strength."
>Perhaps you can even bring yourself to recall that I have said that
>various techniques could be used to build pads which could be very
>strong in practice, but there could be no *proof* of that. Does that
>ring a bell, or shall I quote past messages in elaborate depth?

I am glad to hear that the negative impression I have had is mistaken. But that impression is a natural one to come to, if one encounters statements about the lack of provability of 'real-world' OTPs that are sufficiently emphatic to give the impression that what is meant is that they aren't really any good.

When reminding people of basic facts that are usually ignored, despite being acknowledged, it is possible, and important, not to sound as though one is being terribly unorthodox: even Stephen Jay Gould has made this mistake once or twice.

While I don't feel that Dynamic Transposition, though it is of merit and interest, actually succeeds in addressing the provability issue - nor does the Galois Field notion that I mentioned again in a reply to a post by Benjamin Goldberg - and, in fact, I tend to feel that the provability issue is fundamentally incapable of being addressed by efforts in that direction - if the provability issue inspires you to develop interesting ciphers, fine.

My concern is that you appear to be claiming that with Dynamic Transposition you have in fact achieved some demonstrable success with respect to the provability issue, and these claims do not appear to me to be convincing.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 22:42:26 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a734ec6.9311114@news.io.com> References: <3A7297DA.7ED0DA09@t-online.de> Newsgroups: sci.crypt Lines: 64 On Sat, 27 Jan 2001 10:41:46 +0100, in <3A7297DA.7ED0DA09@t-online.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@t-online.de> wrote: >[...] >I suppose you have a different and problematical concept >of the (THEORETICAL) OTP. The bit sequence of OTP is by >definition/assumption unpredictable. If a 'claimed' OTP >uses a predictable bit sequence and consequently is weak >as you said, then it is by definition NOT an OTP, though >snake-oil peddlers used to call that OTP. OK, then, in practice, there can be no OTP at all, since, in general, it will be impossible to prove in practice that any bit sequence actually is unpredictable. Clearly we can't compare a cipher which is designed to work in practice to one which cannot. Yet that was exactly what you tried to do. >Some people in >crypto groups even object to use the term pseudo-OTP to >designate that kind of stuff. Usually the objection is to using an obvious pseudo-RNG and calling the result an OTP instead of a Stream Cipher. >(Once I got flames for having >employed the term pseudo-OTP.) We should take care not to >be contaminated in our terminology by the slangs of >snake-oil peddlers. (Of course they could complain, because >anything used 'one-time' is OT, but that's evidently outside >our present concern.) > >BTW, my argumention in the previous follow-up could be >simplified a bit. One does not have to use the big sequence >S. It suffices to pick one arbitrary balanced block and >feed repetitions of it to the algorithm. Basically, the >argument boils down to the trivial fact that a PRNG has a >finite period, while the theoretical OTP has, by definition, >an infinite period. Hence there is no chance that the former >can compete with the latter. Had you actually read my "Revisited" article, you would have found the statement: "This of course does not mean that Dynamic Transposition cannot be attacked: Brute-force attacks on the keys are still imaginable, which is a good reason to use large random message keys." I am discussing a cipher which functions in practice, not some theoretical thing whose only use is to confuse and confound. You appear to be discussing perfection which can never occur in practice. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 28 Jan 2001 12:49:19 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A74073F.A58BC254@t-online.de> References: <3a734ec6.9311114@news.io.com> Newsgroups: sci.crypt Lines: 79

Terry Ritter wrote:
>
> Mok-Kong Shen<mok-kong.shen@t-online.de> wrote:
>
> >[...]
> >I suppose you have a different and problematical concept
> >of the (THEORETICAL) OTP. The bit sequence of OTP is by
> >definition/assumption unpredictable. If a 'claimed' OTP
> >uses a predictable bit sequence and consequently is weak
> >as you said, then it is by definition NOT an OTP, though
> >snake-oil peddlers used to call that OTP.
>
> OK, then, in practice, there can be no OTP at all, since, in general,
> it will be impossible to prove in practice that any bit sequence
> actually is unpredictable.
>
> Clearly we can't compare a cipher which is designed to work in
> practice to one which cannot. Yet that was exactly what you tried to
> do.

The last sentence is FALSE. It was you who made a comparison of your DT with the OTP and claimed even superiority over it. John Savard first wrote some lines on that and came back to it in a recent follow-up. I suggest that you leave out everything in that direction in your writings about DT. No reasonable man would have any internal reaction when a girl says she is pretty. Things only become different when it concerns the most beautiful creature of the world.

>
> >Some people in
> >crypto groups even object to use the term pseudo-OTP
> >to designate that kind of stuff.
>
> Usually the objection is to using an obvious pseudo-RNG and calling
> the result an OTP instead of a Stream Cipher.
>
> >(Once I got flames for having
> >employed the term pseudo-OTP.) We should take care not to
> >be contaminated in our terminology by the slang of
> >snake-oil peddlers. (Of course they could complain, because
> >anything used 'one-time' is OT, but that's evidently outside
> >our present concern.)
> >
> >BTW, my argumentation in the previous follow-up could be
> >simplified a bit. One does not have to use the big sequence
> >S. It suffices to pick one arbitrary balanced block and
> >feed repetitions of it to the algorithm. Basically, the
> >argument boils down to the trivial fact that a PRNG has a
> >finite period, while the theoretical OTP has, by definition,
> >an infinite period. Hence there is no chance that the former
> >can compete with the latter.
>
> Had you actually read my "Revisited" article, you would have found the
> statement:
>
> "This of course does not
> mean that Dynamic Transposition cannot be attacked:
> Brute-force attacks on the keys are still imaginable, which
> is a good reason to use large random message keys."
>
> I am discussing a cipher which functions in practice, not some
> theoretical thing whose only use is to confuse and confound.
>
> You appear to be discussing perfection which can never occur in
> practice.

Ah, I admit that I haven't carefully read your revision (because of your previous comparison with OTP, which has unfortunately relaxed my attention somewhat). But you should have retracted your claim of superiority over OTP in the revision, I suppose, in order to avoid confusing readers.

M. K. Shen
-------------------------
http://home.t-online.de/home/mok-kong.shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 28 Jan 2001 19:16:39 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a747006.2072862@news.io.com> References: <3A74073F.A58BC254@t-online.de> Newsgroups: sci.crypt Lines: 170 On Sun, 28 Jan 2001 12:49:19 +0100, in <3A74073F.A58BC254@t-online.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@t-online.de> wrote: >Terry Ritter wrote: >> >> Mok-Kong Shen<mok-kong.shen@t-online.de> wrote: >> >> >[...] >> >I suppose you have a different and problematical concept >> >of the (THEORETICAL) OTP. The bit sequence of OTP is by >> >definition/assumption unpredictable. If a 'claimed' OTP >> >uses a predictable bit sequence and consequently is weak >> >as you said, then it is by definition NOT an OTP, though >> >snake-oil peddlers used to call that OTP. >> >> OK, then, in practice, there can be no OTP at all, since, in general, >> it will be impossible to prove in practice that any bit sequence >> actually is unpredictable. >> >> Clearly we can't compare a cipher which is designed to work in >> practice to one which cannot. Yet that was exactly what you tried to >> do. > >The last sentence is FALSE. Really? From: Mok-Kong Shen Newsgroups: sci.crypt Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 23:26:55 +0100 Message-ID: <3A71F9AF.6415A4E2@t-online.de> "But the point is whether your DT is on a par with the theoretical OTP or perhaps better than it. So it is a 'theoretical' question, not a technical question." >It was you who made a comparison >of your DT with the OTP and claimed even superiority over >it. From the "Revisited" article: "When every plaintext block is exactly bit-balanced, any possible plaintext block is some valid bit-permutation of any ciphertext block. So, even if an opponent could exhaustively un-permute a ciphertext block, the result would just be every possible plaintext block. No particular plaintext block could be distinguished as the source of the ciphertext. This is a form of balanced, nonlinear combining of the confusion sequence and data block: as such, it is related to XOR, Latin squares, Shannon "perfect secrecy," and the one-time-pad (OTP). "The inability to distinguish a particular plaintext, even when every possibility is tried, is basically the advantage claimed for the OTP. It is also an advantage which the OTP cannot justify in practice unless we can prove that the OTP keying sequence is unpredictable, which generally cannot be done. That makes the practical OTP exceedingly "brittle": if the opponents ever do gain the ability to predict the sequence, they may be able to attack many messages, both future and past. That would occur in the context of a system supposedly "proven" secure; as usual, the user would have no indication of security failure. "Dynamic Transposition does not need the assumption of sequence unpredictability, because the sequence is hidden behind a multitude of different sequences and permutations which all produce the same result. And if the sequence itself cannot be exposed, exploiting any predictability in the sequence will be difficult. (This of course does not mean that Dynamic Transposition cannot be attacked: Brute-force attacks on the keys are still imaginable, which is a good reason to use large random message keys.)" So exactly what about "an advantage which the OTP cannot justify in practice" do you not understand? >John Savard first wrote some lines on that and came >back to it in a recent follow-up. 
I suggest that you leave >out everything in that direction in your writings about DT. >No reasonable man would have any internal reaction, when a >girl says she is pretty. Things only become different, when >it concerns the most beautiful creature of the world. It is your delusion that OTP is that "most beautiful creature." In practice an OTP is nothing more than a stream cipher with a particularly inconvenient key. It is only the *theoretical* OTP -- an illusion which cannot protect real data -- which could be called "beautiful" -- but that can't be used. The reason people want to use -- in practice -- an OTP is to get a mathematical proof of strength. Unfortunately, the OTP proof *assumes* a random sequence. Now, surely there can be no debate about whether or not an "OTP" is secure if the keying sequence is predictable. Also there should be little if any debate about whether or not we can, by testing, *prove* a keying sequence is not predictable. (Note that even a complex and random-like sequence produced by stepping a counter into the cipher of your choice is predictable -- if someone can just reverse that cipher. And whether or not we can do that is irrelevant; for a proof of strength, we need to know that no opponent can do it, and that is something we cannot know.) So the OTP proof simply does not apply in practice, unless we have some way to *prove* sequence unpredictability to cryptographic levels of assurance. >> >Some people in >> >crypto groups even object to use the term pseudo-OTP to >> >designate that kind of stuff. >> >> Usually the objection is to using an obvious pseudo-RNG and calling >> the result an OTP instead of a Stream Cipher. >> >> >(Once I got flames for having >> >employed the term pseudo-OTP.) We should take care not to >> >be contaminated in our terminology by the slangs of >> >snake-oil peddlers. (Of course they could complain, because >> >anything used 'one-time' is OT, but that's evidently outside >> >our present concern.) >> > >> >BTW, my argumention in the previous follow-up could be >> >simplified a bit. One does not have to use the big sequence >> >S. It suffices to pick one arbitrary balanced block and >> >feed repetitions of it to the algorithm. Basically, the >> >argument boils down to the trivial fact that a PRNG has a >> >finite period, while the theoretical OTP has, by definition, >> >an infinite period. Hence there is no chance that the former >> >can compete with the latter. >> >> Had you actually read my "Revisited" article, you would have found the >> statement: >> >> "This of course does not >> mean that Dynamic Transposition cannot be attacked: >> Brute-force attacks on the keys are still imaginable, which >> is a good reason to use large random message keys." >> >> I am discussing a cipher which functions in practice, not some >> theoretical thing whose only use is to confuse and confound. >> >> You appear to be discussing perfection which can never occur in >> practice. > >Ah, I admit that I haven't carefully read your revision >(because of your previous comparion with OTP, which has >unfortunately relaxed my attention somewhat). But you >should have retracted your claim of superiority over OTP >in the revision, I suppose, in order to avoid confusion >of readers. 
How about this:

Dynamic Transposition (a practical cipher) is arguably superior to the OTP (when used in practice), because under known-plaintext attack an OTP is immediately vulnerable to a predictable keying sequence, while Dynamic Transposition hides predictability behind a vast number of different permutations, each of which could have created the given ciphertext block.

--- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 28 Jan 2001 23:50:48 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A74A248.F2F69394@t-online.de> References: <3a747006.2072862@news.io.com> Newsgroups: sci.crypt Lines: 185 Terry Ritter wrote: > > Mok-Kong Shen<mok-kong.shen@t-online.de> wrote: > > >Terry Ritter wrote: > >> > >> Mok-Kong Shen<mok-kong.shen@t-online.de> wrote: > >> > >> >[...] > >> >I suppose you have a different and problematical concept > >> >of the (THEORETICAL) OTP. The bit sequence of OTP is by > >> >definition/assumption unpredictable. If a 'claimed' OTP > >> >uses a predictable bit sequence and consequently is weak > >> >as you said, then it is by definition NOT an OTP, though > >> >snake-oil peddlers used to call that OTP. > >> > >> OK, then, in practice, there can be no OTP at all, since, in general, > >> it will be impossible to prove in practice that any bit sequence > >> actually is unpredictable. > >> > >> Clearly we can't compare a cipher which is designed to work in > >> practice to one which cannot. Yet that was exactly what you tried to > >> do. > > > >The last sentence is FALSE. > > Really? > > From: Mok-Kong Shen <mok-kong.shen@t-online.de> > Newsgroups: sci.crypt > Subject: Re: Dynamic Transposition Revisited (long) > Date: Fri, 26 Jan 2001 23:26:55 +0100 > Message-ID: <3A71F9AF.6415A4E2@t-online.de> > > "But the point is whether your DT is on a par with the > theoretical OTP or perhaps better than it. So it is a > 'theoretical' question, not a technical question." > > >It was you who made a comparison > >of your DT with the OTP and claimed even superiority over > >it. > > From the "Revisited" article: > > "When every plaintext block is exactly bit-balanced, any > possible plaintext block is some valid bit-permutation of > any ciphertext block. So, even if an opponent could > exhaustively un-permute a ciphertext block, the result > would just be every possible plaintext block. No particular > plaintext block could be distinguished as the source of the > ciphertext. This is a form of balanced, nonlinear combining > of the confusion sequence and data block: as such, it is > related to XOR, Latin squares, Shannon "perfect secrecy," > and the one-time-pad (OTP). > > "The inability to distinguish a particular plaintext, even > when every possibility is tried, is basically the advantage > claimed for the OTP. It is also an advantage which the OTP > cannot justify in practice unless we can prove that the OTP > keying sequence is unpredictable, which generally cannot be > done. That makes the practical OTP exceedingly "brittle": > if the opponents ever do gain the ability to predict the > sequence, they may be able to attack many messages, both > future and past. That would occur in the context of a > system supposedly "proven" secure; as usual, the user would > have no indication of security failure. > > "Dynamic Transposition does not need the assumption of > sequence unpredictability, because the sequence is hidden > behind a multitude of different sequences and permutations > which all produce the same result. And if the sequence > itself cannot be exposed, exploiting any predictability in > the sequence will be difficult. (This of course does not > mean that Dynamic Transposition cannot be attacked: > Brute-force attacks on the keys are still imaginable, which > is a good reason to use large random message keys.)" > > So exactly what about "an advantage which the OTP cannot justify in > practice" do you not understand? 
I was referring to your claim in the 'original' thread of DT where you claimed superiority of DT over OTP. Apparently you have forgotten what you wrote at that time, and in particular what John Savard commented on that point. See the quote below. > > >John Savard first wrote some lines on that and came > >back to it in a recent follow-up. I suggest that you leave > >out everything in that direction in your writings about DT. > >No reasonable man would have any internal reaction, when a > >girl says she is pretty. Things only become different, when > >it concerns the most beautiful creature of the world. > > It is your delusion that OTP is that "most beautiful creature." In > practice an OTP is nothing more than a stream cipher with a > particularly inconvenient key. It is only the *theoretical* OTP -- an > illusion which cannot protect real data -- which could be called > "beautiful" -- but that can't be used. > > The reason people want to use -- in practice -- an OTP is to get a > mathematical proof of strength. Unfortunately, the OTP proof > *assumes* a random sequence. Now, surely there can be no debate about > whether or not an "OTP" is secure if the keying sequence is > predictable. Also there should be little if any debate about whether > or not we can, by testing, *prove* a keying sequence is not > predictable. (Note that even a complex and random-like sequence > produced by stepping a counter into the cipher of your choice is > predictable -- if someone can just reverse that cipher. And whether > or not we can do that is irrelevant; for a proof of strength, we need > to know that no opponent can do it, and that is something we cannot > know.) So the OTP proof simply does not apply in practice, unless we > have some way to *prove* sequence unpredictability to cryptographic > levels of assurance. I have no delusion about the theoretical OTP. I know that it is a theoretical construct that can never be (exactly) obtained in practice, though approximated, and that there are big and essential difficulties in employing even its 'approximations'. But it was you who first brought the term OTP into the thread you initiated. Given that, the OTP can certainly be discussed in connection with the claim you made in the original thread, can't it? > > >> >Some people in > >> >crypto groups even object to using the term pseudo-OTP to > >> >designate that kind of stuff. > >> > >> Usually the objection is to using an obvious pseudo-RNG and calling > >> the result an OTP instead of a Stream Cipher. > >> > >> >(Once I got flames for having > >> >employed the term pseudo-OTP.) We should take care not to > >> >be contaminated in our terminology by the slang of > >> >snake-oil peddlers. (Of course they could complain, because > >> >anything used 'one-time' is OT, but that's evidently outside > >> >our present concern.) > >> > > >> >BTW, my argumentation in the previous follow-up could be > >> >simplified a bit. One does not have to use the big sequence > >> >S. It suffices to pick one arbitrary balanced block and > >> >feed repetitions of it to the algorithm. Basically, the > >> >argument boils down to the trivial fact that a PRNG has a > >> >finite period, while the theoretical OTP has, by definition, > >> >an infinite period. Hence there is no chance that the former > >> >can compete with the latter.
> >> > >> Had you actually read my "Revisited" article, you would have found the > >> statement: > >> > >> "This of course does not > >> mean that Dynamic Transposition cannot be attacked: > >> Brute-force attacks on the keys are still imaginable, which > >> is a good reason to use large random message keys." > >> > >> I am discussing a cipher which functions in practice, not some > >> theoretical thing whose only use is to confuse and confound. > >> > >> You appear to be discussing perfection which can never occur in > >> practice. > > > >Ah, I admit that I haven't carefully read your revision > >(because of your previous comparison with OTP, which has > >unfortunately relaxed my attention somewhat). But you > >should have retracted your claim of superiority over OTP > >in the revision, I suppose, in order to avoid confusion > >of readers. > > How about this: > > Dynamic Transposition (a practical cipher) is arguably superior to the > OTP (when used in practice), because under known-plaintext attack an > OTP is immediately vulnerable to a predictable keying sequence, while > Dynamic Transposition hides predictability behind a vast number of > different permutations, each of which could have created the given > ciphertext block. In your place I would omit any reference to OTP in connection with DT entirely. This would completely avoid a point that John Savard raised. See one of his follow-ups today. M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 23:18:50 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a72041f.34951192@news.powersurfr.com> References: <3a71f0ba.6153413@news.io.com> Newsgroups: sci.crypt Lines: 46 On Fri, 26 Jan 2001 21:52:58 GMT, ritter@io.com (Terry Ritter) wrote, in part: >That is not a semantic argument, or a philosophical question -- it is >the real situation. For example, perhaps we could show that a certain >amount of information is necessary to distinguish between our real RNG >and the theoretical ideal. Then if we could show that the information >is not available, we could have provable "perfection," much like other >cryptographic proofs. That's an interesting thought. But the 'answer' is that as soon as one has enough bits of information to do a successful brute-force search, we can distinguish, even if we don't know how to distinguish _efficiently_. And proving that we won't discover in future how to distinguish efficiently is, I believe, fundamentally impossible. In the specific case of Dynamic Transposition: We have N bit blocks. Our PRNG generates one of N! permutations for each block. Given known plaintext, all we know is that the permutation generated is one of a set of (N/2)!(N/2)! possible permutations. Then, if the internal state of our PRNG is x bits, the number of blocks of known plaintext required (on average) for uniquely establishing that state is z, where z is the least integer such that 2^x is less than or equal to (N!/((N/2)!(N/2)!))^z This even applies if there are two transposition layers, as long as x is the number of state bits in _both_ PRNGs. The security of Dynamic Transposition, like other ciphers with a fixed-length key shorter than the plaintext, depends on the work factor, and is not information-theoretic in nature. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 07:48:41 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A727320.6594CAE8@earthlink.net> References: <3a72041f.34951192@news.powersurfr.com> Newsgroups: sci.crypt Lines: 67 John Savard wrote: > > On Fri, 26 Jan 2001 21:52:58 GMT, ritter@io.com (Terry Ritter) wrote, > in part: > > >That is not a semantic argument, or a philosophical question -- it is > >the real situation. For example, perhaps we could show that a > >certain amount of information is necessary to distinguish between our > >real RNG and the theoretical ideal. Then if we could show that the > >information is not available, we could have provable "perfection," > >much like other cryptographic proofs. > > That's an interesting thought. > > But the 'answer' is that as soon as one has enough bits of information > to do a successful brute-force search, we can distinguish, even if we > don't know how to distinguish _efficiently_. And proving that we won't > discover in future how to distinguish efficiently is, I believe, > fundamentally impossible. > > In the specific case of Dynamic Transposition: > > We have N bit blocks. > > Our PRNG generates one of N! permutations for each block. > > Given known plaintext, all we know is that the permutation generated > is one of a set of (N/2)!(N/2)! possible permutations. > > Then, if the internal state of our PRNG is x bits, the number of > blocks of known plaintext required (on average) for uniquely > establishing that state is z, where z is the least integer such that > > 2^x > > is less than or equal to > > (N!/((N/2)!(N/2)!))^z Hmm. I'm not sure if this is right, so correct me if I'm wrong: x is the size of the key (size of the internal PRNG state) N is the size of the block z is the number of blocks needed to uniquely establish the key 2^x <= (N!/((N/2)!(N/2)!))^z x ln 2 <= z ln (N!/((N/2)!(N/2)!)) z >= (x ln 2) / (ln (N!/((N/2)!(N/2)!))) If the attacker has z or more blocks of known plaintext, we can assume he knows the key. If we simply make sure that we always send fewer than z blocks under any one key, the cipher is probably [provably?] secure against any attack save brute force. > This even applies if there are two transposition layers, as long as x > is the number of state bits in _both_ PRNGs. Unless I'm mistaken, both transpositions are done using the same PRNG. > The security of Dynamic Transposition, like other ciphers with a > fixed-length key shorter than the plaintext, depends on the work > factor, and is not information-theoretic in nature. -- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
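Savard's bound, in the form Goldberg rearranges it above, can be evaluated directly. A minimal sketch (Python; the sample values of N and x are illustrative assumptions, not parameters from any design in the thread), using the log-gamma function so the huge factorials never have to be formed:

    import math

    def blocks_to_fix_state(N, x):
        # Each known-plaintext block narrows the permutation to one of
        # (N/2)!(N/2)! candidates out of N!, i.e. reveals about
        # log2(N! / ((N/2)!(N/2)!)) bits of the PRNG state.
        bits_per_block = (math.lgamma(N + 1)
                          - 2 * math.lgamma(N // 2 + 1)) / math.log(2)
        return math.ceil(x / bits_per_block)

    print(blocks_to_fix_state(16, 64))     # 5 toy blocks cover a 64-bit state
    print(blocks_to_fix_state(4096, 992))  # 1 large block covers a 992-bit state

As both posts note, this counts information only; it says nothing about whether the state can be recovered with feasible work.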
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 20:22:00 GMT From: AllanW <allan_w@my-deja.com> Message-ID: <94sm8t$bnk$1@nnrp1.deja.com> References: <3a6dec9a.5680867@news.powersurfr.com> Newsgroups: sci.crypt Lines: 69 In article <3a6dec9a.5680867@news.powersurfr.com>, jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: > On Tue, 23 Jan 2001 20:19:18 GMT, ritter@io.com (Terry Ritter) wrote, > in part: > > >I think you should first plainly describe what you see as the > >weakness, before rushing on to try to fix it. > > I will do so. In fact, however, I already did so, I thought, in > another post to which you have just replied, but it looks as though my > point didn't get across. > > The "weakness" is: > > The set of permutations of n bits > > considered as a subset of the set of one-to-one and onto mappings from > the set of bit-balanced blocks of n bits to itself > > is a subgroup of those mappings, and therefore is not a generator of > the entire set of mappings. > > My "fix", therefore, is to propose another operation that can be > applied to a bit-balanced block to yield a bit-balanced block, so as > to allow a cipher acting on these blocks to produce a wider assortment > of mappings. > > Essentially, I quite agree that transposition as applied to > bit-balanced blocks is *better* than XOR. But since there already are > substitutions that are considerably better than XOR, the fact that DT > is a fixed amount better than XOR is not terribly attractive, > considering that substitutions can be made as complex as desired. > > Essentially, therefore, I see DT as similar to, say, DES with subkeys > that are generated by a stream cipher. Yes, DES, even with arbitrary > subkeys, can't produce all (2^64)! mappings of blocks to blocks; but > transposing bits can't produce all mappings of the entire set of > bit-balanced blocks to itself _either_. I don't know about Terry Ritter. I for one now understand what your point is. But how do you come to this conclusion? The very definition of a shuffle is that all permutations of the input are possible as output, right? So is there some combination of bit-balanced input and bit-balanced output of the same size, that can't be created by a shuffle with some input from a PRNG? To take an extremely small example: a 16-bit block could contain any of 12870 different values, starting with 0000000011111111 and ending up with 1111111100000000. Any bit-shuffle would also produce one of these 12870 values. Is one of the 12870 possible values impossible to reach by shuffling? > So my point is: DT is not as bad as XOR, but it is not as good as what > people can, and do, do with substitution. Although saying they > really do make such use of substitution is perhaps an exaggeration; > except for some of your advanced designs, there isn't that much out > there which approaches my "large-key brainstorm" - which, I think, in > the substitution world, strongly resembles what Dynamic Transposition > achieves. -- Allan_W@my-deja.com is a "Spam Magnet," never read. Please reply in newsgroups only, sorry. Sent via Deja.com http://www.deja.com/
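Both the count and the reachability question can be settled by enumeration at this size. A small sketch (Python; random.shuffle merely stands in for a keyed shuffling generator, which is an assumption of the sketch, not a construction from the thread):

    from itertools import combinations
    import random

    N = 16
    balanced = {sum(1 << i for i in ones)
                for ones in combinations(range(N), N // 2)}
    print(len(balanced))       # 12870 = C(16, 8) balanced 16-bit values

    block = [0] * (N // 2) + [1] * (N // 2)
    random.shuffle(block)      # one bit-permutation of a balanced block...
    value = sum(b << i for i, b in enumerate(block))
    print(value in balanced)   # ...lands in the same set: True

Every balanced value is reachable from every other under some permutation, which is AllanW's point; the posts below turn to what is not reachable.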
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 20:43:15 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a71e0b6.25885697@news.powersurfr.com> References: <94sm8t$bnk$1@nnrp1.deja.com> Newsgroups: sci.crypt Lines: 29 On Fri, 26 Jan 2001 20:22:00 GMT, AllanW <allan_w@my-deja.com> wrote, in part: >To take an extremely small example: a 16-bit block could contain >any of 12870 different values, starting with 0000000011111111 >and ending up with 1111111100000000. Any bit-shuffle would also >produce one of these 12870 values. Is one of the 12870 possible >values impossible to reach by shuffling? No. And not one of the 2^64 possible outputs is impossible to reach by DES, either. But Terry Ritter appears to be criticizing DES because some of the (2^64)! possible overall substitutions are impossible to reach. Well, by shuffling, I can't reach a total overall pairing of bit balanced inputs to bit balanced outputs such that 0000000011111111 -> 1111111100000000 and 0000001010111111 -> 0000000011111111 so it's the 12870! possible overall correspondences of input to output that can't all be reached. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 21:48:40 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a71f01d.5996441@news.io.com> References: <3a71e0b6.25885697@news.powersurfr.com> Newsgroups: sci.crypt Lines: 83 On Fri, 26 Jan 2001 20:43:15 GMT, in <3a71e0b6.25885697@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Fri, 26 Jan 2001 20:22:00 GMT, AllanW <allan_w@my-deja.com> wrote, >in part: > >>To take an extremely small example: a 16-bit block could contain >>any of 12870 different values, starting with 0000000011111111 >>and ending up with 1111111100000000. Any bit-shuffle would also >>produce one of these 12870 values. Is one of the 12870 possible >>values impossible to reach by shuffling? > >No. And not one of the 2^64 possible outputs is impossible to reach by >DES, either. > >But Terry Ritter appears to be criticizing DES because some of the >(2^64)! possible overall substitutions are impossible to reach. Some? SOME???!! (From my factorials page: http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM ). (The number of different values in 64 bits is 2**64.) 2**64 ~= 18446744073709552000 = A. A! ~= some value 1.15397859e+21 bits long. The DES keyspace is 56 bits, a value 2 bits long. Let's see: A value 10**21 bits long compared to a value 2 bits long. Yeah, I'd say "some" were impossible to reach. The DES keyspace is such a tiny fragment of the potential keyspace that there is no assurance at all that it retains the smooth theoretical properties of distribution for which we might hope. >Well, by shuffling, I can't reach a total overall pairing of bit >balanced inputs to bit balanced outputs such that > >0000000011111111 -> 1111111100000000 > >and > >0000001010111111 -> 0000000011111111 Why not? What does that mean? Are you criticizing the balancing algorithm? Fine, use another. With respect to the shuffling, every permutation is possible. The expected defect is statistical only, in the sense that there is no guarantee that all permutations have the same probability. But since the number of permutations is so vast, we could never hope to traverse those so we could establish such bias, or, presumably, use it. Of course, all of that would assume that we could tell what each permutation was. With known-plaintext, we *know* a particular entry in a substitution table. But known-plaintext does *not* expose a particular permutation with Dynamic Transposition. >so it's the 12870! possible overall correspondences of input to output >that can't all be reached. Here we have 16 things (bits), and so can have 16! (about 2**44) permuted results. Each result is not unique, though, since any arrangement of 0's (8!) and any arrangement of 1's (8!) produces the same result. 8! * 8! is about 2**30, so there should be about 2**14 unique result values, each of which is produced by 2**30 different permutations. So instead of getting every possible 16 bit value, we can see that balancing has reduced the number of different values to about 14 bits' worth. What is the 12870! value? --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 23:30:36 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a72063f.35495521@news.powersurfr.com> References: <3a71f01d.5996441@news.io.com> Newsgroups: sci.crypt Lines: 91 On Fri, 26 Jan 2001 21:48:40 GMT, ritter@io.com (Terry Ritter) wrote, in part: >The DES keyspace is 56 bits, a value 2 bits long. >Let's see: A value 10**21 bits long compared to a value 2 bits long. >Yeah, I'd say "some" were impossible to reach. Actually, 56 is a 6-bit-long binary number. And DES doesn't have 56 different keys, it has 2^56 different keys, as noted below. >The DES keyspace is such a tiny fragment of the potential keyspace >that there is no assurance at all that it retains the smooth >theoretical properties of distribution for which we might hope. quoting me: >>Well, by shuffling, I can't reach a total overall pairing of bit >>balanced inputs to bit balanced outputs such that >> >>0000000011111111 -> 1111111100000000 >> >>and >> >>0000001010111111 -> 0000000011111111 > >Why not? What does that mean? Are you criticizing the balancing >algorithm? Fine, use another. This has nothing to do with the balancing algorithm. This is about Dynamic Transposition itself. All possible N! transpositions of an N-bit block may be effected. However, 16! is also a tiny fraction of 12870!, just as 2^56 (not 56) is a tiny fraction of (2^64)!. If our PRTG - pseudorandom transposition generator - generates the transposition 9 12 14 16 15 11 10 13 3 5 1 2 7 6 4 8 then if the plaintext was 0000000011111111 the ciphertext will be 1111111100000000 and if the plaintext was 0000001010111111 the ciphertext would be instead 1111110100001000 note that just as we have switched two bits in our plaintext, we have switched two bits in our ciphertext.
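Savard's example is mechanical enough to replay. A short sketch (Python), using the permutation and blocks from his post; the underlying invariant is that a fixed bit-permutation preserves the Hamming distance between any two plaintexts, so his two target pairs, which would change that distance, cannot come from any one transposition:

    perm = [9, 12, 14, 16, 15, 11, 10, 13, 3, 5, 1, 2, 7, 6, 4, 8]  # 1-based

    def transpose(bits, perm):
        return "".join(bits[p - 1] for p in perm)

    print(transpose("0000000011111111", perm))  # 1111111100000000
    print(transpose("0000001010111111", perm))  # 1111110100001000

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # Inputs 2 apart always map to outputs 2 apart; Savard's target
    # outputs are 16 apart, so no single transposition yields both pairs.
    print(hamming("0000000011111111", "0000001010111111"))  # 2
    print(hamming("1111111100000000", "0000000011111111"))  # 16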
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 01:20:23 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a722237.18824326@news.io.com> References: <3a72063f.35495521@news.powersurfr.com> Newsgroups: sci.crypt Lines: 163 On Fri, 26 Jan 2001 23:30:36 GMT, in <3a72063f.35495521@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Fri, 26 Jan 2001 21:48:40 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>The DES keyspace is 56 bits, a value 2 bits long. > >>Let's see: A value 10**21 bits long compared to a value 2 bits long. >>Yeah, I'd say "some" were impossible to reach. > >Actually, 56 is a 6-bit-long binary number. And DES doesn't have 56 >different keys, it has 2^56 different keys, as noted below. No, it's time to clear this up. I'm sorry I screwed up, because as soon as one does that, the remaining total of one's work is just cast aside. So let's go through it again: DES has a 64-bit block. DES has 2**64 unique data values. DES represents a substitution table with 2**64 entries. There are (2**64)! such tables. 2**64 is 18446744073709552000. 18446744073709552000! is some number with about 1.15397859e+21 bits. SO, DES has a potential keyspace of about 10**21 bits, of which a keyspace of 56 bits is actually implemented. We can compare the two by subtracting 56 from 10**21 which gives us, naturally, 10**21. This is how much larger the actual keyspace is (in bits) than the implemented keyspace. Alternately, one might say that DES has a potential keyspace of 2**(10**21) keys, of which 2**56 keys are actually selectable. The log base 2 (bit-length) comparison between 10**21 and 56 is correct. >>The DES keyspace is such a tiny fragment of the potential keyspace >>that there is no assurance at all that it retains the smooth >>theoretical properties of distribution for which we might hope. > >quoting me: >>>Well, by shuffling, I can't reach a total overall pairing of bit >>>balanced inputs to bit balanced outputs such that >>> >>>0000000011111111 -> 1111111100000000 >>> >>>and >>> >>>0000001010111111 -> 0000000011111111 >> >>Why not? What does that mean? Are you criticizing the balancing >>algorithm? Fine, use another. > >This has nothing to do with the balancing algorithm. > >This is about Dynamic Transposition itself. > >All possible N! transpositions of an N-bit block may be effected. > >However, 16! is also a tiny fraction of 12870!, just as 2^56 (not 56) >is a tiny fraction of (2^64)!. First, where does 12870! come from? Absent balancing, Dynamic Transposition is just permutations. When we have N elements, the number of permutations of those is N!. Here we permute bits, and you present 16 of them, so we have 16!, which is exactly 20922789888000, a number about 45 bits long. Where is this 12870! coming from? >If our PRTG - pseudorandom transposition generator - generates the >transposition > >9 12 14 16 15 11 10 13 3 5 1 2 7 6 4 8 > >then if the plaintext was > >0000000011111111 > >the ciphertext will be > >111111110000000 > >and if the plaintext was > >0000001010111111 > >the ciphertext would be instead > >1111110100001000 > >note that just as we have switched two bits in our plaintext, we have >switched two bits in our ciphertext. 
> >A substitution cannot be reached by means of any transposition that >will at the same time take > >0000000011111111 > >to > >1111111100000000 > >and take > >0000001010111111 > >to > >0000000011111111 > >since we are inverting a different number of bits in the plaintext and >the ciphertext. No. As far as I can tell, that has no relationship to Dynamic Transposition at all, where a permutation is produced at random, and then used once. The opponents do not get to explore the permutation with different data values. DES, of course, is specifically designed to be re-used without re-keying on every block. But I think you are not discussing DES, but instead some sort of Dynamic DES which does change keys on each block. And that is fine, but there is no hope of being able to cover the entire block-cipher keyspace with that keying either. >Thus, as with DES, some total overall mappings, considering the whole >ensemble of (plaintext, ciphertext) pairs generated at a single point >in the process, are unreachable. > >This is not so much a criticism of Dynamic Transposition itself as of >your claim that it is vastly superior to DES with >stream-cipher-generated subkeys. Dynamic Transposition can reach every possible permutation. With the 16-bit block you gave, every one of 16! (and NOT (2**16)!) or 20922789888000 possible permutations can be selected. If you meant to use a 64-bit block, then the number of possible permutations is "just" 64!, some number with about 296 bits. If you meant my suggested 4096-bit block, then the number of possible permutations is just 4096!, some number with about 43,251 bits. But DES with a 64-bit block does not have 64 things to permute; instead it has a substitution "table" with 2**64 entries. The number of such tables is (2**64)!, a number having 10**21 bits. That is the keyspace. DES supplies 56 bits out of 10**21. There is simply no hope of being able to reach every possible conventional block cipher substitution "table" in the same way that Dynamic Transposition can reach every possible permutation. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
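The bit-length figures traded in this exchange are easy to reproduce. A back-of-envelope sketch (Python), using the log-gamma function so the factorials themselves never have to be computed:

    import math

    def log2_factorial(n):
        return math.lgamma(n + 1) / math.log(2)  # log2(n!)

    print(log2_factorial(16))        # ~44.3: so 16! is a 45-bit number
    print(log2_factorial(64))        # ~296:  permutations of a 64-bit block
    print(log2_factorial(4096))      # ~43,251: permutations of a 4096-bit block
    print(log2_factorial(2.0**64))   # ~1.15e21: log2((2**64)!), all 64-bit tables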
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 01:46:48 GMT From: "Douglas A. Gwyn" <DAGwyn@null.net> Message-ID: <3A7228A6.F1F099B7@null.net> References: <3a722237.18824326@news.io.com> Newsgroups: sci.crypt Lines: 14 Terry Ritter wrote: > SO, DES has a potential keyspace of about 10**21 bits, of which a > keyspace of 56 bits is actually implemented. That's an utterly pointless exercise, since it says nothing about the very real differences in cryptosystems. *DES* does *not* have a "potential keyspace" of yadayada; what has that space is the class of *all possible transformations* from 64 bits to 64 bits. An actual cryptographic transformation is divided into two parts: a general system that constrains the class of transformations to just those having a certain structure, and a key that is used to select the particular member from that constrained class. There are obvious practical reasons for predividing the space in that way.
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 04:34:08 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a724fa9.1324824@news.io.com> References: <3A7228A6.F1F099B7@null.net> Newsgroups: sci.crypt Lines: 41 On Sat, 27 Jan 2001 01:46:48 GMT, in <3A7228A6.F1F099B7@null.net>, in sci.crypt "Douglas A. Gwyn" <DAGwyn@null.net> wrote: >Terry Ritter wrote: >> SO, DES has a potential keyspace of about 10**21 bits, of which a >> keyspace of 56 bits is actually implemented. > >That's an utterly pointless exercise, since it says nothing >about the very real differences in cryptosystems. *DES* does >*not* have a "potential keyspace" of yadayada; what has that >space is the class of *all possible transformations* from 64 >bits to 64 bits. An actual cryptographic transformation is >divided into two parts: a general system that constrains the >class of transformations to just those having a certain >structure, and a key that is used to select the particular >member from that constrained class. There are obvious >practical reasons for predividing the space in that way. You know, it really would help if you would follow the thread and address the point in proper context. Of course something will be "pointless" -- if you don't get the point. As far as I could tell, Savard was saying that DES was missing or unable to select only "some" of all possible emulated substitution tables of the 64-bit block size. I addressed that and showed it to be nowhere near true. The real situation is that the DES keyspace is just a virtually infinitesimal fraction of the true keying possibilities. The values I used are correct, the computations are correct, and the comparison is correct, as far as it goes. If you have problems with the conclusion, you probably don't understand the question. Now, you can claim that DES doesn't need to cover the full keying range for a 64-bit block cipher. Fine. Great. You won't shock anybody with that. But that was not the comparison or the issue. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 02:11:54 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a722bd8.45121722@news.powersurfr.com> References: <3a722237.18824326@news.io.com> Newsgroups: sci.crypt Lines: 44 On Sat, 27 Jan 2001 01:20:23 GMT, ritter@io.com (Terry Ritter) wrote, in part: >There is simply no hope of being able to reach every possible >conventional block cipher substitution "table" in the same way that >Dynamic Transposition can reach every possible permutation. Absolutely! I agree with that 100%. But what I'm on about is that that's a comparison between apples and oranges. Two rounds of DES with independent subkeys can produce 2^32 different substitutions which will take any plaintext block P to any ciphertext block C. Similarly, "every possible permutation" can produce, for every N-bit balanced block P, every possible N-bit balanced block C, in (N/2)!(N/2)! different ways. However, DES cannot produce every possible overall substitution, every possible table C(0),C(1),....C(2^64-1) of output ciphertext blocks from input plaintext blocks. And transposition, period, also cannot produce every possible overall substitution, every possible table C(0000000011111111) .... C(1111111100000000) where the 12870 possible balanced 16-bit blocks, (for N=16) are assigned, as ciphertext outputs, to the 12870 possible balanced 16-bit blocks as inputs. Transposition of balanced blocks is "better than XOR", because there is more than one way to get from a particular P to a particular C, but it does not have the sort of exhaustiveness that you are demanding a substitution-based polyalphabetic block cipher have in order to be comparable to Dynamic Transposition. The exhaustiveness it does have, "all possible transpositions", is not equivalent. Maybe it seems so because 'transposition' is treated as a class of encipherment in itself, comparable to 'substitution'; this isn't a semantic problem, it's a conceptual problem; I think you may be a victim of the Whorf hypothesis, that language limits how we can think. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 04:51:04 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a7253a4.2343752@news.io.com> References: <3a722bd8.45121722@news.powersurfr.com> Newsgroups: sci.crypt Lines: 88 On Sat, 27 Jan 2001 02:11:54 GMT, in <3a722bd8.45121722@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Sat, 27 Jan 2001 01:20:23 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>There is simply no hope of being able to reach every possible >>conventional block cipher substitution "table" in the same way that >>Dynamic Transposition can reach every possible permutation. > >Absolutely! I agree with that 100%. Great, back on track! >But what I'm on about is that that's a comparison between apples and >oranges. > >Two rounds of DES with independent subkeys can produce 2^32 different >substitutions which will take any plaintext block P to any ciphertext >block C. That would be 2**32 each round, right? So we have 2**64 possibilities. One might well ask how that could possibly differ from a 64-bit XOR. In what way has "substitution" had an impact? >Similarly, "every possible permutation" can produce, for every N-bit >balanced block P, every possible N-bit balanced block C, in >(N/2)!(N/2)! different ways. And I claim that as a basis for Dynamic Transposition strength which has no analog in the modern block cipher. >However, DES cannot produce every possible overall substitution, every >possible table C(0),C(1),....C(2^64-1) of output ciphertext blocks >from input plaintext blocks. > >And transposition, period, also cannot produce every possible overall >substitution, every possible table C(0000000011111111) .... >C(1111111100000000) where the 12870 possible balanced 16-bit blocks, >(for N=16) are assigned, as ciphertext outputs, to the 12870 possible >balanced 16-bit blocks as inputs. I have no idea what this means. The very reason we talk about transposition is that we are not talking about substitution tables. One can't compare substitution counts from a cipher which does in fact emulate substitution tables to a different cipher which does not. Transposition does not try to create an emulated table, and so can scarcely be faulted for not achieving that. So what is your point? >Transposition of balanced blocks is "better than XOR", because there >is more than one way to get from a particular P to a particular C, but >it does not have the sort of exhaustiveness that you are demanding a >substitution-based polyalphabetic block cipher have in order to be >comparable to Dynamic Transposition. The exhaustiveness it does have, >"all possible transpositions", is not equivalent. Of course not. Dynamic Transposition is a permutation cipher. It does not emulate huge substitution tables. That said, for the keyspace it uses -- permutations -- Dynamic Transposition can traverse the full keyspace. In contrast, for the keyspace the modern block cipher uses -- emulated huge substitution tables -- no design can touch more than the tiniest fraction of that keyspace. >Maybe it seems so because 'transposition' is treated as a class of >encipherment in itself, comparable to 'substitution'; this isn't a >semantic problem, it's a conceptual problem; I think you may be a >victim of the Whorf hypothesis, that language limits how we can think. The problem is yours, not mine. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 05:46:44 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a725cda.248233@news.powersurfr.com> References: <3a7253a4.2343752@news.io.com> Newsgroups: sci.crypt Lines: 67 On Sat, 27 Jan 2001 04:51:04 GMT, ritter@io.com (Terry Ritter) wrote, in part: >Of course not. Dynamic Transposition is a permutation cipher. It >does not emulate huge substitution tables. >That said, for the keyspace it uses -- permutations -- Dynamic >Transposition can traverse the full keyspace. >In contrast, for the keyspace the modern block cipher uses -- emulated >huge substitution tables -- no design can touch more than the tiniest >fraction of that keyspace. At least I now know my point has been understood, and I don't need to explain the mathematics of it again. What remains in dispute, though, is my claim that what you have said here, although true, is in no way a valid argument in favor of Dynamic Transposition. As you've noted, one small error causes people to neglect what is valid, and this is why I'm pounding on this one point - I believe I'm not the only one who would see this error, but most of the others have already stopped paying attention to your work. Essentially, my argument is that it doesn't matter to an Opponent what you are "attempting" to emulate. All that matters is what your cipher does or does not do. Neither a block cipher nor transposition provides all possible substitutions of input blocks to output blocks. Both a block cipher and transposition provide several alternative paths for any plaintext block to become any ciphertext block, unlike simple XOR, which provides only one. Transposition is 'exhaustive' in the sense that it provides every transformation in a particular tidy closed set of transformations. But that isn't even a strength: that is a weakness, just as it would have been a weakness if DES was a group. Suppose that, between the two transpositions, I inserted an S-box that changed under the control of a good PRNG with each round, of the form: 0000 -> 0000 1111 -> 1111 0001 -> 0100 0111 -> 1011 0010 -> 0010 1011 -> 1110 0100 -> 1000 1101 -> 0111 1000 -> 0001 1110 -> 1101 0011 -> 1010 0101 -> 0110 0110 -> 1100 1001 -> 0011 1010 -> 0101 1100 -> 1001 In other words, I'm substituting small portions of the block with other substitutes that have the same number of 1 bits. Now I'm producing substitutions of input block to output block that can't be reached by transposition alone. Have I _weakened_ the cipher? John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
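Savard's table can be checked mechanically. A quick sketch (Python) confirming that it is a bijection on 4-bit values and that it preserves the 1-bit count of each nibble, so applying it to aligned nibbles keeps the whole block bit-balanced:

    # Savard's balanced 4-bit S-box, transcribed from the post above.
    SBOX = {
        0b0000: 0b0000, 0b1111: 0b1111,
        0b0001: 0b0100, 0b0111: 0b1011,
        0b0010: 0b0010, 0b1011: 0b1110,
        0b0100: 0b1000, 0b1101: 0b0111,
        0b1000: 0b0001, 0b1110: 0b1101,
        0b0011: 0b1010, 0b0101: 0b0110,
        0b0110: 0b1100, 0b1001: 0b0011,
        0b1010: 0b0101, 0b1100: 0b1001,
    }
    assert sorted(SBOX.values()) == list(range(16))   # a bijection
    assert all(bin(k).count("1") == bin(v).count("1")
               for k, v in SBOX.items())              # balance preserved
    print("bijective and balance-preserving")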
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 22:42:59 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a734eeb.9348154@news.io.com> References: <3a725cda.248233@news.powersurfr.com> Newsgroups: sci.crypt Lines: 174 On Sat, 27 Jan 2001 05:46:44 GMT, in <3a725cda.248233@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Sat, 27 Jan 2001 04:51:04 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>Of course not. Dynamic Transposition is a permutation cipher. It >>does not emulate huge substitution tables. > >>That said, for the keyspace it uses -- permutations -- Dynamic >>Transposition can traverse the full keyspace. > >>In contrast, for the keyspace the modern block cipher uses -- emulated >>huge substitution tables -- no design can touch more than the tiniest >>fraction of that keyspace. > >At least I now know my point has been understood, and I don't need to >explain the mathematics of it again. I believe it was I who explained to you that DES was not unable to select just "some" possible keys, but instead was not able to select virtually all of them. Shall I go back and quote in depth? >What remains in dispute, though, is my claim that what you have said >here, although true, is in no way a valid argument in favor of Dynamic >Transposition. Let me see; before, you were arguing (with wrong numbers) that the argument was not true. Now that I have shown the unarguable numbers, it's suddenly true, but not a valid argument. Hmmm. Have you already decided on the conclusions, such that any argument which does not reach them must be "in no way a valid argument"? The Real Argument It appears to me that one advantage of having a combinatoric sort of encryption is to support mathematical analysis. Dynamic Transposition is based on balancing and permuting data, and it is specifically intended to support any possible permutation. As a consequence, what we know about the theoretical distribution of permutations may apply, at least approximately, in actual practice. This stands in direct contrast to the OTP, whose theoretical results generally do not apply in practice. Modern block ciphers are based on substitution: they emulate huge keyed substitution tables. But since that structure represents a truly immense keyspace, they actually key only a tiny -- almost infinitesimal -- fraction of it. As a result, we cannot apply what we know about the distribution of substitution tables, because only a very tiny number of presumably specialized tables are actually realized. The modern block cipher simply does not begin to approach the associated mathematical model. >As you've noted, one small error causes people to neglect what is >valid, and this is why I'm pounding on this one point - I believe I'm >not the only one who would see this error, but most of the others have >already stopped paying attention to your work. Is it just me, or is it really strange to have the belief that you know what others think and what others are doing and why? >Essentially, my argument is that it doesn't matter to an Opponent what >you are "attempting" to emulate. All that matters is what your cipher >does or does not do. Essentially, your argument is a "Red Herring." Since the position you attribute to me is not the reason I think this feature is beneficial, showing that it is wrong is simply irrelevant. >Neither a block cipher nor transposition provides all possible >substitutions of input blocks to output blocks. 
A transposition does not provide all substitutions? What is wrong with this picture? Since a substitution cipher *does* substitutions, it is not entirely unreasonable to expect it to do all of them. Since a transposition cipher does *not* do substitutions, it is quite *un*reasonable to expect that. With such an argument, you are being unreasonable, as well as irrelevant. >Both a block cipher and transposition provide several alternative >paths for any plaintext block to become any ciphertext block, unlike >simple XOR, which provides only one. Actually, transposition is a block cipher. It just is not a conventional block cipher based on emulating huge substitutions. >Transposition is 'exhaustive' in the sense that it provides every >transformation in a particular tidy closed set of transformations. But >that isn't even a strength: that is a weakness, just as it would have >been a weakness if DES was a group. Worse and worse. The issue of the groupiness of DES concerns multiple ciphering with DES alone (e.g., Triple-DES). If DES had been a group, then multiple ciphering could not have added strength, and would not have expanded the keyspace. The reason strength can be added in this way is that the cipher did not provide it in the first place. Since Dynamic Transposition provides all of its strength in the original design, it does not need to be multiple ciphered with itself. And, indeed, multiple ciphering with itself alone will not provide additional strength. (I just note that we could design a Dynamic Transposition cipher with a real keyspace of tens of thousands of bits, were that to be desirable.) How is being too strong suddenly a weakness, and how is this any sort of reasonable argument at all? >Suppose that, between the two transpositions, I inserted an S-box, >that changed under the control of a good PRNG with each round, of the >form: > >0000 -> 0000 1111 -> 1111 > >0001 -> 0100 0111 -> 1011 >0010 -> 0010 1011 -> 1110 >0100 -> 1000 1101 -> 0111 >1000 -> 0001 1110 -> 1101 > >0011 -> 1010 >0101 -> 0110 >0110 -> 1100 >1001 -> 0011 >1010 -> 0101 >1100 -> 1001 > >In other words, I'm substituting small portions of the block with >other substitutes that have the same number of 1 bits. > >Now I'm producing substitutions of input block to output block that >can't be reached by transposition alone. Have I _weakened_ the cipher? Since that does not describe "the cipher" -- Dynamic Transposition -- it would be hard to say whether it was "weakened" or not. But in a 3-level system consisting of transposition -> substitution -> transposition, arbitrary substitutions *do* weaken the last transposition level, if the substitution outputs are not bit-balanced. If your position is that Dynamic Transposition is bad because we can't multi-cipher with itself alone and gain strength, well, by gosh! I guess you got me. (I note that the current design is internally limited to "only" a 992-bit keyspace. But I guess that is not enough, since multiple ciphering has somehow become so important.) What I think is that you have this idea for a cipher of your own -- not Dynamic Transposition at all, but using transposition -- which you would really prefer to discuss but just have not done so. The result is that it gets tangled into the discussion of Dynamic Transposition in ways that are difficult to separate. Don't do that. If you want to discuss your design, put it out in a forthright manner. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 28 Jan 2001 04:43:04 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a739bf2.54714929@news.powersurfr.com> References: <3a734eeb.9348154@news.io.com> Newsgroups: sci.crypt Lines: 159 On Sat, 27 Jan 2001 22:42:59 GMT, ritter@io.com (Terry Ritter) wrote, in part: >On Sat, 27 Jan 2001 05:46:44 GMT, in ><3a725cda.248233@news.powersurfr.com>, in sci.crypt >jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >>At least I now know my point has been understood, and I don't need to >>explain the mathematics of it again. >I believe it was I who explained to you that DES was not unable to >select just "some" possible keys, but instead was not able to select >virtually all of them. Shall I go back and quote in depth? >>What remains in dispute, though, is my claim that what you have said >>here, although true, is in no way a valid argument in favor of Dynamic >>Transposition. >Let me see; before, you were arguing (with wrong numbers) that the >argument was not true. Now that I have shown the unarguable numbers, >it's suddenly true, but not a valid argument. Hmmm. >Have you already decided on the conclusions, such that any argument >which does not reach them must be "in no way a valid argument"? I had no problem accepting that DES cannot produce all (2^64)! possible codebooks. As soon as you first said that, I agreed, as it is something obviously true. The point I was trying to make was that transposing bits doesn't produce all possible codebooks of balanced blocks, and it looked to me, until the post I replied to, that that wasn't clear. That all possible codebooks, and not all possible transpositions, is the proper equivalent to all possible codebooks in DES is the point I have been - repetitiously - dealing with. So I don't think I've retreated in any way from what I have been saying. >>As you've noted, one small error causes people to neglect what is >>valid, and this is why I'm pounding on this one point - I believe I'm >>not the only one who would see this error, but most of the others have >>already stopped paying attention to your work. >Is it just me, or is it really strange to have the belief that you >know what others think and what others are doing and why? I am not a mindreader! No! I am one of the evil brainwashed zombies created by Bruce Schneier and the NSA! That's how I know what the other zombies think! I'm sorry to have to get silly on you, but although I'm a bit more of an independent thinker than the real "crypto gods", I still respect their knowledge and expertise, and understand the reasoning behind the conventional position. Yes, when I see what appear to me to be errors, I'm fierce in getting my point across. Not because I want to discredit your work. Because I want to get those errors out of the way, so that they won't interfere with the entrance of your contributions to the mainstream of the science. >>Neither a block cipher nor transposition provides all possible >>substitutions of input blocks to output blocks. >A transposition does not provide all substitutions? What is wrong >with this picture? >Since a substitution cipher *does* substitutions, it is not entirely >unreasonable to expect it to do all of them. >Since a transposition cipher does *not* do substitutions, it is quite >*un*reasonable to expect that. >With such an argument, you are being unreasonable, as well as >irrelevant. 
From the Opponent's point of view, if Cipher 1 takes general blocks IN and puts general blocks OUT, and Cipher 2 takes balanced blocks IN and puts general blocks OUT, the fact that neither cipher 1 nor cipher 2 produces all possible input block/output block codebooks is what counts. That Cipher 1 fails to do so because it is four rounds of DES, and Cipher 2 fails to do so because it is a transposition ... well, while the detailed structure of the cipher one is attacking is very important to the cryptanalyst, being a transposition is not some kind of "excuse". >Since Dynamic Transposition provides all of its strength in the >original design, it does not need to be multiple ciphered with itself. >And, indeed, multiple ciphering with itself alone will not provide >additional strength. (I just note that we could design a Dynamic >Transposition cipher with a real keyspace of tens of thousands of >bits, were that to be desirable.) >How is being too strong suddenly a weakness, and how is this any sort >of reasonable argument at all? Not being able to _gain_ strength is a weakness. "Being too strong" is the part of your claim that is false, because 4096! is not as big as (2^64)! ... because 'all possible transpositions' is only one step better than 'all possible XORs' and is no better than 'all possible two-round DES encipherments'. >>In other words, I'm substituting small portions of the block with >>other substitutes that have the same number of 1 bits. >>Now I'm producing substitutions of input block to output block that >>can't be reached by transposition alone. Have I _weakened_ the cipher? >Since that does not describe "the cipher" -- Dynamic Transposition -- >it would be hard to say whether it was "weakened" or not. >But in a 3-level system consisting of transposition -> substitution -> >transposition, arbitrary substitutions *do* weaken the last >transposition level, if the substitution outputs are not bit-balanced. Yes, that's why I chose the substitution that wouldn't do that. >If your position is that Dynamic Transposition is bad because we can't >multi-cipher with itself alone and gain strength, well, by gosh! I >guess you got me. (I note that the current design is internally >limited to "only" a 992-bit keyspace. But I guess that is not enough, >since multiple ciphering has somehow become so important.) Well, it *does* take more than 992 bits to describe a codebook with 2^64 entries, doesn't it? >What I think is that you have this idea for a cipher of your own -- >not Dynamic Transposition at all, but using transposition -- which you >would really prefer to discuss but just have not done so. The result >is that it gets tangled into the discussion of Dynamic Transposition >in ways that are difficult to separate. Don't do that. If you want >to discuss your design, put it out in a forthright manner. Well, I pointed you at my 'Large-Key Brainstorm'. I'm saying that two transpositions are _roughly_ comparable to four rounds of DES, except that Dynamic Transposition can be scaled up to much larger block sizes. So I am comparing DT to 'another cipher', basically in respect of my original argument that the (admittedly, trifling) bandwidth cost of DT is _unnecessary_ since DT can be equalled in the substitution world. And this is why I raised the subject of _perfect_ bit-balancing conversions, so as to allow DT to be multi-ciphered with substitution systems, so it _can_ make a contribution by means of its different algebraic structure. 
Your basic argument for why DT is important mentions the OTP - which (no, I didn't get this from an Ouija board) raises a red "Snake Oil!" flag for the conventionally-minded - and then has what I believe to be basic flaws, and flaws of a nature as to be apparent to more individuals than merely myself. Which is why, I fear, some people may not have read any further (this, admittedly, is speculation) and why there are not certain other newsgroup regulars raising points in respect of Dynamic Transposition, critical or otherwise. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 28 Jan 2001 13:10:28 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A741A49.516D16F2@earthlink.net> References: <3a739bf2.54714929@news.powersurfr.com> Newsgroups: sci.crypt Lines: 171 John Savard wrote: [snip] > >Have you already decided on the conclusions, such that any argument > >which does not reach them must be "in no way a valid argument"? > > I had no problem accepting that DES cannot produce all (2^64)! > possible codebooks. As soon as you first said that, I agreed, as it is > something obviously true. > > The point I was trying to make was that transposing bits doesn't > produce all possible codebooks of balanced blocks, and it looked to > me, until the post I replied to, that that wasn't clear. We don't need to produce all possible *codebooks*, just all possible *codes*. You seem to want: ForEach k ( k is a key ) DT_k produces one of (N!/((N/2)!(N/2)!))! substitutions. This is indeed something which DT does NOT do. However, it's not needed. We just need for any input code to be able to produce any of the possible output codes, given an appropriate key. Since this English sentence is slightly ambiguous, I'll say it mathematically: ForAll pt, ct ( pt, ct are bit balanced blocks ) ThereExists k ( k is a PRNG state ) DT_k(pt) = ct This is like saying [for OTP] ForAll pt, ct ( pt, ct are equal length bitstrings ) ThereExists k ( k is a bitstring with length equal to pt ) XOR(pt,k) = ct > That all possible codebooks, and not all possible transpositions, is > the proper equivalent to all possible codebooks in DES is the point I > have been - repetitiously - dealing with. So I don't think I've > retreated in any way from what I have been saying. I think I might be able to say ForAll pt, ct ( pt, ct are length-64 bitstrings ) ThereExists k ( k is a PRNG state ) yourDES_k(pt) = ct However, it is just as impossible, if not more so, to randomly select from all (2^64)! substitutions as it is to randomly select from all (N!/((N/2)!(N/2)!))! [bit balanced] substitutions (dynamic keying or not). [snip] > >>Neither a block cipher nor transposition provides all possible > >>substitutions of input blocks to output blocks. > > >A transposition does not provide all substitutions? What is wrong > >with this picture? > > >Since a substitution cipher *does* substitutions, it is not entirely > >unreasonable to expect it to do all of them. > > >Since a transposition cipher does *not* do substitutions, it is quite > >*un*reasonable to expect that. > > >With such an argument, you are being unreasonable, as well as > >irrelevant. > > From the Opponent's point of view, if Cipher 1 takes general blocks IN > and puts general blocks OUT, and Cipher 2 takes balanced blocks IN and > puts general blocks OUT, the fact that neither cipher 1 nor cipher 2 > produces all possible input block/output block codebooks is what > counts. Whoops, where do you get off converting balanced blocks to general blocks? > That Cipher 1 fails to do so because it is four rounds of DES, and > Cipher 2 fails to do so because it is a transposition ... well, while > the detailed structure of the cipher one is attacking is very > important to the cryptanalyst, being a transposition is not some kind > of "excuse". I have no clue what you just said here. Would you mind rephrasing that? > >Since Dynamic Transposition provides all of its strength in the > >original design, it does not need to be multiple ciphered with > >itself.
> >And, indeed, multiple ciphering with itself alone will not > >provide additional strength. (I just note that we could design a > >Dynamic Transposition cipher with a real keyspace of tens of > >thousands of bits, were that to be desirable.) > > >How is being too strong suddenly a weakness, and how is this any sort > >of reasonable argument at all? > > Not being able to _gain_ strength is a weakness. Does that mean that OTP has a weakness? You can't double encipher with OTP and gain strength. IIUC, what Ritter meant by "multiple ciphering with itself alone will not provide additional strength" was that if we encipher the message with DT and then, without using a new key or altering the state, encipher again, the cipher will not be stronger. This is analogous to the following: Here's cipher 1, it uses a keystream generator: pt, ct = plaintext, ciphertext ks = The first |pt| words of keystream ct = pt XOR ks Here's cipher 2, it also uses a keystream generator: pt, ct = plaintext, ciphertext ks1 = The first |pt| words of keystream ks2 = The next |pt| words of keystream ct = pt XOR ks1 XOR ks2 Is cipher 2 stronger than cipher 1? Ritter is saying that, with DT as the combiner, cipher 2 is not stronger than cipher 1... no matter how many times one adds part of the keystream, the amount of work needed to attack is equal to the amount of work needed to brute force the key. Of course, this doesn't mean you can't make it stronger... it just means that the only way to make it stronger is to lengthen the key. [snip] > >What I think is that you have this idea for a cipher of your own -- > >not Dynamic Transposition at all, but using transposition -- which > >you would really prefer to discuss but just have not done so. The > >result is that it gets tangled into the discussion of Dynamic > >Transposition in ways that are difficult to separate. Don't do that. > >If you want to discuss your design, put it out in a forthright > >manner. > > Well, I pointed you at my 'Large-Key Brainstorm'. I'm saying that two > transpositions are _roughly_ comparable to four rounds of DES, except > that Dynamic Transposition can be scaled up to much larger block > sizes. > > So I am comparing DT to 'another cipher', basically in respect of my > original argument that the (admittedly, trifling) bandwidth cost of DT > is _unnecessary_ since DT can be equalled in the substitution world. Suppose we are working with 64-bit blocks, and we have a known plaintext. If we use the 4-round DES, with PRNG-generated round keys, how many different round keys can produce that plaintext/ciphertext pair? If it's a balanced 64-bit block, and we are using DT, how many different transpositions can produce that plaintext/ciphertext pair? Assuming that when doing DT, we transposed twice (for the purpose of hiding the keystream), how many different keystream strings could have produced the transposition that created that plaintext/ciphertext pair? Your assertion that DT can be equalled by 4-round DES is handwaving. I'd like to see some justification. Prove that 4-round DES with PRNG-generated round keys does a better job of hiding the keystream than DT. > And this is why I raised the subject of _perfect_ bit-balancing > conversions, so as to allow DT to be multi-ciphered with substitution > systems, so it _can_ make a contribution by means of its different > algebraic structure.
it's like saying "to allow DES rounds to be multi-enciphered with Blowfish rounds" or some such. DT is a cipher. You could use it as a component in a larger system (as DES is a component in 3DES, and various forms of fenced DES), but mixing DT with something else makes it cease to be DT, and may make it cease to have some of the more important mathematical properties DT has. -- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
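To make the counting questions in this exchange concrete: for an N-bit balanced block, a permutation takes pt to ct exactly when it maps 1-positions of ct back to 1-positions of pt, and likewise for the 0s, so (N/2)! * (N/2)! distinct permutations produce the same known-plaintext pair. A small Python sketch checks this exhaustively; the toy block size and the particular block values are arbitrary choices for illustration.

  from itertools import permutations
  from math import factorial

  def count_perms(pt, ct):
      # count permutations perm with ct[i] == pt[perm[i]] for every i
      n = len(pt)
      return sum(1 for perm in permutations(range(n))
                 if all(ct[i] == pt[perm[i]] for i in range(n)))

  pt = [1, 0, 1, 1, 0, 0]               # balanced: three 1s, three 0s
  ct = [0, 1, 1, 0, 0, 1]               # any other balanced block
  print(count_perms(pt, ct))            # prints 36
  print(factorial(3) ** 2)              # also 36 = (N/2)! * (N/2)!

For realistic block sizes the ambiguity (N/2)!^2 is astronomically large, which is the substance of the "how many transpositions can produce that plaintext/ciphertext pair" question above.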
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 28 Jan 2001 19:02:54 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A745ECD.37E3EC8@t-online.de> References: <3A741A49.516D16F2@earthlink.net> Newsgroups: sci.crypt Lines: 24 Benjamin Goldberg wrote: > > John Savard wrote: [snip] > To say "to allow DT to be multi-ciphered with substitution systems," > doesn't seem to make sense... it's like saying "to allow DES rounds to > be multi-enciphered with Blowfish rounds" or some such. DT is a cipher. > You could use it as a component in a larger system (as DES is a > component in 3DES, and various forms of fenced DES), but mixing DT with > something else makes it cease to be DT, and may make it cease to have > some of the more important mathematical properties DT has. It could be due to my foreigner's English understanding, but I understand that John Savard means that his effort is to enable the technique underlying DT (the basic ideas) to be combined with other encryption techniques to achieve better strength. That hereby, through modification/adaptation, the exact 'original' DT is no longer present (in the narrow sense) is clear. But this isn't essential, except from a historical point of view, I suppose. M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 28 Jan 2001 18:22:26 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a746005.99507@news.powersurfr.com> References: <3A741A49.516D16F2@earthlink.net> Newsgroups: sci.crypt Lines: 69 On Sun, 28 Jan 2001 13:10:28 GMT, Benjamin Goldberg <goldbb2@earthlink.net> wrote, in part: >We don't need to produce all possible *codebooks*, just all possible >*codes*. You seem to want: > ForEach k ( k is a key ) > DT_k produces one of (N!/((N/2)!(N/2)!))! substitutions. Actually, ForEach S ( S is any one of (N!/((N/2)!(N/2)!))! substitutions ) ThereExists a key k such that DT_k produces S, just to be completely accurate and produce no confusion. >This is indeed something which DT does NOT do. However, it's not >needed. We just need for any input code to be able to produce any of >the possible output codes, given an appropriate key. You're close, but this isn't exactly what I'm saying. You are right that "it's not needed" that Dynamic Transposition do this. But, equally, it isn't needed that a substitution-based cipher do this either. Since Mr. Ritter appears to me to be saying that a cipher, based on using DES with subkeys supplied by a stream cipher, is inadequate, and inferior to Dynamic Transposition, *precisely because* DES cannot produce all possible codebooks, my point is not that Dynamic Transposition is _also_ inadequate, but rather that the claim that Dynamic Transposition is inherently greatly superior is mistaken. Neither cipher produces all possible codebooks; both ciphers produce any possible output - with many different keys for each output - for each given input. "All possible transpositions" may be nice, but it doesn't buy you the kind of important advantages that appear to be being claimed. However, thank you for reminding me about a post I made a while back, which generated intense controversy. In that post, I noted that the properties of Galois Fields were such that if instead of applying a keystream via XOR, I applied two keystreams, one by XOR, and the other - composed of nonzero values only - by Galois Field multiplication, I could obtain the situation where: For any plaintext block P, not only may I obtain multiple substitutions in which the cipher block is Q, but I can obtain a substitution in which P -> Q and also P' -> Q' for any P' <> P and any Q' <> Q. I was excoriated by several of the genuine cryptographic experts in this newsgroup for daring to suggest this property - which provided resistance against the bit-flipping attack - was significant or useful. Use ciphers to encrypt, and hash functions and signatures to authenticate! Well, I wasn't disputing that that was sound practice, I was merely giving what I thought was the simplest possible example that illustrated the importance of Galois Fields in cryptography. Anyhow, Dynamic Transposition doesn't even have this property. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
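Savard's Galois Field property is easy to verify numerically: ct = (a * pt) XOR b, with a nonzero and the multiplication taken in GF(2^8), is an affine map, and two point constraints determine a and b uniquely. Below is a sketch; the reduction polynomial 0x11B (the AES choice) and the sample byte values are my own arbitrary assumptions, since any irreducible degree-8 polynomial would serve.

  def gmul(x, y, poly=0x11B):
      # carry-less multiply modulo an irreducible polynomial in GF(2^8)
      r = 0
      while y:
          if y & 1:
              r ^= x
          x <<= 1
          if x & 0x100:
              x ^= poly
          y >>= 1
      return r

  def ginv(x):
      # x**254 equals the inverse of x in GF(2^8); square-and-multiply
      r, e = 1, 254
      while e:
          if e & 1:
              r = gmul(r, x)
          x = gmul(x, x)
          e >>= 1
      return r

  def solve_affine(P, Q, P2, Q2):
      # find a != 0 and b with  a*P XOR b == Q  and  a*P2 XOR b == Q2
      a = gmul(Q ^ Q2, ginv(P ^ P2))
      b = Q ^ gmul(a, P)
      return a, b

  a, b = solve_affine(0x3A, 0x7F, 0x15, 0xC0)
  assert gmul(a, 0x3A) ^ b == 0x7F and gmul(a, 0x15) ^ b == 0xC0

Because Q <> Q' forces a to be nonzero, the construction works for any P <> P', which is exactly the P -> Q together with P' -> Q' property claimed.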
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 29 Jan 2001 22:25:09 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a75edb9.7691406@news.io.com> References: <3A741A49.516D16F2@earthlink.net> Newsgroups: sci.crypt Lines: 61 On Sun, 28 Jan 2001 13:10:28 GMT, in <3A741A49.516D16F2@earthlink.net>, in sci.crypt Benjamin Goldberg <goldbb2@earthlink.net> wrote: >John Savard wrote: >[...] >Does that mean that OTP has a weakness? You can't double encipher with >OTP and gain strength. > >IIUC, what Ritter mean by "multiple ciphering with itself alone will not >provide additional strength," was that if encipher the message with DT, >and without using a new key/altering the state, encipher again, the >cipher will not be stronger. Actually, my comments came in the context of DES groupiness: As it stands, a single Dynamic Transposition already produces *any possible permutation*. So if we do Dynamic Transposition again, we *still* have *any possible permutation*. Nothing changed, so there was no strength increase *in the Triple-DES sense* of a keyspace increase. I was not addressing whether or not multiple ciphering with Dynamic Transposition would hide the individual key to each ciphering level, which may well be the case, and, in that sense, might indeed add strength. >This is analogous to the following: > >Here's cipher 1, it uses a keystream generator: > pt, ct = plaintext, ciphertext > ks = The first |pt| words of keystream > ct = pt XOR ks > >Here's cipher 2, it also uses a keystream generator: > pt, ct = plaintext, ciphertext > ks1 = The first |pt| words of keystream > ks2 = The next |pt| words of keystream > ct = pt XOR ks1 XOR ks2 > >Is cipher 2 stronger than cipher 1? Ritter is saying, that with DT as >the combiner, cipher 2 is not stronger than cipher 1... no matter how >many times one adds part of the keystream, the amount of work needed to >attack is equal to the amount of work needed to brute force the key. > >Of course, this doesn't mean you can't make it stronger... it just means >that the only way to make it stronger is to lengthen the key. Yes, but I think this part of strength is really overplayed: Once we have a "large enough" key (large enough to prevent a brute-force attack on the keys), we don't need more of that type of strength, but must instead address other types of attack. Having a larger key is not *wrong* per se, just not helpful with respect to brute-force attacks on keys. There can be various clarity or implementation reasons for having large keys. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
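For readers keeping score, the operation being argued about, permuting the block's bits under a keyed generator, twice, fits in a few lines. In this sketch Python's random module merely stands in for the huge-state cryptographic RNG a real Dynamic Transposition design calls for, and the key value is arbitrary; it is an illustration, not an implementation of Ritter's cipher.

  import random

  def shuffle_bits(block, rng):
      # one keyed Fisher-Yates ("shuffle") pass over the block's bits
      b = list(block)
      for i in range(len(b) - 1, 0, -1):
          j = rng.randrange(i + 1)
          b[i], b[j] = b[j], b[i]
      return b

  def dt_encipher(block, key):
      rng = random.Random(key)
      return shuffle_bits(shuffle_bits(block, rng), rng)  # shuffled twice

  def dt_decipher(block, key):
      rng = random.Random(key)
      n, swaps = len(block), []
      for _ in range(2):                      # regenerate both passes
          for i in range(n - 1, 0, -1):
              swaps.append((i, rng.randrange(i + 1)))
      b = list(block)
      for i, j in reversed(swaps):            # undo the swaps in reverse
          b[i], b[j] = b[j], b[i]
      return b

  pt = [1, 1, 0, 0, 1, 0, 1, 0]
  ct = dt_encipher(pt, key=12345)
  assert sorted(ct) == sorted(pt)             # a permutation, nothing more
  assert dt_decipher(ct, key=12345) == pt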
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 00:37:03 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a7217ec.16188641@news.io.com> References: <3a71f01d.5996441@news.io.com> Newsgroups: sci.crypt Lines: 44 On Fri, 26 Jan 2001 21:48:40 GMT, in <3a71f01d.5996441@news.io.com>, in sci.crypt ritter@io.com (Terry Ritter) wrote: Please, somebody save me from myself! >>[...] >>But Terry Ritter appears to be criticizing DES because some of the >>(2^64)! possible overall substitutions are impossible to reach. > >Some? SOME???!! > >(From my factorials page: > > http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM > >). > >(The number of different values in 64 bits is 2**64.) >2**64 ~= 18446744073709552000 = A >A! ~= some value 1.15397859e+21 bits long. >The DES keyspace is 56 bits, a value 2 bits long. No, the DES keyspace is 56 bits, ironically enough. >Let's see: A value 10**21 bits long compared to a value 2 bits long. That would be: "A value 10**21 bits long compared to a value 56 bits long." But the conclusions remain: >Yeah, I'd say "some" were impossible to reach. > >The DES keyspace is such a tiny fragment of the potential keyspace >that there is no assurance at all that it retains the smooth >theoretical properties of distribution for which we might hope. >[...] --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 21:50:35 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a71f114.30075583@news.powersurfr.com> References: <3a71e0b6.25885697@news.powersurfr.com> Newsgroups: sci.crypt Lines: 9 On Fri, 26 Jan 2001 20:43:15 GMT, jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote, in part: >so it's the 1170! actually, that should be 12870! -- a typo on my part. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 22 Jan 2001 23:51:49 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6cc4d2.14252390@news.powersurfr.com> References: <3a6bdbeb.7409784@news.io.com> Newsgroups: sci.crypt Lines: 39 On Mon, 22 Jan 2001 07:07:48 GMT, ritter@io.com (Terry Ritter) wrote, in part: >In my experience with actually running such a cipher, bit-balancing >adds 25 percent to 33 percent to simple ASCII text. The 1/3 value was >given both in the "Revisited" article, as well as the original >Cryptologia article on my pages. And if the text is compressed first, >there will be even less expansion. If you see even a 33 percent >expansion as a show-stopping amount, a "bandwidth problem," I think >you need to pause for re-calibration. Well, it is possible to map 6 arbitrary bits (64 possibilities) to a string of 8 balanced bits (70 possibilities) for an increase of 33 percent. However, for larger block sizes, one can indeed do better than that. One can map 37 arbitrary bits (137438953472 possibilities) to 40 balanced bits (137846528820 possibilities) for only an 8.11% increase in bandwidth cost. Or one can map 158 arbitrary bits (365375409332725729550921208179070754913983135744 possibilities) to 162 balanced bits (365907784099042279561985786395502921046971688680 possibilities), which further reduces the bandwidth cost to 2.532%. So I suppose you could say that I am quite seriously in need of "recalibration". Note that the fraction of 8-bit sequences that are balanced is just over 1/4, while the fraction of balanced 40-bit sequences is just over 1/8, and the fraction of balanced 162-bit sequences is just over 1/16; the proportion does decline as the number of bits increases, but much more slowly than the number of bits increases. Of course, one expects the efficiency to keep increasing, as overall balance over a larger block is a less restrictive condition than balance in small subblocks. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
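The 37-to-40 and 158-to-162 mappings Savard describes can be realized with the combinatorial number system: read the k input bits as an integer r and unrank r into the r-th n-bit string having exactly n/2 ones. A sketch follows (math.comb needs Python 3.8 or later); it reproduces the overhead figures quoted above, and rank_balanced is the inverse a decoder would use. The function names are mine.

  from math import comb

  def unrank_balanced(r, n):
      # emit the r-th n-bit string with exactly n/2 ones, 0-strings first
      ones, out = n // 2, []
      for pos in range(n):
          rest = n - pos - 1
          zero_first = comb(rest, ones)     # completions if we emit a 0
          if r < zero_first:
              out.append(0)
          else:
              r -= zero_first
              out.append(1)
              ones -= 1
      return out

  def rank_balanced(bits):
      n, r, ones = len(bits), 0, len(bits) // 2
      for pos, b in enumerate(bits):
          if b:
              r += comb(n - pos - 1, ones)
              ones -= 1
      return r

  for k, n in ((6, 8), (37, 40), (158, 162)):
      assert 2 ** k <= comb(n, n // 2)      # the mapping is injective
      print("%3d -> %3d bits: %.3f%% expansion" % (k, n, 100.0 * (n - k) / k))

  bits = unrank_balanced(12345, 40)
  assert sum(bits) == 20 and rank_balanced(bits) == 12345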
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 04:05:22 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6d02c4.1446054@news.io.com> References: <3a6cc4d2.14252390@news.powersurfr.com> Newsgroups: sci.crypt Lines: 75 On Mon, 22 Jan 2001 23:51:49 GMT, in <3a6cc4d2.14252390@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Mon, 22 Jan 2001 07:07:48 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>In my experience with actually running such a cipher, bit-balancing >>adds 25 percent to 33 percent to simple ASCII text. The 1/3 value was >>given both in the "Revisited" article, as well as the original >>Cryptologia article on my pages. And if the text is compressed first, >>there will be even less expansion. If you see even a 33 percent >>expansion as a show-stopping amount, a "bandwidth problem," I think >>you need to pause for re-calibration. > >Well, it is possible to map 6 arbitrary bits (64 possibilities) to a >string of 8 balanced bits (70 possibilities) for an increase of 33 >percent. > >However, for larger block sizes, one can indeed do better than that. > >One can map 37 arbitrary bits (137438953472 possibilities) to 40 >balanced bits (137846528820 possibilities) for only an 8.11% increase >in bandwidth cost. What are you going on about? If we had a flat distribution of all possible values, there would be little if any bit-imbalance in a Dynamic Transposition bit-balanced block. Then, using my scheme, we would have an expansion of a little over one byte per block. One byte in 512, for example. The point is that ASCII does not have a flat distribution, so if we encipher ASCII, we accumulate a bit-imbalance of 2 or 3 bits per data byte. You might figure out a scheme to reduce that, in which case you will have a data-compression scheme. But -- as I mentioned in the "Revisited" article -- it is reasonable to apply data-compression to ASCII before accumulation in the block, in which case we can expect the random-like distribution to usually be almost perfectly balanced as it stands. >Or one can map 158 arbitrary bits >(365375409332725729550921208179070754913983135744 possibilities) to >162 balanced bits (365907784099042279561985786395502921046971688680 >possibilities), which further reduces the bandwidth cost to 2.532%. > >So I suppose you could say that I am quite seriously in need of >"recalibration". Yes, you are. I think you need some sleep. >Note that the fraction of 8-bit sequences that are balanced is just >over 1/4, One can obtain a bit-balanced block even if each byte in the block is not itself bit-balanced. The issue is the distribution of the various values. When that distribution is flat, the block is quite likely to be near balance automatically. The various complex schemes you mention repeatedly are worse, not better. >while the fraction of balanced 40-bit sequences is just over >1/8, and the fraction of balanced 162-bit sequences is just over 1/16; >the proportion does decline as the number of bits increases, but much >more slowly than the number of bits increases. Of course, one expects >the efficiency to keep increasing, as overall balance over a larger >block is a less restrictive condition than balance in small subblocks. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
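Ritter's accumulation scheme is not spelled out in this thread, so the following is only a plausible reading of the "little over one byte per block" remark: track the block's disparity and append whole pad bytes carrying exactly the 1-bits needed to zero it. The framing that would tell a receiver where data ends and padding begins is omitted, and the function and variable names are mine.

  def popcount(data):
      return sum(bin(b).count("1") for b in data)

  def balance_block(data):
      bits = 8 * len(data)
      d = 2 * popcount(data) - bits        # ones minus zeros, always even
      m = max(1, (abs(d) + 7) // 8)        # pad bytes needed: |d| <= 8*m
      pad_ones = 4 * m - d // 2            # ones the pad must contribute
      pad_bits = [1] * pad_ones + [0] * (8 * m - pad_ones)
      pad = bytes(int("".join(map(str, pad_bits[i:i + 8])), 2)
                  for i in range(0, 8 * m, 8))
      return data + pad

  block = balance_block(b"Simple ASCII text leans heavily toward 0-bits")
  assert 2 * popcount(block) == 8 * len(block)    # exactly balanced

On random-like (e.g., pre-compressed) input the disparity d hovers near zero and the single mandatory pad byte usually suffices, matching the one-byte-in-512 figure; on plain ASCII, with its 2 or 3 bits of imbalance per byte, proportionally more pad bytes appear, consistent with the 25 to 33 percent experience reported above.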
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 23 Jan 2001 06:59:46 GMT From: "Matt Timmermans" <matt@timmermans.nospam-remove.org> Message-ID: <CZ9b6.866$M63.78453@news20.bellglobal.com> References: <3a6cc4d2.14252390@news.powersurfr.com> Newsgroups: sci.crypt Lines: 24 "John Savard" <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote in message news:3a6cc4d2.14252390@news.powersurfr.com... > > However, for larger block sizes, one can indeed do better than that. > > One can map 37 arbitrary bits (137438953472 possibilities) to 40 > balanced bits (137846528820 possibilities) for only an 8.11% increase > in bandwidth cost. > > Or one can map 158 arbitrary bits > (365375409332725729550921208179070754913983135744 possibilities) to > 162 balanced bits (365907784099042279561985786395502921046971688680 > possibilities), which further reduces the bandwidth cost to 2.532%. > > So I suppose you could say that I am quite seriously in need of > "recalibration". Alternatively, you could XOR the plaintext with a pseudo-random stream, and then use Terry's simple balancing operation. The probability of wasting a significant number of bits gets quite small as the blocks get long.
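Timmermans' probability claim can be quantified: a whitened block looks uniform, its disparity is on the order of sqrt(n), and so the padding needed to absorb it shrinks as a fraction of the block. A quick sketch, where the 99.9 percent coverage target (num/den) is an arbitrary choice of mine:

  from math import comb

  def pad_fraction(n, num=999, den=1000):
      # smallest t with Pr(|ones - n/2| <= t) >= num/den, then pad bytes
      half, total, t = n // 2, comb(n, n // 2), 0
      while total * den < num * 2 ** n:
          t += 1
          total += comb(n, half - t) + comb(n, half + t)
      m = (2 * t + 7) // 8        # pad bytes absorbing a disparity of 2t
      return m, 800.0 * m / n     # bytes, and percent of the block

  for n in (512, 2048, 8192):
      m, pct = pad_fraction(n)
      print("%4d-bit block: %2d pad bytes (%.1f%% overhead)" % (n, m, pct))

The printed overhead falls roughly as 1/sqrt(n), which is the sense in which the waste "gets quite small as the blocks get long".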
Subject: Re: Dynamic Transposition Revisited (long) Date: 26 Jan 2001 13:18:37 GMT From: rpw3@rigden.engr.sgi.com (Rob Warnock) Message-ID: <94rtfd$4b7p4$1@fido.engr.sgi.com> References: <3a6cc4d2.14252390@news.powersurfr.com> Newsgroups: sci.crypt Lines: 97 John Savard <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote: +--------------- | ritter@io.com (Terry Ritter) wrote, | >In my experience with actually running such a cipher, bit-balancing | >adds 25 percent to 33 percent to simple ASCII text. | | Well, it is possible to map 6 arbitrary bits (64 possibilities) to a | string of 8 balanced bits (70 possibilities) for an increase of 33 | percent. | | However, for larger block sizes, one can indeed do better than that. | | One can map 37 arbitrary bits (137438953472 possibilities) to 40 | balanced bits (137846528820 possibilities) for only an 8.11% increase | in bandwidth cost. | | Or one can map 158 arbitrary bits | (365375409332725729550921208179070754913983135744 possibilities) to | 162 balanced bits (365907784099042279561985786395502921046971688680 | possibilities), which further reduces the bandwidth cost to 2.532%. | | So I suppose you could say that I am quite seriously in need of | "recalibration". +--------------- I'd say so! ;-} ;-} You guys need to talk to some hardware types more often. This "bit balancing" stuff is a long-ago-solved problem in the data-transmission domain, where it's known as "bounded-disparity encoding" [where the "disparity" is the excess of 1's over 0's in a bit stream (or sometimes the absolute value of the difference)]. You can make the overhead for perfect bit balancing (a.k.a. "zero disparity") as low as you like for arbitrary data, as long as you allow the blocks to be large enough. One version that seems useful/applicable for Ritter's DT is the scheme used in the "21b/24b" code used in the HIPPI-Serial standard. One bit of each codeword says whether the remaining bits of that codeword are to be inverted or not before being sent (and before being used after being received). The encoder counts the running disparity of all codewords sent so far, looks at the pop. count of the current word to be sent, and either inverts or leaves alone the current word, depending on which way will lower the running disparity. In the case of HIPPI-Serial, this guarantees that the running disparity never exceeds +/-35 even momentarily, and never exceeds +/-23 at codeword boundaries. Padding the message with just one additional codeword of chosen content allows one to force the disparity for the whole message to 0. Clearly, by choosing codewords and messages (blocks) large enough, one can get the overhead for zero-disparity encoding "as low as you like". However, for ease of computing in software, I wouldn't copy the HIPPI-Serial scheme exactly. Instead, I'd probably do something like this: Let a "symbol" (codeword) be something convenient like a multiple of the machine's wordsize, say, "k" bits. Then let the plaintext block size be "k-2" symbols, that is, "k*(k-2)" bits. Then let the last symbol of the block be the "polarity" bits for every symbol in the block, and the next-to-last symbol (and its corresponding polarity bit) is adjusted to make the total disparity for the block be zero -- that is, exactly "bit balanced". For k==32, the overhead is ~6%, for k==64 it's ~3%, etc.. Or here's a better idea: Let the block consist of some large number of bits, plus a small pad field, plus a small count field big enough to cover the block. 
The block is logically split in two pieces at the bit named by the count. The bits to the left are inverted; the bits to the right are left alone. The pad bits are adjusted to make the resulting disparity exactly zero. [You need the pad bits so the algorithm will converge, otherwise "count==x" might be too small while "count==x+1" was too large. Using a Gray-code counter may help here, too.] Or, the counter could count bytes. E.g., consider a 256-byte block, with one byte of "polarity count", two bytes [I think you need two] of adjustable pad, and 253 bytes of payload. That's about 1% overhead. A polarity count of 0 means no payload bytes get inverted; 1-253 means that that many payload bytes are inverted (one's-complement). One byte of pad gets the 1's-compl. of the count; the other byte of pad carries enough ones to balance the residual disparity between the two pieces of the payload. [I *think* that always works, but there may be a fencepost error in there -- I haven't done a formal proof. But even if so, using a Gray-code counter may let you pick up enough bits of pad to fix it.] In any case, using "partitioned polarity inversion" (to coin a phrase), it's possible to achieve zero disparity (exact bit balancing) in an "n"-bit block with about lg(n) bits of overhead, and lg(n)/n --> 0 asymptotically. And it will be *fast* -- much, much faster than the bit permutations themselves. -Rob p.s. I can supply demo code, if the above description isn't obvious enough. ----- Rob Warnock, 31-2-510 rpw3@sgi.com SGI Network Engineering http://reality.sgi.com/rpw3/ 1600 Amphitheatre Pkwy. Phone: 650-933-1673 Mountain View, CA 94043 PP-ASEL-IA
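Warnock's partitioned polarity inversion can be demonstrated with a simplification that is, if I recall correctly, essentially Knuth's classic balanced-code construction: invert the first c payload bits, where c is chosen so the payload becomes exactly balanced. Such a c always exists, because moving the split point one bit changes the ones-count by exactly one, and inverting everything turns T ones into n-T. Two liberties here are my own: the count is a bit index rather than a byte count, and the count field is kept balanced by storing it next to its bitwise complement, which sidesteps the pad adjustment described above.

  def balance(payload):                  # payload: list of n bits, n even
      n, ones = len(payload), sum(payload)
      c = 0
      while ones != n // 2:              # slide the inversion point right
          ones += 1 if payload[c] == 0 else -1
          c += 1
      body = [1 - b for b in payload[:c]] + payload[c:]
      w = max(1, n.bit_length())         # field wide enough to hold c <= n
      count = [(c >> i) & 1 for i in range(w)]
      return body + count + [1 - b for b in count]   # field + complement

  def unbalance(block, n):
      w = max(1, n.bit_length())
      body, count = block[:n], block[n:n + w]
      c = sum(bit << i for i, bit in enumerate(count))
      return [1 - b for b in body[:c]] + body[c:]

  data = [1, 1, 1, 1, 0, 1, 1, 0]        # heavily 1-biased payload
  blk = balance(data)
  assert 2 * sum(blk) == len(blk)        # the whole block is balanced
  assert unbalance(blk, len(data)) == data

The overhead is the count-plus-complement field, about 2 lg(n) bits, which matches the "about lg(n) bits" figure up to the constant.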
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 15:59:41 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a719e43.8871438@news.powersurfr.com> References: <94rtfd$4b7p4$1@fido.engr.sgi.com> Newsgroups: sci.crypt Lines: 37 On 26 Jan 2001 13:18:37 GMT, rpw3@rigden.engr.sgi.com (Rob Warnock) wrote, in part: >One version that seems useful/applicable for Ritter's DT is the scheme >used in the "21b/24b" code used in the HIPPI-Serial standard. One bit >of each codeword says whether the remaining bits of that codeword are to >be inverted or not before being sent (and before being used after being >received). The encoder counts the running disparity of all codewords >sent so far, looks at the pop. count of the current word to be sent, >and either inverts or leaves alone the current word, depending on which >way will lower the running disparity. In the case of HIPPI-Serial, this >guarantees that the running disparity never exceeds +/-35 even momentarily, >and never exceeds +/-23 at codeword boundaries. Padding the message with >just one additional codeword of chosen content allows one to force the >disparity for the whole message to 0. >Clearly, by choosing codewords and messages (blocks) large enough, one >can get the overhead for zero-disparity encoding "as low as you like". Yes, that's quite correct (in percentage terms). >p.s. I can supply demo code, if the above desciption isn't obvious enough. Oh, no; it's quite clear. Incidentally, this coding has some interesting properties. Because the bit that indicates if a block is inverted or not has to be counted in the bit-balance of the output, a) An input string that is heavy in 1s produces smaller variations in bit-balance than one that is similarly heavy in 0s; and b) If one has a completely balanced input string, it is necessary (or at least natural) to invert alternating blocks to maintain balance. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 09:07:03 -0800 From: "Paul Pires" <diode@got.net> Message-ID: <3a71af3b_5@news.newsfeeds.com> References: <3a719e43.8871438@news.powersurfr.com> Newsgroups: sci.crypt Lines: 44 John Savard <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote in message news:3a719e43.8871438@news.powersurfr.com... <snip> > > Oh, no; it's quite clear. > > Incidentally, this coding has some interesting properties. Because the > bit that indicates if a block is inverted or not has to be counted in > the bit-balance of the output, > > a) An input string that is heavy in 1s produces smaller variations in > bit-balance than one that is similarly heavy in 0s; and > > b) If one has a completely balanced input string, it is necessary (or > at least natural) to invert alternating blocks to maintain balance. I'm confused. Wouldn't this produce alternately biased blocks in an alternating fashion? Seems like if it was of use to crack it, the adversary would just bin out the blocks and treat the two halves differently. This alternating is a known function, right? Even if it wasn't, it's a 1-0 guess. I also am not clear on the goal. Yes there needs to be bit balancing so that a bias in the input is not recognizable in the output, but by doing this hiding, don't you sacrifice another valuable property? Seems like the output would fail a tailored randomness test. You are going to get data that has a perfect distribution of zeros and ones within a block and something else if the block reference is displaced. Right? Seems like what you'd want would be a method where the transposition works on a pile that is "Probably" balanced but where the deviation from perfect is not correlated to the input or output. I could be screwy here. Paul > > John Savard > http://home.ecn.ab.ca/~jsavard/crypto.htm -----= Posted via Newsfeeds.Com, Uncensored Usenet News =----- http://www.newsfeeds.com - The #1 Newsgroup Service in the World! -----== Over 80,000 Newsgroups - 16 Different Servers! =-----
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 20:36:59 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A71D1DB.4EDDE55@t-online.de> References: <3a71af3b_5@news.newsfeeds.com> Newsgroups: sci.crypt Lines: 20 Paul Pires wrote: > > John Savard <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote: [snip] > Seems like what you'd want would be a method where the transposition > works on a pile that is "Probably" balanced but where the deviation from > perfect is not correlated to the input or output. I could be screwy here. If one is not for some reason restricted to doing only bit permutations but can also perform other operations, in particular substitutions, I am not yet very clear whether bit balancing is the economically optimal operation to do (either alone or in combination with other operations) for achieving a certain encryption strength in practice. I mean this issue should be clarified in connection with devising good methods for obtaining bit balancing. M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 20:01:27 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a71d6b7.23326201@news.powersurfr.com> References: <3a71af3b_5@news.newsfeeds.com> Newsgroups: sci.crypt Lines: 30 On Fri, 26 Jan 2001 09:07:03 -0800, "Paul Pires" <diode@got.net> wrote, in part: >I'm confused. Wouldn't this produce alternately biased blocks in an >alternating fashion? Yes, and they would therefore add up to a balanced total. >I also am not clear on the goal. Yes there needs to be bit balancing so that >a bias in the input is not recognizable in the output, but by doing this >hiding, >don't you sacrifice another valuable property? Seems like the output would >fail a tailored randomness test. You are going to get data that has a >perfect >distribution of zeros and ones within a block and something else if the >block >reference is displaced. Right? The property of bit-balancing is _essential_ if Dynamic Transposition is to work; otherwise, some information about the plaintext *will not be encrypted at all*. Thus, the bits are not balanced in order to remove bias from the input; rather, apart from the fact that a transposition cannot change the number of 1 and 0 bits, the cipher is expected to be secure enough to conceal bias in the input, just as ciphers like DES are expected to do. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
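What "will not be encrypted at all" means here is concrete: a transposition only reorders bits, so the 1s-count of the ciphertext equals that of the plaintext under every possible key. A quick check, with an arbitrary unbalanced block:

  import random

  pt = [1, 0, 0, 0, 0, 0, 1, 0]       # unbalanced: weight 2
  perm = list(range(len(pt)))
  random.shuffle(perm)                # any permutation whatsoever
  ct = [pt[i] for i in perm]
  assert sum(ct) == sum(pt)           # the weight leaks unchanged

Bit-balancing makes every block carry the same weight, so this channel conveys nothing.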
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 22:21:52 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a71f782.7890168@news.io.com> References: <3a71af3b_5@news.newsfeeds.com> Newsgroups: sci.crypt Lines: 40 On Fri, 26 Jan 2001 09:07:03 -0800, in <3a71af3b_5@news.newsfeeds.com>, in sci.crypt "Paul Pires" <diode@got.net> wrote: >[...] >I also am not clear on the goal. Yes there needs to be bit balancing so that >a bias in the input is not recognizable in the output, but by doing this >hiding, >don't you sacrifice another valuable property? Seems like the output would >fail a tailored randomness test. You are going to get data that has a >perfect >distribution of zeros and ones within a block and something else if the >block >reference is displaced. Right? It is not necessary for strength or security that a cipher produce random-like output. Most ciphers do so, but such is not necessary. Here I think the possible output codes do appear with equal probability. This is a characteristic which represents the inefficiency of balanced coding. But since that is a plaintext coding, and is known by both designer and opponent, it does not seem particularly worrisome. >Seems like what you'd want would be a method where the transposition >works on a pile that is "Probably" balanced but where the deviation from >perfect is not correlated to the input or output. I could be screwy here. I have mentioned the possibility of a statistical or "almost" balance, which could be very effective. But it seems like that analysis would be much more complicated. We would have to talk about the distribution of balance, and how strength changes accordingly, which seems beyond what can now be done. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 15:37:09 -0800 From: "Paul Pires" <diode@got.net> Message-ID: <3a720aa6_2@news.newsfeeds.com> References: <3a71f782.7890168@news.io.com> Newsgroups: sci.crypt Lines: 61 Terry Ritter <ritter@io.com> wrote in message news:3a71f782.7890168@news.io.com... > > On Fri, 26 Jan 2001 09:07:03 -0800, in > <3a71af3b_5@news.newsfeeds.com>, in sci.crypt "Paul Pires" > <diode@got.net> wrote: > > >[...] > >I also am not clear on the goal. Yes there needs to be bit balancing so that > >a bias in the input is not recognizable in the output, but by doing this > >hiding, > >don't you sacrifice another valuable property? Seems like the output would > >fail a tailored randomness test. You are going to get data that has a > >perfect > >distribution of zeros and ones within a block and something else if the > >block > >reference is displaced. Right? > > It is not necessary for strength or security that a cipher produce > random-like output. Most ciphers do so, but such is not necessary. > Here I think the possible output codes do appear with equal > probability. > > This is a characteristic which represents the inefficiency of balanced > coding. But since that is a plaintext coding, and is known by both > designer and opponent, it does not seem particularly worrisome. Oh, I wasn't saying I found a breach, just that it seems to have an identifiable appearance & therefore offset from a theoretical ideal. More in the way of making a note than pointing at a practical fault. My main comment was in response to suggestions made by John Savard on a way to fix it. First you need to find out if it's broke and if the fix is better than the break. As I said before, I think there are ways to avoid the bit balancing altogether. The interesting part of this is your clear statement of the security premise, and that seems to be "transposition by random direction". I took the bit balancing part to be "One way you could do it" as a way to enable the other discussion. > >Seems like what you'd want would be a method where the transposition > >works on a pile that is "Probably" balanced but where the deviation from > >perfect is not correlated to the input or output. I could be screwy here. > > I have mentioned the possibility of a statistical or "almost" balance, > which could be very effective. But it seems like that analysis would > be much more complicated. We would have to talk about the > distribution of balance, and how strength changes accordingly, which > seems beyond what can now be done. I don't know about that. I think it can be done. But that opinion and $2.79 might buy a cup of coffee. Paul > > --- > Terry Ritter ritter@io.com http://www.io.com/~ritter/ > Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM > -----= Posted via Newsfeeds.Com, Uncensored Usenet News =----- http://www.newsfeeds.com - The #1 Newsgroup Service in the World! -----== Over 80,000 Newsgroups - 16 Different Servers! =-----
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 01:31:31 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a722273.18884599@news.io.com> References: <3a720aa6_2@news.newsfeeds.com> Newsgroups: sci.crypt Lines: 79 On Fri, 26 Jan 2001 15:37:09 -0800, in <3a720aa6_2@news.newsfeeds.com>, in sci.crypt "Paul Pires" <diode@got.net> wrote: >Terry Ritter <ritter@io.com> wrote in message news:3a71f782.7890168@news.io.com... >> >> On Fri, 26 Jan 2001 09:07:03 -0800, in >> <3a71af3b_5@news.newsfeeds.com>, in sci.crypt "Paul Pires" >> <diode@got.net> wrote: >> >> >[...] >> >I also am not clear on the goal. Yes there needs to be bit balancing so that >> >a bias in the input is not recognizable in the output, but by doing this >> >hiding, >> >don't you sacrifice another valuable property? Seems like the output would >> >fail a tailored randomness test. You are going to get data that has a >> >perfect >> >distribution of zeros and ones within a block and something else if the >> >block >> >reference is displaced. Right? >> >> It is not necessary for strength or security that a cipher produce >> random-like output. Most ciphers do so, but such is not necessary. >> Here I think the possible output codes do appear with equal >> probability. >> >> This is a characteristic which represents the inefficiency of balanced >> coding. But since that is a plaintext coding, and is known by both >> designer and opponent, it does not seem particularly worrisome. > >Oh, I wasn't saying I found a breach, just that it seems to have an identifiable >appearance & therefore offset from a theoretical ideal. More in the way of >making a note than pointing at a practical fault. And I was trying to point out that having random ciphertext is not a theoretical strength ideal per se for this cipher. It may support identifying this class of cipher, which is too bad, but it is simply not -- by itself -- a strength issue. Maybe it could become one, but I don't see it. >My main comment was in response to suggestions made by John Savard on a way >to fix it. First you need to find out if it's broke and if the fix is better than the >break. As I said before, I think there are ways to avoid the bit balancing >altogether. The interesting part of this is your clear statement of the security >premise, and that seems to be "transposition by random direction". I took the bit >balancing part to be "One way you could do it" as a way to enable the >other discussion. Well, if the plaintext is not bit-balanced, it is going to leak information. If we are satisfied that we have almost bit-balanced data, maybe we could accept that. But the fundamental security of the system would seem to be that any possible plaintext block may have produced any particular ciphertext. To even try to approach this goal, I think, needs bit-balancing. In practice, of course, we could almost certainly get by with just having a few "different" bits in each block. >> >Seems like what you'd want would be a method where the transposition >> >works on a pile that is "Probably" balanced but where the deviation from >> >perfect is not correlated to the input or output. I could be screwy here. >> >> I have mentioned the possibility of a statistical or "almost" balance, >> which could be very effective. But it seems like that analysis would >> be much more complicated. We would have to talk about the >> distribution of balance, and how strength changes accordingly, which >> seems beyond what can now be done. > >I don't know about that. 
I think it can be done. But that opinion and >$2.79 might buy a cup of coffee. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 12:24:15 -0800 From: "Paul Pires" <diodude@got.net> Message-ID: <3a732ef8_1@news.newsfeeds.com> References: <3a722273.18884599@news.io.com> Newsgroups: sci.crypt Lines: 82 Terry Ritter <ritter@io.com> wrote in message news:3a722273.18884599@news.io.com... > > On Fri, 26 Jan 2001 15:37:09 -0800, in > <3a720aa6_2@news.newsfeeds.com>, in sci.crypt "Paul Pires" > <diode@got.net> wrote: > > >Terry Ritter <ritter@io.com> wrote in message news:3a71f782.7890168@news.io.com... > >> > >> On Fri, 26 Jan 2001 09:07:03 -0800, in > >> <3a71af3b_5@news.newsfeeds.com>, in sci.crypt "Paul Pires" > >> <diode@got.net> wrote: > >> > >> >[...] > >> >I also am not clear on the goal. Yes there needs to be bit balancing so that > >> >a bias in the input is not recognizable in the output, but by doing this > >> >hiding, > >> >don't you sacrifice another valuable property? Seems like the output would > >> >fail a tailored randomness test. You are going to get data that has a > >> >perfect > >> >distribution of zeros and ones within a block and something else if the > >> >block > >> >reference is displaced. Right? > >> > >> It is not necessary for strength or security that a cipher produce > >> random-like output. Most ciphers do so, but such is not necessary. > >> Here I think the possible output codes do appear with equal > >> probability. > >> > >> This is a characteristic which represents the inefficiency of balanced > >> coding. But since that is a plaintext coding, and is known by both > >> designer and opponent, it does not seem particularly worrisome. > > > >Oh, I wasn't saying I found a breach, just that it seems to have an identifiable > >appearance & therefore offset from a theoretical ideal. More in the way of > >making a note than pointing at a practical fault. > > And I was trying to point out that having random ciphertext is not a > theoretical strength ideal per se for this cipher. It may support > identifying this class of cipher, which is too bad, but it is simply > not -- by itself -- a strength issue. Maybe it could become one, but > I don't see it. I really don't have a problem with this concept. It may not be a strength issue. As I stated it, it is unrelated to your reasons why you feel the method might be secure. I would have to tie it in somehow. It is just something to note and then move on for now. <snip> > >> >Seems like what you'd want would be a method where the transposition > >> >works on a pile that is "Probably" balanced but where the deviation from > >> >perfect is not correlated to the input or output. I could be screwy here. > >> > >> I have mentioned the possibility of a statistical or "almost" balance, > >> which could be very effective. But it seems like that analysis would > >> be much more complicated. We would have to talk about the > >> distribution of balance, and how strength changes accordingly, which > >> seems beyond what can now be done. Or talk about how the balance is not related to strength if the method were slightly different. I have something to offer here but this is clearly off topic. It should be noted that if "b" is an inevitable consequence of doing "a", then you can either work on "b" until it is an acceptable burden or look at doing "a" differently so that "b" doesn't occur or it becomes an advantage rather than a burden. I just wanted to put in my bid for this discussion once the traffic on your point of interest wanes a bit. 
Thanks again for the interesting material, Paul > >I don't know about that. I think it can be done. But that opinion and > >$2.79 might buy a cup of coffee. > > --- > Terry Ritter ritter@io.com http://www.io.com/~ritter/ > Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM > -----= Posted via Newsfeeds.Com, Uncensored Usenet News =----- http://www.newsfeeds.com - The #1 Newsgroup Service in the World! -----== Over 80,000 Newsgroups - 16 Different Servers! =-----
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 27 Jan 2001 10:41:53 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A7297E1.30F84557@t-online.de> References: <3a720aa6_2@news.newsfeeds.com> Newsgroups: sci.crypt Lines: 24 Paul Pires wrote: > > Terry Ritter <ritter@io.com> wrote: [snip] > > I have mentioned the possibility of a statistical or "almost" balance, > > which could be very effective. But it seems like that analysis would > > be much more complicated. We would have to talk about the > > distribution of balance, and how strength changes accordingly, which > > seems beyond what can now be done. > > I don't know about that. I think it can be done. But that opinion and > $2.79 might buy a cup of coffee. I would like to suggest that one at first limit all discussions of strength of DT to using only (the subset of) balanced blocks from the outset. (I used that to simplify argumentation of a point of mine in another follow-up.) That would avoid unnecessarily having the methodology of bit balancing (a more technical issue) clouding up the fundamental issue. M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: 30 Jan 2001 12:24:29 GMT From: rpw3@rigden.engr.sgi.com (Rob Warnock) Message-ID: <956bpt$51ur$1@fido.engr.sgi.com> References: <3a719e43.8871438@news.powersurfr.com> Newsgroups: sci.crypt Lines: 57 John Savard <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote: +--------------- | rpw3@rigden.engr.sgi.com (Rob Warnock) wrote: | >One version that seems useful/applicable for Ritter's DT is the scheme | >used in the "21b/24b" code used in the HIPPI-Serial standard. One bit | >of each codeword says whether the remaining bits of that codeword are to | >be inverted or not before being sent... +--------------- But later in that same message I proposed a different method, "partitioned polarity inversion" (PPI), which is better IMHO for fixed-sized blocks which are to be *completely* balanced. +--------------- | Incidentally, this coding has some interesting properties. Because the | bit that indicates if a block is inverted or not has to be counted in | the bit-balance of the output, +--------------- True, and in the PPI versions, the count field has to be included in the balance, too. That's why I suggested that PPI have both a count and a pad field, so that the pad field can be used to balance the count field. +--------------- | a) An input string that is heavy in 1s produces smaller variations in | bit-balance than one that is similarly heavy in 0s; and +--------------- In the PPI variant, 1's-heavy, 0's-heavy and 1010...-style inputs will all have inversion points (counts) near the middle. It's inputs that have isolated bunches of 1's or 0's at either side of the input word that cause the most-skewed inversion points. +--------------- | b) If one has a completely balanced input string, it is necessary (or | at least natural) to invert alternating blocks to maintain balance. +--------------- And the HIPPI-Serial standard requires that, actually. But it's not to "maintain balance" -- since if one has a completely balanced input string, well, one has a completely balanced input string! -- but to provide a guaranteed bit transition at frame edges for preserving clocking and frame synchronization. But in the case of DT, such an artificial alternation is unnecessary and should be avoided. It would cause difficulty in undoing the PPI pre-coding (during decryption) if the count field were to be inverted. -Rob ----- Rob Warnock, 31-2-510 rpw3@sgi.com SGI Network Engineering http://reality.sgi.com/rpw3/ 1600 Amphitheatre Pkwy. Phone: 650-933-1673 Mountain View, CA 94043 PP-ASEL-IA
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 30 Jan 2001 15:29:30 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a76dd7a.268498@news.powersurfr.com> References: <956bpt$51ur$1@fido.engr.sgi.com> Newsgroups: sci.crypt Lines: 18 On 30 Jan 2001 12:24:29 GMT, rpw3@rigden.engr.sgi.com (Rob Warnock) wrote, in part: >And the HIPPI-Serial standard requires that, actually. But it's not >to "maintain balance" -- since if one has a completely balanced input >string, well, one has a completely balanced input string! -- but to >provide a guaranteed bit transition at frame edges for preserving >clocking and frame synchronization. Yes, but one does not have a completely balanced *output* string if it consists of every block of the input string *preceded by a 0*. And, since the bit indicating inversion is the first bit of each frame, I don't see how inverting every second block provides a guaranteed bit transition anyways. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 20:15:55 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A71CCEB.19EA2143@t-online.de> References: <94rtfd$4b7p4$1@fido.engr.sgi.com> Newsgroups: sci.crypt Lines: 22 Rob Warnock wrote: > [snip] > One version that seems useful/applicable for Ritter's DT is the scheme > used in the "21b/24b" code used in the HIPPI-Serial standard. One bit > of each codeword says whether the remaining bits of that codeword are to > be inverted or not before being sent (and before being used after being > received). The encoder counts the running disparity of all codewords > sent so far, looks at the pop. count of the current word to be sent, > and either inverts or leaves alone the current word, depending on which > way will lower the running disparity. In the case of HIPPI-Serial, this > guarantees that the running disparity never exceeds +/-35 even momentarily, > and never exceeds +/-23 at codeword boundaries. Padding the message with > just one additional codeword of chosen content allows one to force the > disparity for the whole message to 0. Would you kindly explain what HIPPI is and why one needs balancing there? Thanks. M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 20:03:09 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a71d7b2.23577146@news.powersurfr.com> References: <3A71CCEB.19EA2143@t-online.de> Newsgroups: sci.crypt Lines: 13 On Fri, 26 Jan 2001 20:15:55 +0100, Mok-Kong Shen <mok-kong.shen@t-online.de> wrote, in part: >Would you kindly explain what HIPPI is and why one needs >balancing there? Thanks. Many forms of modulating data for transmission, or for recording on magnetic media, remove or minimize the DC component to allow the circuitry required to be simpler, as DC components of electrical signals present difficulties. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: 30 Jan 2001 12:34:44 GMT From: rpw3@rigden.engr.sgi.com (Rob Warnock) Message-ID: <956cd4$554k$1@fido.engr.sgi.com> References: <3a71d7b2.23577146@news.powersurfr.com> Newsgroups: sci.crypt Lines: 29 John Savard <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote: +--------------- | <mok-kong.shen@t-online.de> wrote, in part: | >Would you kindly explain what HIPPI is and why one needs | >balancing there? Thanks. | | Many forms of modulating data for transmission, or for recording on | magnetic media, remove or minimize the DC component to allow the | circuitry required to be simpler, as DC components of electrical | signals present difficulties. +--------------- Exactly so. But removing the D.C. component creates its own problems, such as the so-called "baseline drift" that occurs when the D.C. component is removed with a simple series capacitor. Balancing the input avoids this baseline drift, and keeps the average signal voltage close to the mid-point between the peak "1" & "0" values, which makes the eventual conversion back to D.C. levels less error-prone. But we're drifting off-topic, I fear... -Rob ----- Rob Warnock, 31-2-510 rpw3@sgi.com SGI Network Engineering http://reality.sgi.com/rpw3/ 1600 Amphitheatre Pkwy. Phone: 650-933-1673 Mountain View, CA 94043 PP-ASEL-IA
Subject: Re: Dynamic Transposition Revisited (long) Date: Tue, 30 Jan 2001 15:32:25 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a76ddf1.387494@news.powersurfr.com> References: <956cd4$554k$1@fido.engr.sgi.com> Newsgroups: sci.crypt Lines: 14 On 30 Jan 2001 12:34:44 GMT, rpw3@rigden.engr.sgi.com (Rob Warnock) wrote, in part: >Exactly so. But removing the D.C. component creates its own problems, >such as the so-called "baseline drift" that occurs when the D.C. component >is removed with a simple series capacitor. The point of "removing the D.C. component" _by means of modulation_ is precisely to avoid baseline drift, since DC components tend to get lost in ordinary types of circuitry. If I had meant throwing away important information from the signal... John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: 3 Feb 2001 10:28:07 GMT From: rpw3@rigden.engr.sgi.com (Rob Warnock) Message-ID: <95gmfn$1agp6$1@fido.engr.sgi.com> References: <3a76ddf1.387494@news.powersurfr.com> Newsgroups: sci.crypt Lines: 91 John Savard <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote: +--------------- | rpw3@rigden.engr.sgi.com (Rob Warnock) wrote, in part: | >Exactly so. But removing the D.C. component creates its own problems, | >such as the so-called "baseline drift" that occurs when the D.C. component | >is removed with a simple series capacitor. | | The point of "removing the D.C. component" _by means of modulation_ is | precisely to avoid baseline drift, since DC components tend to get | lost in ordinary types of circuitry. If I had meant throwing away | important information from the signal... +--------------- But fast fiber-optic systems *don't* "remove the D.C. component by means of modulation" -- they remove it with a simple series blocking capacitor in order to get an A.C. signal that can be more easily amplified without worrying about where the precise D.C. threshold is. This in turn requires using some form of more-or-less balanced encoding, otherwise you get "baseline wander" from long runs of 1's or 0's. [My apologies, by the way. In the quoted text above I said "baseline drift" when I really meant "baseline wander".] Or said another way: Optical transmitters send baseband D.C., but by the time the signal gets to the receiver it has degraded in amplitude and also picked up inter-symbol interference due to dispersion (modal and otherwise). As a result, the correct receiver slicer threshold for best error rates is not known, and worse, it varies. So the receiver blocks the D.C. component from the PIN diode, amplifies just the A.C. component (and also applies some "equalization" to compensate for inter-symbol interference, which is a lot easier to do with an A.C. signal), and then slices at zero voltage, which will be the mean of the signal averaged over the period of the lowest frequency (roughly) of the A.C. amplifier's passband -- *not* exactly the correct place if you've had a recent overly-long run of 1's or 0's. The better balanced the encoding, the easier you can make a high-gain/low-noise amp, the less baseline wander you get, and the better you can quantize (slice) the signal for minimal errors. But more-complex encoding has costs, too. IBM's 8b/10b code won the "coding battles" in the Fibre Channel Standard committee. The 8b/10b code provides fairly-small short-term disparity, but *unlimited* long-term cumulative disparity, which means that you can get a bit of baseline wander with worst-case codes, so the slicer has to be spec'd to handle it. On the plus side, the 8b/10b code has a 1-0 or 0-1 transition every few bits at worst, so clock-recovery is quite easy, and the receivers don't have to be very wideband. On the minus side again, there's that 25% coding overhead!! Conversely, the H-P 21b/24b code won in the HIPPI-serial standard. That code is *perfectly* balanced over the long term, but the short-term disparity can be as bad as +/-35 bits. This requires a much wider-band receiver (specifically, much better low-frequency response), and since you can go without a 1-0 or 0-1 transition for up to 24 bits, the receiver PLL has to be *very* good. But you only pay a 14% encoding overhead. Is it worth it? "It depends..." [Note: History says that simplicity won, in this case. 
Gigabit Ethernet and Infiniband both chose Fibre Channel's 8b/10b encoding, while HIPPI-Serial's 21b/24b code was re-used by no other application.] But again, all this is moot, since I never proposed using pure 21b/24b for Ritter's "Dynamic Transposition". Even my first mention of it used it only as a part of a larger block code (to allow *exact* balancing over the whole block), and later in that first post I abandoned that for a different scheme altogether ("partitioned polarity inversion"). -Rob p.s. In case any other hardware nuts have made it this far, you might find it interesting that the phone companies' "SONET" transmission system uses *no* "encoding" per se, thus has no encoding overhead, but depends on the signal being "scrambled" by a small frame-synchronous pseudo-random stream (an LFSR using the polynomial x^7 + x^6 + 1). When a SONET stream contains a bunch of multiplexed sub-channels (such as individual phone calls), this is quite adequate to roughly balance the 1/0 disparity, and also provide plenty of transitions for receive clocking. But for single-service traffic like IP-over-SONET, that ain't good enough -- user data can accidentally (or intentionally!) exactly counter the scrambler polynomial and result in loss of clock and/or sync. See <URL:http://www.ietf.org/proceedings/98aug/I-D/draft-ietf-pppext-pppsonet-scrambler-00.txt> for more discussion of this, which led to RFC 2615 "PPP over SONET/SDH" <URL:http://www.ietf.org/rfc/rfc2615.txt> replacing RFC 1619. ----- Rob Warnock, 31-2-510 rpw3@sgi.com SGI Network Engineering http://reality.sgi.com/rpw3/ 1600 Amphitheatre Pkwy. Phone: 650-933-1673 Mountain View, CA 94043 PP-ASEL-IA
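The scrambler in Warnock's p.s. is small enough to sketch. The taps follow the stated polynomial x^7 + x^6 + 1; the all-ones reseed at each frame boundary is my recollection of the SONET convention and should be treated as an assumption here, as should the stand-in payload bytes.

  def scrambler_bits(nbits, state=0b1111111):
      # 7-bit LFSR for x^7 + x^6 + 1; real SONET reseeds it per frame
      out = []
      for _ in range(nbits):
          out.append((state >> 6) & 1)               # output bit
          fb = ((state >> 6) ^ (state >> 5)) & 1     # taps 7 and 6
          state = ((state << 1) | fb) & 0x7F
      return out

  frame = bytes(range(16))                           # stand-in payload
  ks = scrambler_bits(8 * len(frame))
  scrambled = bytes(b ^ sum(ks[8 * i + j] << (7 - j) for j in range(8))
                    for i, b in enumerate(frame))
  print(scrambled.hex())

The sequence repeats every 127 bits, which is why a hostile payload can cancel it; that is the failure mode the PPP-over-SONET documents cited above address.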
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 22 Jan 2001 00:20:43 GMT From: "Matt Timmermans" <matt@timmermans.nospam-remove.org> Message-ID: <v1La6.139079$f36.5483136@news20.bellglobal.com> References: <3a6b65de.5747763@news.io.com> Newsgroups: sci.crypt Lines: 32 "Terry Ritter" <ritter@io.com> wrote in message news:3a6b65de.5747763@news.io.com... > [... snip lots of stuff I have no quarrel with...] > > OK, it becomes apparent that there is serious confusion here. > > There is a ciphering technology which I created and named "Dynamic > Substitution." There is another, different, technology which I also > created and named "Dynamic Transposition." Here we discuss Dynamic > Transposition. Sorry -- that was just a slip of the keyboard. I had no idea that you've also made something called "dynamic substitution". > Dynamic Transposition is mainly the idea of bit-balancing a data > block, and then bit-permuting that block from a confusion stream. It > is thus also a combiner of data and RNG confusion, but it is > definitely a block cipher combiner, and not a stream cipher combiner. > [...] > Dynamic Transposition is not a particular cipher; it is instead an > entire class of ciphers which is not covered in the crypto texts. A > cipher designer must choose what sort of RNG to use, how to key it, > how to protect it, and how to shuffle. There are various options, the > details of which are not particularly relevant to the Dynamic > Transposition technology itself, which is the issue. And a nifty combiner it is, too. I only have issue with the claims of provable security.
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 22 Jan 2001 14:19:36 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A6C3368.DDA1E5F2@t-online.de> References: <3a6b65de.5747763@news.io.com> Newsgroups: sci.crypt Lines: 27 Terry Ritter wrote: > [snip] > Dynamic Substitution is the idea of enciphering data through a keyed > Simple Substitution table, and then changing the contents of that > table. When a character is enciphered through a table, that > particular table transformation may be exposed. We can prevent that > by changing the just-used table entry to some entry in the table (even > itself), selected at pseudo-random. We thus get a state-based, > dynamic, combiner of data and RNG confusion, which is nonlinear and > yet reversible. Dynamic Substitution is a stream cipher combiner. In a recent article ('Another poorman's cipher', 15th Jan) I mentioned that the common way of employing a PRNG's output as key to address a polyalphabetical substitution table leads one to consider a fairly computing-intensive, though very simple to implement, special case where the substitution table consists of one single column only and that column is newly generated for each input character to be encrypted. Is your scheme virtually the same? (From your description it seems that you keep a large but fixed table.) Thanks. M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 22 Jan 2001 19:57:24 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6c903d.4813299@news.io.com> References: <3A6C3368.DDA1E5F2@t-online.de> Newsgroups: sci.crypt Lines: 47 On Mon, 22 Jan 2001 14:19:36 +0100, in <3A6C3368.DDA1E5F2@t-online.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@t-online.de> wrote: >Terry Ritter wrote: >> >[snip] > >> Dynamic Substitution is the idea of enciphering data through a keyed >> Simple Substitution table, and then changing the contents of that >> table. When a character is enciphered through a table, that >> particular table transformation may be exposed. We can prevent that >> by changing the just-used table entry to some entry in the table (even >> itself), selected at pseudo-random. We thus get a state-based, >> dynamic, combiner of data and RNG confusion, which is nonlinear and >> yet reversible. Dynamic Substitution is a stream cipher combiner. > >In a recent article ('Another poorman's cipher', 15th Jan) >I mentioned that the common way of employing a PRNG's >output as key to address a polyalphabetical substitution >table leads one to consider a fairly computing intensive, >though very simple to implement, special case where the >substitution table consists of one single column only and >that column is newly generated for each input charater >to be encrypted. Is you scheme virtually the same? (From >your description it seems that you keep a large but fixed >table.) Thanks. I believe that would be covered by my patent, yes. On the other hand, if you had a fixed set of tables and then selected among them as part of the keying sequence, that is just polyalphabetic and old. The problem with that is "balance," which can be cured by generating a Latin square of appropriate size, a use which I did not patent. A Latin square of order 256 will combine data and confusion bytes in a balanced way. Of course a random Ls will take some time to create, and will also require 64k, most of which will not be used before some of it is re-used enough so we have to change it. It is much, much stronger than XOR, but weakness will grow with use. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
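To make the balance property concrete: in a Latin square combiner every row and every column is a permutation of 0..255, so for any fixed confusion byte the data-to-ciphertext map is invertible, and a uniform confusion byte yields a uniform output whatever the data byte. The sketch below fills the square with the trivial example L[d][k] = d + k mod 256 just to show the structure and the 64K storage cost; it is an illustrative assumption, not Ritter's construction -- a serious combiner would use a keyed, pseudo-randomly generated square.

    #include <stdint.h>

    static uint8_t L[256][256];     /* the Latin square (the 64K table)      */
    static uint8_t Linv[256][256];  /* column-wise inverse, for deciphering  */

    void build_square(void) {
        for (int d = 0; d < 256; d++)
            for (int k = 0; k < 256; k++) {
                L[d][k] = (uint8_t)(d + k);        /* placeholder square only */
                Linv[(uint8_t)(d + k)][k] = (uint8_t)d;
            }
    }

    uint8_t combine(uint8_t data, uint8_t confusion)  { return L[data][confusion]; }
    uint8_t extract(uint8_t ctext, uint8_t confusion) { return Linv[ctext][confusion]; }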
Subject: Re: Dynamic Transposition Revisited (long)
Date: Mon, 22 Jan 2001 22:15:02 +0100
From: Mok-Kong Shen <mok-kong.shen@t-online.de>
Message-ID: <3A6CA2D6.13D3012B@t-online.de>
References: <3a6c903d.4813299@news.io.com>
Newsgroups: sci.crypt
Lines: 37

Terry Ritter wrote:
>
> Mok-Kong Shen<mok-kong.shen@t-online.de> wrote:
>
> >Terry Ritter wrote:
> >>
> >[snip]
> >
> >> Dynamic Substitution is the idea of enciphering data through a keyed
> >> Simple Substitution table, and then changing the contents of that
> >> table. When a character is enciphered through a table, that
> >> particular table transformation may be exposed. We can prevent that
> >> by changing the just-used table entry to some entry in the table (even
> >> itself), selected at pseudo-random. We thus get a state-based,
> >> dynamic, combiner of data and RNG confusion, which is nonlinear and
> >> yet reversible. Dynamic Substitution is a stream cipher combiner.
> >
> >In a recent article ('Another poorman's cipher', 15th Jan)
> >I mentioned that the common way of employing a PRNG's
> >output as key to address a polyalphabetical substitution
> >table leads one to consider a fairly computing intensive,
> >though very simple to implement, special case where the
> >substitution table consists of one single column only and
> >that column is newly generated for each input charater
> >to be encrypted. Is you scheme virtually the same? (From
> >your description it seems that you keep a large but fixed
> >table.) Thanks.
>
> I believe that would be covered by my patent, yes.

When was your patent issued? Could you tell? I am anyway quite surprised that your patent seems to be of much the same nature as Hitachi's rotation patent.

M. K. Shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 22 Jan 2001 22:00:31 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6cad41.12242144@news.io.com> References: <3A6CA2D6.13D3012B@t-online.de> Newsgroups: sci.crypt Lines: 68 On Mon, 22 Jan 2001 22:15:02 +0100, in <3A6CA2D6.13D3012B@t-online.de>, in sci.crypt Mok-Kong Shen <mok-kong.shen@t-online.de> wrote: >Terry Ritter wrote: >> >> Mok-Kong Shen<mok-kong.shen@t-online.de> wrote: >> >> >Terry Ritter wrote: >> >> >> >[snip] >> > >> >> Dynamic Substitution is the idea of enciphering data through a keyed >> >> Simple Substitution table, and then changing the contents of that >> >> table. When a character is enciphered through a table, that >> >> particular table transformation may be exposed. We can prevent that >> >> by changing the just-used table entry to some entry in the table (even >> >> itself), selected at pseudo-random. We thus get a state-based, >> >> dynamic, combiner of data and RNG confusion, which is nonlinear and >> >> yet reversible. Dynamic Substitution is a stream cipher combiner. >> > >> >In a recent article ('Another poorman's cipher', 15th Jan) >> >I mentioned that the common way of employing a PRNG's >> >output as key to address a polyalphabetical substitution >> >table leads one to consider a fairly computing intensive, >> >though very simple to implement, special case where the >> >substitution table consists of one single column only and >> >that column is newly generated for each input charater >> >to be encrypted. Is you scheme virtually the same? (From >> >your description it seems that you keep a large but fixed >> >table.) Thanks. >> >> I believe that would be covered by my patent, yes. > >When was your patent issued? Could you tell? I am >anyway quite surprised that your patent seems to be about >of the same nature as Hitachi's rotation patent. While I suppose I should be heartened for my work to get any attention at all, this has been on my web pages for years, which just seems more sad than anything else. Here it is: http://www.io.com/~ritter/#DynSubTech http://www.io.com/~ritter/PATS/DYNSBPAT.HTM http://www.io.com/~ritter/PATS/DYNSBPAT.HTM#Claims "I claim as my invention: 1. A mechanism for combining a first data source and a second data source into result data, including: (a) substitution means for translating values from said first data source into said result data or substitute values, and (b) change means, at least responsive to some aspect of said second data source, for permuting or re-arranging a plurality of the translations or substitute values within said substitution means, potentially after every substitution operation." The "second data source" is usually the confusion stream. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
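Read literally, claim 1 suggests a very small core loop: translate a byte through the table, then exchange the just-used entry with a pseudo-randomly selected one. The sketch below only illustrates that claim language; it is not Ritter's implementation. rand256() is a hypothetical stand-in for the keyed confusion source, and a real cipher would also maintain the inverse table in parallel so the receiver can decipher.

    #include <stdint.h>

    static uint8_t tbl[256];       /* keyed simple-substitution table */

    extern uint8_t rand256(void);  /* hypothetical keyed confusion source */

    uint8_t dynsub_encipher(uint8_t p) {
        uint8_t c = tbl[p];        /* (a) substitute                        */
        uint8_t j = rand256();     /* (b) then change the just-used entry   */
        uint8_t t = tbl[p];        /*     by exchanging it with any entry,  */
        tbl[p] = tbl[j];           /*     possibly itself                   */
        tbl[j] = t;
        return c;
    }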
Subject: Re: Dynamic Transposition Revisited (long)
Date: Tue, 23 Jan 2001 00:37:02 +0100
From: Mok-Kong Shen <mok-kong.shen@t-online.de>
Message-ID: <3A6CC41E.5496B292@t-online.de>
References: <3a6cad41.12242144@news.io.com>
Newsgroups: sci.crypt
Lines: 83

Terry Ritter wrote:
>
> Mok-Kong Shen<mok-kong.shen@t-online.de> wrote:
>
> >Terry Ritter wrote:
> >>
> >> Mok-Kong Shen<mok-kong.shen@t-online.de> wrote:
> >>
> >> >Terry Ritter wrote:
> >> >>
> >> >[snip]
> >> >
> >> >> Dynamic Substitution is the idea of enciphering data through a keyed
> >> >> Simple Substitution table, and then changing the contents of that
> >> >> table. When a character is enciphered through a table, that
> >> >> particular table transformation may be exposed. We can prevent that
> >> >> by changing the just-used table entry to some entry in the table (even
> >> >> itself), selected at pseudo-random. We thus get a state-based,
> >> >> dynamic, combiner of data and RNG confusion, which is nonlinear and
> >> >> yet reversible. Dynamic Substitution is a stream cipher combiner.
> >> >
> >> >In a recent article ('Another poorman's cipher', 15th Jan)
> >> >I mentioned that the common way of employing a PRNG's
> >> >output as key to address a polyalphabetical substitution
> >> >table leads one to consider a fairly computing intensive,
> >> >though very simple to implement, special case where the
> >> >substitution table consists of one single column only and
> >> >that column is newly generated for each input charater
> >> >to be encrypted. Is you scheme virtually the same? (From
> >> >your description it seems that you keep a large but fixed
> >> >table.) Thanks.
> >>
> >> I believe that would be covered by my patent, yes.
> >
> >When was your patent issued? Could you tell? I am
> >anyway quite surprised that your patent seems to be about
> >of the same nature as Hitachi's rotation patent.
>
> While I suppose I should be heartened for my work to get any attention
> at all, this has been on my web pages for years, which just seems more
> sad than anything else.  Here it is:
>
> http://www.io.com/~ritter/#DynSubTech
> http://www.io.com/~ritter/PATS/DYNSBPAT.HTM
> http://www.io.com/~ritter/PATS/DYNSBPAT.HTM#Claims
>
> "I claim as my invention:
>
> 1. A mechanism for combining a first data source and a second data
> source into result data, including:
>
> (a) substitution means for translating values from said first
> data source into said result data or substitute values, and
>
> (b) change means, at least responsive to some aspect of said
> second data source, for permuting or re-arranging a plurality of the
> translations or substitute values within said substitution means,
> potentially after every substitution operation."
>
> The "second data source" is usually the confusion stream.

I presume that you don't have an EU or German patent on that. So I can continue to have that kind of substitution in my cipher WEAK3-E. BTW, honestly I never considered my dynamic update of tables to be anything novel. It was not in the predecessor WEAK3 but added into WEAK3-E as one of the little add-ons to complicate the job of the opponent. I listed as one of the differences between the two versions the following: (see http://home.t-online.de/home/mok-kong.shen/#paper12)

3. After processing a user-specified number of records, all tables used in the algorithm will be refreshed, i.e. generated anew using the compound PRNG. This provides additional stuff for the analyst to deal with.

M. K. Shen
--------------------------
http://home.t-online.de/home/mok-kong.shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Sat, 20 Jan 2001 23:48:44 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6a229b.36992050@news.powersurfr.com> References: <3a692576.5916670@news.io.com> Newsgroups: sci.crypt Lines: 25 On Sat, 20 Jan 2001 05:43:52 GMT, ritter@io.com (Terry Ritter) wrote, in part: >Dynamic Transposition is about building an unbreakable cipher >*without* needing an unbreakable RNG. This is done by hiding the >breakable sequence behind multiple levels of haystack, each so massive >that they cannot be searched. So the sequence in question cannot be >revealed, which makes it somewhat difficult to attack. In the sense that for any particular N-bit block with N/2 one bits and N/2 zero bits, given known plaintext, the permutation chosen from among the N! possibilities gives the same ciphertext as any permutation from a set of (N/2)!^2 permutations? And you were, in addition, proposing to transpose twice. But these things can certainly all be done with substitutions as well, and more conveniently. However, I certainly will admit that the XOR of several PRNGs is easier to analyze than the composition of several permutations, but the use of substitution does not force us to be as unimaginative as that - and gives us a much wider choice of options than transposition does. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
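Savard's (N/2)!^2 count is easy to verify by brute force at toy size: any permutation that maps the plaintext's ones onto the ciphertext's ones (one of (N/2)! ways) and its zeros onto the zeros (another (N/2)! ways) produces the identical ciphertext. A small C check, assuming nothing beyond the thread's definitions; the block and target permutation are arbitrary choices:

    #include <stdio.h>

    #define N 6
    static const int plain[N] = {1,0,1,0,1,0};  /* a bit-balanced block */
    static int target[N];                       /* ciphertext to match  */
    static int perm[N];
    static long hits;

    static void test(void) {                    /* c[i] = plain[perm[i]] */
        for (int i = 0; i < N; i++)
            if (plain[perm[i]] != target[i]) return;
        hits++;
    }

    static void heap(int k) {                   /* enumerate all N! perms */
        if (k == 1) { test(); return; }
        for (int i = 0; i < k - 1; i++) {
            heap(k - 1);
            int j = (k % 2 == 0) ? i : 0;
            int t = perm[j]; perm[j] = perm[k-1]; perm[k-1] = t;
        }
        heap(k - 1);
    }

    int main(void) {
        for (int i = 0; i < N; i++) perm[i] = i;
        for (int i = 0; i < N; i++) target[i] = plain[(i + 1) % N];
        heap(N);
        printf("%ld of 720 permutations fit\n", hits);  /* prints 36 = (3!)^2 */
        return 0;
    }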
Subject: Re: Dynamic Transposition Revisited (long)
Date: Sun, 21 Jan 2001 00:23:24 GMT
From: Benjamin Goldberg <goldbb2@earthlink.net>
Message-ID: <3A6A2BFE.92927565@earthlink.net>
References: <3a6a229b.36992050@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 51

John Savard wrote:
>
> On Sat, 20 Jan 2001 05:43:52 GMT, ritter@io.com (Terry Ritter) wrote,
> in part:
>
> >Dynamic Transposition is about building an unbreakable cipher
> >*without* needing an unbreakable RNG.  This is done by hiding the
> >breakable sequence behind multiple levels of haystack, each so
> >massive that they cannot be searched.  So the sequence in question
> >cannot be revealed, which makes it somewhat difficult to attack.
>
> In the sense that for any particular N-bit block with N/2 one bits and
> N/2 zero bits, given known plaintext, the permutation chosen from
> among the N! possibilities gives the same ciphertext as any
> permutation from a set of (N/2)!^2 permutations?
>
> And you were, in addition, proposing to transpose twice.

If I select a permutation p1, from among one of N! possibilities, and another permutation, p2, also from one of N! possibilities, and then take the composition p3 = (p1 o p2), then the resulting permutation is also one of the N! possibilities. Additional transpositions don't change any of the properties of the ciphertext.

What they do change, is this: Suppose I have found a way to identify from a known plaintext *precisely* what permutation (p3) was applied to it. Now suppose I also have a way to go from a single permutation to the generator which, umm, generated it. There is a large difficulty now: To get the generator output, I need p1 and p2. If their selection was unbiased, then there are N! different combinations of (p1, p2) which compose to p3.

> But these things can certainly all be done with substitutions as well,
> and more conveniently.

Except that it is quite difficult to create, for example, a secure 4096 bit substitution, whereas it is easy to create a 4096 bit permutation.

> However, I certainly will admit that the XOR of
> several PRNGs is easier to analyze than the composition of several
> permutations, but the use of substitution does not force us to be as
> unimaginative as that - and gives us a much wider choice of options
> than transposition does.
>
> John Savard
> http://home.ecn.ab.ca/~jsavard/crypto.htm

--
Most scientific innovations do not begin with "Eureka!"  They begin with "That's odd.  I wonder why that happened?"
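Goldberg's count of N! pairs follows from a one-line construction: once p3 is fixed, every choice of first shuffle p1 forces a unique second shuffle p2 = p1^-1 o p3. A sketch, using the convention that applying p to a block b gives b[p[i]]; the particular arrays are arbitrary examples:

    #include <stdio.h>

    #define N 8

    /* Given the observed net permutation p3 and ANY candidate first
       shuffle p1, compute the unique p2 with p3[i] = p1[p2[i]].
       Hence N! choices of p1 give N! valid (p1, p2) pairs. */
    void second_shuffle(const int p1[N], const int p3[N], int p2[N]) {
        int inv[N];
        for (int i = 0; i < N; i++) inv[p1[i]] = i;    /* invert p1 */
        for (int i = 0; i < N; i++) p2[i] = inv[p3[i]];
    }

    int main(void) {
        int p3[N] = {3,1,4,0,5,2,7,6};   /* observed permutation    */
        int p1[N] = {6,2,0,7,1,5,3,4};   /* arbitrary first shuffle */
        int p2[N];
        second_shuffle(p1, p3, p2);
        for (int i = 0; i < N; i++)      /* verify p1[p2[i]] == p3[i] */
            if (p1[p2[i]] != p3[i]) { puts("mismatch"); return 1; }
        puts("p1 o p2 == p3");
        return 0;
    }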
Subject: Re: Dynamic Transposition Revisited (long) Date: Wed, 24 Jan 2001 03:00:42 GMT From: AllanW <allan_w@my-deja.com> Message-ID: <94lggn$451$1@nnrp1.deja.com> References: <3a692576.5916670@news.io.com> Newsgroups: sci.crypt Lines: 56 ritter@io.com (Terry Ritter) wrote: > Dynamic Transposition is unusual in that knowing the plaintext and the > associated ciphertext does not reveal the enciphering permutation. > The reason for this is that many different bit-permutations will > produce the bit-for-bit exact same transformation between plaintext > and ciphertext. Therefore, having known plaintext does not reveal the > enciphering permutation, and thus cannot be exploited to begin to > expose the RNG sequence. > > Note that even if the correct permutation were found, the RNG sequence > which created it would not be exposed to any extent, if > double-shuffling was used to create the permutation. The reason for > this is that double-shuffling would use twice the amount of RNG > sequence as needed for a selection among all possible permutations. > Double-shuffling is literally an arbitrary permutation from a known > state, and then another arbitrary permutation from that to any > possible permutation. Any particular result permutation could be the > result of any possible first permutation, with the appropriate second > permutation, or vise versa. Accordingly, one knows no more about what > sequence was involved than before one knew the correct permutation. Consider this alternative to double-shuffling (or they could be used together): Process the entire plaintext once using an N-byte block. But instead of considering the output to be ciphertext, pack it into blocks of M-bytes, where M is less than N/3, and mutually prime to N. For instance, N=2187, M=512. Once again, use filler bits so that the number of 0-bits and 1-bits are balanced, and then shuffle that buffer. I haven't thought this through all the way yet; you might have to use two PRNG's in order to be able to decrypt the numbers later, but if we can find a way to know which PRNG values are used for which decoder (i.e first N for the first buffer, then 3M for the second buffer, and then N more for the big one and so on), then a single PRNG might still suffice. The smaller buffer will need slightly more values than the big one, due to adding a very few bytes to each buffer to balance 0-bits with 1-bits. The point of using double-shuffling is to make known-plaintext attacks more difficult by further masking the actions of the PRNG. But you also acknowledged that the result of two rounds of shuffling is equivalent to one round controlled by some other unknown sequence of random numbers. On the other hand, using two buffers of different sizes makes the two passes have different meaning. Also, in order to know that you're using PRNG values that will succeed, you will have to decode at least 4 of the smaller blocks in order to have enough data to feed into the larger block. Using two different PRNG's might make analysis even more difficult. -- Allan_W@my-deja.com is a "Spam Magnet," never read. Please reply in newsgroups only, sorry. Sent via Deja.com http://www.deja.com/
Subject: Re: Dynamic Transposition Revisited (long) Date: Wed, 24 Jan 2001 06:38:41 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6e7857.11243304@news.io.com> References: <94lggn$451$1@nnrp1.deja.com> Newsgroups: sci.crypt Lines: 99 On Wed, 24 Jan 2001 03:00:42 GMT, in <94lggn$451$1@nnrp1.deja.com>, in sci.crypt AllanW <allan_w@my-deja.com> wrote: >ritter@io.com (Terry Ritter) wrote: >> Dynamic Transposition is unusual in that knowing the plaintext and the >> associated ciphertext does not reveal the enciphering permutation. >> The reason for this is that many different bit-permutations will >> produce the bit-for-bit exact same transformation between plaintext >> and ciphertext. Therefore, having known plaintext does not reveal the >> enciphering permutation, and thus cannot be exploited to begin to >> expose the RNG sequence. >> >> Note that even if the correct permutation were found, the RNG sequence >> which created it would not be exposed to any extent, if >> double-shuffling was used to create the permutation. The reason for >> this is that double-shuffling would use twice the amount of RNG >> sequence as needed for a selection among all possible permutations. >> Double-shuffling is literally an arbitrary permutation from a known >> state, and then another arbitrary permutation from that to any >> possible permutation. Any particular result permutation could be the >> result of any possible first permutation, with the appropriate second >> permutation, or vise versa. Accordingly, one knows no more about what >> sequence was involved than before one knew the correct permutation. > >Consider this alternative to double-shuffling (or they could be >used together): Process the entire plaintext once using an >N-byte block. But instead of considering the output to be >ciphertext, pack it into blocks of M-bytes, where M is less than >N/3, and mutually prime to N. For instance, N=2187, M=512. Once >again, use filler bits so that the number of 0-bits and 1-bits >are balanced, and then shuffle that buffer. I haven't thought >this through all the way yet; you might have to use two PRNG's in >order to be able to decrypt the numbers later, but if we can find >a way to know which PRNG values are used for which decoder (i.e >first N for the first buffer, then 3M for the second buffer, and >then N more for the big one and so on), then a single PRNG might >still suffice. The smaller buffer will need slightly more values >than the big one, due to adding a very few bytes to each buffer >to balance 0-bits with 1-bits. > >The point of using double-shuffling is to make known-plaintext >attacks more difficult by further masking the actions of the >PRNG. Yes, sort of. Of course, to get there, we must first assume that the opponent has fully defined an enciphering permutation, and we have no idea how one might do that. That is primary strength. The double-shuffling is just closing another door, and has probably wasted far more discussion than it is worth. >But you also acknowledged that the result of two rounds >of shuffling is equivalent to one round controlled by some >other unknown sequence of random numbers. Well, of course. That's what "hiding" implies. Double-shuffling is analogous to a hash, which is non-reversible specifically because more information is processed than can be represented in the internal state. As a consequence, many different sequences can produce the same result, so even if we know that result, we don't know which sequence produced it. 
Double-shuffling produces a permutation, a permutation which might have been produced by a single shuffle. But if we create the permutation with a single shuffle, to know the permutation is to know the sequence. Instead, since we create the permutation from twice the amount of information needed for a permutation, to know the permutation is *not* to know the sequence.

>On the other hand,
>using two buffers of different sizes makes the two passes have
>different meaning.  Also, in order to know that you're using
>PRNG values that will succeed, you will have to decode at least
>4 of the smaller blocks in order to have enough data to feed
>into the larger block.  Using two different PRNG's might make
>analysis even more difficult.

Right. We can pile this stuff up as high as we want. But exactly what are we solving? If there is a weakness, we need to solve it. But there is no point in doing things at random.

The main strength of Dynamic Transposition is that the opponents cannot define what permutation has occurred, even if they have both the plaintext and the ciphertext.

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM
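As a concrete anchor for this exchange, here is the double-shuffling core in miniature C. rand_below() is a placeholder assumed for the sketch -- it stands in for the keyed, jitterized confusion RNG reduced to a range without bias -- and is not a specified component of the design.

    #include <stddef.h>
    #include <stdint.h>

    extern size_t rand_below(size_t n);  /* placeholder: unbiased value in 0..n-1 */

    /* One Durstenfeld ("Algorithm P") pass: with an unbiased source it
       selects uniformly among all n! orderings, one value per element. */
    static void shuffle_bits(uint8_t *block, size_t n) {
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = rand_below(i + 1);
            uint8_t t = block[i]; block[i] = block[j]; block[j] = t;
        }
    }

    /* Double-shuffle: the net permutation is the composition of two
       passes, so roughly twice as much sequence information goes in as
       the observable result can express. */
    void dynamic_transpose(uint8_t *block, size_t n) {
        shuffle_bits(block, n);
        shuffle_bits(block, n);
    }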
Subject: Re: Dynamic Transposition Revisited (long)
Date: Sat, 20 Jan 2001 22:10:10 -0800
From: gcone <gcone@virtualvision.com>
Message-ID: <3A6A7D42.2FC84F0B@virtualvision.com>
References: <3a67737e.15537635@news.io.com>
Newsgroups: sci.crypt
Lines: 126

Terry Ritter wrote:
[snip]
>
> A Dynamic Transposition cipher is conceptually very simple:
>
> (1) We collect plaintext data in bit-balanced (or almost
> bit-balanced) blocks.
>
> (2) We shuffle the bits in those blocks under the
> control of a keyed pseudorandom sequence.
>

The strength of dynamic transposition rests on these two points.

The benefit of bit-balancing is explained as follows in the original post:

>
> When every plaintext block is exactly bit-balanced, any
> possible plaintext block is some valid bit-permutation of
> any ciphertext block.  So, even if an opponent could
> exhaustively un-permute a ciphertext block, the result
> would just be every possible plaintext block.

and a concrete means of bit-shuffling (a.k.a. pseudorandom permutation) is offered as

>
> .) The usual solution is the well-known algorithm by
> Durstenfeld, called "Shuffle," which Knuth II calls
> "Algorithm P (Shuffling)," although any valid permutation
> generator would be acceptable.
>
[snip]
>
> If we shuffle each block just once, an opponent who somehow
> knows the correct resulting permutation can use that
> information to reproduce the shuffling RNG sequence, and
> thus start to attack the RNG.  And even though we think
> such an event impossible (since the correct permutation is
> hidden by a plethora of different bit-permutations that
> each produce exactly the same ciphertext from exactly the
> same plaintext), eliminating that possibility (by shuffling
> each block twice) is probably worthwhile.  This does not
> produce more permutations, it just hides shuffling sequence.

Algorithm P cannot generate more than M distinct permutations when a linear congruential generator of modulus M is used as the PRNG per Knuth, Vol. 2, section 3.4.2 Random Sampling and Shuffling. (Page 145 in the Third Edition.)

In fact, Algorithm P cannot possibly generate more than M distinct permutations when the PRNG is of the form X(n+1) = f(X(n)) such that X(n) can take on only M distinct values. (Knuth, same reference.)

Consider a simple multiplicative congruential generator on Z*_p, such as X(n+1) = g * X(n) mod p, where p is prime, g is a generator on Z*_p, and a seed S, 0 < S < p is the initial value X(0). Then we can express the nth value of the linear congruential generator as X(n) = ( S * g^n ) mod p. X(n) takes on p-1 values. Driving Algorithm P with X(n) results in only p-1 distinct permutations.

The number of possible permutations of N values is N! ( N factorial, not "surprise" :-) ) A block of N bits would have N! possible permutations. Use Algorithm P driven with a multiplicative congruential generator X(n) to permute the N bits. Algorithm P generates all possible permutations iff p-1 >= N!, or p >= N! + 1.

For 8 bits, a prime modulus p > 8! + 1 = 40321 is needed to ensure Algorithm P generates every possible permutation of the 8 bits.

For 16 bits, a prime modulus p > 16! + 1, on the order of 2.092 x 10^13, is needed to ensure Algorithm P generates every possible permutation of the 16 bits.

For 512 bit blocks, a prime modulus p > 512! + 1 (a big number) is needed to ensure Algorithm P generates every possible permutation of the 512 bit block.
The dynamic transposition cipher relies on every permutation of the N bits in the block being possible (and equally probable). However, here we see that the prime modulus p of a simple PRNG driving Algorithm P grows factorially in magnitude with the number of bits to permute when we try to ensure every permutation of the N bits is possible. Speedy encryption will require smaller prime moduli as N increases, with the fraction of the N! permutations actually used decreasing as N increases. Certain permutations of the N bits will never occur, and Knuth indicates the excluded permutations are given by a "fairly simple mathematical rule such as a lattice structure." That will help cryptanalysis.

This is only a first step in exploring the strength of the dynamic transposition cipher. I can't say anything about how or if this result generalizes to other shuffling algorithms (other than Algorithm P from Knuth), for example. Or is that fraction of the N! permutations "enough" to preclude brute-force attacks? Or can one predict characteristics of the kinds of permutations that will be generated using a simple LCG with Algorithm P (such as the presence of certain cycles in permutations and dependencies between cycles in permutations across encryptions of successive blocks - the Knuth comment on lattice structure seems to point to that possibility)? This flavor of PRNG is, after all, cryptographically insecure.

But it's a start :-)

John A. Malley
102667.2235@compuserve.com
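The point just made can be watched happening at toy scale: with p = 13 and multiplier 2 (a primitive root mod 13), the LCG has only 12 possible states, so Algorithm P over a 4-bit block can reach at most 12 of the 4! = 24 permutations no matter which seed is used. A minimal sketch; the tiny parameters are illustrative assumptions only:

    #include <stdio.h>
    #include <string.h>

    #define P 13   /* toy prime modulus; 2 is a primitive root mod 13  */
    #define N 4    /* 4! = 24 permutations, at most P-1 = 12 reachable */

    static unsigned x;
    static unsigned next(void) { x = (2u * x) % P; return x; }

    int main(void) {
        char seen[24][N + 1];
        int nseen = 0;
        for (unsigned seed = 1; seed < P; seed++) {
            int a[N];
            char key[N + 1];
            for (int i = 0; i < N; i++) a[i] = i;
            x = seed;
            for (int i = N - 1; i > 0; i--) {     /* Algorithm P */
                int j = (int)(next() % (unsigned)(i + 1));
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
            for (int i = 0; i < N; i++) key[i] = (char)('0' + a[i]);
            key[N] = '\0';
            int dup = 0;
            for (int k = 0; k < nseen; k++)
                if (strcmp(seen[k], key) == 0) { dup = 1; break; }
            if (!dup) strcpy(seen[nseen++], key);
        }
        printf("distinct permutations reachable: %d of 24\n", nseen);
        return 0;
    }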
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 21 Jan 2001 23:12:55 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6b6cbf.7508871@news.io.com> References: <3A6A7D42.2FC84F0B@virtualvision.com> Newsgroups: sci.crypt Lines: 153 On Sat, 20 Jan 2001 22:10:10 -0800, in <3A6A7D42.2FC84F0B@virtualvision.com>, in sci.crypt gcone <gcone@virtualvision.com> wrote: >Terry Ritter wrote: >[snip] >> >> A Dynamic Transposition cipher is conceptually very simple: >> >> (1) We collect plaintext data in bit-balanced (or almost >> bit-balanced) blocks. >> >> (2) We shuffle the bits in those blocks under the >> control of a keyed pseudorandom sequence. >> > >The strength of dynamic transposition rests on these two points. > >The benefit of bit-balancing is explained as follows in the original >post: > >> >> When every plaintext block is exactly bit-balanced, any >> possible plaintext block is some valid bit-permutation of >> any ciphertext block. So, even if an opponent could >> exhaustively un-permute a ciphertext block, the result >> would just be every possible plaintext block. > >and a concrete means of bit-sbuffling (a.k.a. pseudorandom permutation) >is offered as > >> >> .) The usual solution is the well-known algorithm by >> Durstenfeld, called "Shuffle," which Knuth II calls >> "Algorithm P (Shuffling)," although any valid permutation >> generator would be acceptable. >> >[snip} >> >> If we shuffle each block just once, an opponent who somehow >> knows the correct resulting permutation can use that >> information to reproduce the shuffling RNG sequence, and >> thus start to attack the RNG. And even though we think >> such an event impossible (since the correct permutation is >> hidden by a plethora of different bit-permutations that >> each produce exactly the same ciphertext from exactly the >> same plaintext), eliminating that possibility (by shuffling >> each block twice) is probably worthwhile. This does not >> produce more permutations, it just hides shuffling sequence. > >Algorithm P cannot generate more than M distinct permutations when a >linear congruential generator of modulus M is used as the PRNG per >Knuth, Vol. 2, section 3.4.2 Random Sampling and Shuffling. (Page 145 in >the Third Edition.) So don't do that. The whole point of providing links to my previous work was to show examples of real ciphering components. >In fact, Algorithm P cannot possibly generate more than M distinct >permutations when the PRNG is of the form X(n+1) = f(X(n)) such that >X(n) can take on only M distinct values. (Knuth, same reference.) Again, don't do that. >Consider a simple multiplicative congruential generator on Z*_p, such as >X(n+1) = g * X(n) mod p, where p is prime, g is a generator on Z*_p, and >a seed S, 0 < S < p is the initial value X(0). Then we can express the >nth value of the linear congruential generator as X(n) = ( S * g^n ) >mod p. X(n) takes on p-1 values. Driving Algorithm P with X(n) results >in only p-1 distinct permutations. > >The number of possible permutations of N values is N! ( N factorial, >not "surprise" :-) ) A block of N bits would have N! possible >permutations. Use Algorithm P driven with a multiplicative congruential >generator X(n) to permute the N bits. Algorithm P generates all >possible permutations iff p-1 >= N!, or p >= N! + 1. > >For 8 bits, a prime modulus p > 8! + 1 = 40321 is needed to ensure >Algorithm P generates every possible permutation of the 8 bits. > >For 16 bits, a prime modulus p > 16! 
+ 1 , on the order of 2.092 x >10^13, is needed to ensure Algorithm P generates every possible >permutation of the 16 bits. > >For 512 bit blocks, a prime modulus p > 512! + 1 (a big number) is >needed to ensure Algorithm P generates every possible permutation of the >512 bit block. > >The dynamic transposition cipher relies on every permutation of the N >bits in the block being possible (and equally probable.) However, here >we see that the prime modulus p of a simple PRNG driving Algorithm P >grows factorial in magnitude with the number of bits to permute when we >try to ensure every permutation of the N bits is possible. Speedy >encryption will require smaller prime moduli as N increases, with the >fraction of the N! permutations actually used decreasing as N increases. >Certain permutations of the N bits will never occur, and Knuth indicates >the excluded permutations are given by a "fairly simple mathematical >rule such as a lattice structure." That will help cryptanalysis. > >This is only a first step in exploring the strength of the dynamic >transposition cipher. I can't say anything about how or if this result >generalizes to other shuffling algorithms (other than Algorithm P from >Knuth), for example. Or is that fraction of the N! permutation "enough" >to preclude brute-force attacks? Or can one predict characteristics of >the kinds of permutations that will be generated using a simple LCG with >Algorithm P (such as the presence of certain cycles in permutations and >dependencies between cycles in permutations across encryptions of >successive blocks - the Knuth comment on lattice structure seems to >point to that possibility.)? > >But it's a start :-) I don't know, maybe it will generalize into a better analysis. But it certainly has nothing to do with the modern cryptographic components I know. I recommend using an Additive Generator for many reasons, perhaps the most important of which is the ease with which one can develop RNG's with large amounts of internal state. (It is also fairly fast, and scales well to tiny size and so can be exhaustively tested.) As I recall, the shuffling RNG in my Cloak2 cipher was of degree 9689. If we use 32-bit elements, then we have 310k bits of internal state. I have also found and used a primitive of degree 11,213 (using my old 20MHz 386), and we probably could do better than that now, if necessary. For 512-bit blocks, 512! is about 3876 bits long (see: http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM#Factorials ); that is the amount of information needed to select any one from among all possible permutations. (Shuffle will throw away some of that if we use a rejection scheme to flatten the decreasing range, and the jitterizer will throw out more, but those are details.) The amount of information in the 32-bit deg 9689 Additive RNG is thus almost 80 times as large as is needed to define a particular 512-bit permutation. There is thus ample information to perform such a shuffling. It may be that we would find it convenient to use something smaller than a 4k-bit block to reduce the size of the RNG, but these are design tradeoffs, as opposed to technology problems. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
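For concreteness, here is the shape of such an Additive Generator in C, shown at Knuth's well-known lags (55, 24) rather than at degree 9689; the loop is identical at any published primitive lag pair, and with 9689 32-bit elements it carries the ~310K bits of state mentioned above. Keyed initialization and Ritter's jitterizer are omitted as out of scope; this is a structural sketch, not the Cloak2 component itself.

    #include <stdint.h>

    #define DEG 55   /* Cloak2's shuffling RNG uses degree 9689 instead */
    #define LAG 24

    static uint32_t s[DEG];   /* keyed initial fill omitted (must not be all even) */
    static int pos;

    /* x[n] = x[n-55] + x[n-24] (mod 2^32), via a circular buffer in
       which s[pos] currently holds the oldest element, x[n-55]. */
    uint32_t additive_next(void) {
        int lagged = pos - LAG;
        if (lagged < 0) lagged += DEG;
        s[pos] += s[lagged];
        uint32_t out = s[pos];
        if (++pos == DEG) pos = 0;
        return out;
    }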
Subject: Re: Dynamic Transposition Revisited (long) Date: Mon, 22 Jan 2001 12:28:29 -0600 From: Mike Rosing <rosing@physiology.wisc.edu> Message-ID: <3A6C7BCD.BB8C4E8B@physiology.wisc.edu> References: <3a6b6cbf.7508871@news.io.com> Newsgroups: sci.crypt Lines: 13 Terry Ritter wrote: > It may be that we would find it convenient to use something smaller > than a 4k-bit block to reduce the size of the RNG, but these are > design tradeoffs, as opposed to technology problems. Yup, I wasn't thinking of blocks that large. Having a balanced distribution clearly makes finding the permutation hard even with plain-cipher pairs when the block size is of the order of even 1k. Thanks for making the point :-) Patience, persistence, truth, Dr. mike
Subject: Re: Dynamic Transposition Revisited (long)
Date: Sat, 20 Jan 2001 23:15:40 -0800
From: "John A. Malley" <102667.2235@compuserve.com>
Message-ID: <3A6A8C9C.10BED5A9@compuserve.com>
References: <3a67737e.15537635@news.io.com>
Newsgroups: sci.crypt
Lines: 143

Terry Ritter wrote:
[snip]

> DYNAMIC TRANSPOSITION
>
> A Dynamic Transposition cipher is conceptually very simple:
>
> (1) We collect plaintext data in bit-balanced (or almost
> bit-balanced) blocks.
>
> (2) We shuffle the bits in those blocks under the
> control of a keyed pseudorandom sequence.
>

The security of the dynamic transposition cipher stands on these two points.

The benefit of bit-balancing is explained as

>
> When every plaintext block is exactly bit-balanced, any
> possible plaintext block is some valid bit-permutation of
> any ciphertext block.  So, even if an opponent could
> exhaustively un-permute a ciphertext block, the result
> would just be every possible plaintext block.  No particular
> plaintext block could be distinguished as the source of the
> ciphertext.

A bit-shuffling algorithm driven by a pseudorandom number generator (PRNG) permutes the N bits in a block:

>
> .) The usual solution is the well-known algorithm by
> Durstenfeld, called "Shuffle," which Knuth II calls
> "Algorithm P (Shuffling)," although any valid permutation
> generator would be acceptable.
>
[snip]

The pseudorandom permutation of a bit-balanced block is conjectured to hide the PRNG values used by the shuffling algorithm since there are so many possible permutations of the N bits whose outputs are the same. It's conjectured this shuffling of a bit-balanced block is so difficult to reverse back to the one, true sequence of PRNG outputs responsible that one need not even use cryptographically secure random number generators.

[the following cut and pasted in a new order from the original post]

> The main idea is to hide the RNG sequence (actually the
> nonlinear sequence of jitterized values), so an opponent
> cannot attack the deterministic RNG.  Strength is provided
> by the block size and guaranteed bit-balance, since, when
> shuffled, a plethora of different permutations will take
> the plaintext block to exactly the same ciphertext block.
> There simply is no one permutation which produces the given
> ciphertext.  Since a plethora of different permutations will
> produce the given ciphertext, trying them all is impractical.
> So the opponents will not know the permutation -- even with
> known plaintext -- and will not have the information to
> attack the RNG.
>
>
> Dynamic Transposition does not need the assumption of
> sequence unpredictability, because the sequence is hidden
> behind a multitude of different sequences and permutations
> which all produce the same result.  And if the sequence
> itself cannot be exposed, exploiting any predictability in
> the sequence will be difficult.  (This of course does not
> mean that Dynamic Transposition cannot be attacked:
> Brute-force attacks on the keys are still imaginable, which
> is a good reason to use large random message keys.)
>

Algorithm P cannot generate more than M distinct permutations when driven by a linear congruential sequence of modulus M. See Knuth Vol. 2, section 3.4.2, Random Sampling and Shuffling. (pg. 145 in the Third Edition.)

Algorithm P cannot generate more than M distinct permutations when driven by a PRNG of the form X(n+1) = f( X(n) ) with X taking on M distinct values.
Consider driving Algorithm P with a simple linear congruential generator X of the form X(n+1) = g*X(n) mod p, where p is a prime, g is a generator on Z*_p, and the initial value of the LCG is X(0) = S, a seed value such that 0 < S < p. (We can also express X(n) as X(n) = ( S * g^n ) mod p.) X(n) takes on p - 1 distinct values.

Use Algorithm P to shuffle (permute) N bits. There are N! possible permutations of those N bits. We want Algorithm P capable of generating every possible permutation of those N bits when driven by the simple LCG X. X produces p-1 possible permutations. To ensure every permutation of the N bits is possible requires p-1 >= N!, or p >= N! + 1.

For 8 bits we need a prime modulus p >= 8! + 1, or 40321, to ensure every permutation of N bits is possible with Algorithm P.

For 16 bits we need a prime modulus p >= 16! + 1, or about 2.093 x 10^13, to ensure every permutation of N bits is possible with Algorithm P.

For 512 bits we need a prime modulus p >= 512! + 1 (a big number) to ensure every permutation of N bits is possible with Algorithm P.

The prime modulus p for the PRNG driving Algorithm P increases factorially as N increases. For speed, pragmatic dynamic transposition ciphering with larger block sizes (N increasing) will end up using a decreasing fraction of the N! possible permutations. For a prime modulus p < N!, only (p-1)/N! of the permutations of an N-bit block are possible. Knuth states the excluded permutations "are determined by a fairly simple mathematical rule such as a lattice structure." This is a (potential) aid to cryptanalysis.

The cryptographic strength of the dynamic transposition cipher, as dependent on the number of permutations of N-bit bit-balanced blocks, decreases as N increases if the prime modulus p of the PRNG here described remains <<< N! to permit pragmatic ciphering. Pragmatic DT ciphering (in this example) could just require one to use a prime modulus p between 2^N-1 and 2^N in magnitude. Many (actually most) of the N! permutations would never be possible - and this changes the security of the DT cipher.

This is only a first step in sizing up the dynamic transpositioning cipher for cryptanalysis. Does this generalize to other shuffling algorithms driven by linear congruential sequences? How does the choice of PRNG affect the number of permutations possible with a shuffling algorithm? Does the fraction of possible permutations achieved by Algorithm P driven by such a simple LCG PRNG still prevent any attacks other than "brute-force"? Is there a "fairly simple mathematical rule" relating characteristics of permutations produced by Algorithm P driven by this simple LCG PRNG (such as dependencies in the kinds of cycles in the permutations generated successively)? Don't know. Maybe others will see something to extend this.

It's a start.

John A. Malley
102667.2235@compuserve.com
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 21 Jan 2001 00:26:38 -0800 From: "John A. Malley" <102667.2235@compuserve.com> Message-ID: <3A6A9D3E.B43593B4@compuserve.com> References: <3A6A8C9C.10BED5A9@compuserve.com> Newsgroups: sci.crypt Lines: 5 Well, so much for canceling this message. I accidentally posted under a pseudonym - saw it right away and canceled the message. I saw it appeared on Deja now, but got canceled from Compuserve. The second post (appears after this) I rewrote from scratch thinking this post got toasted.
Subject: Re: Dynamic Transposition Revisited (long)
Date: Sun, 21 Jan 2001 13:46:09 +0100
From: Mok-Kong Shen <mok-kong.shen@t-online.de>
Message-ID: <3A6ADA11.42E8E571@t-online.de>
References: <3A6A8C9C.10BED5A9@compuserve.com>
Newsgroups: sci.crypt
Lines: 28

"John A. Malley" wrote:
>
[snip]
> This is only a first step in sizing up the dynamic transpositioning
> cipher for cryptanalysis. Does this generalize to other shuffling
> algorithms driven by linear congruential sequences? How does the choice
> of PRNG affect the number of permutations possible with a shuffling
> algorithm? Does the fraction of possible permutations achieved by
> Algorithm P driven by such a simple LCG PRNG still prevent any attacks
> other than "brute-force"? Is there a "fairly simple mathematical rule"
> relating characteristics of permutations produced by Algorithm P driven
> by this simple LCG PRNG (such as dependencies in the kinds of cycles in
> the permutations generated successively)? Don't know. Maybe others will
> see something to extend this.

I think that DT can't be provably secure because there simply can't exist any magic that can turn something predictable into something unpredictable. Practically, though, using double (or perhaps more) permutations would render the job of analysis more difficult. Of course, one should consider using better pseudo-random sequences from the outset, e.g. through combining a number of PRNGs.

M. K. Shen
-----------------------------
http://home.t-online.de/home/mok-kong.shen
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 21 Jan 2001 14:01:06 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6aeb4e.377834@news.powersurfr.com> References: <3A6A8C9C.10BED5A9@compuserve.com> Newsgroups: sci.crypt Lines: 13 On Sat, 20 Jan 2001 23:15:40 -0800, "John A. Malley" <102667.2235@compuserve.com> wrote, in part: >Algorithm P cannot generate more than M distinct permutations when >driven by a linear congruential sequence of modulus M. See Knuth Vol. 2, >section 3.4.2, Random Sampling and Shuffling. (pg. 145 in the Third >Edition.) True, but Terry Ritter would not propose any such thing. He is the inventor of several quality stream ciphers. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 21 Jan 2001 09:19:01 -0800 From: "John A. Malley" <102667.2235@compuserve.com> Message-ID: <3A6B1A05.60FFA2BD@compuserve.com> References: <3a6aeb4e.377834@news.powersurfr.com> Newsgroups: sci.crypt Lines: 69 John Savard wrote: > > On Sat, 20 Jan 2001 23:15:40 -0800, "John A. Malley" > <102667.2235@compuserve.com> wrote, in part: > > >Algorithm P cannot generate more than M distinct permutations when > >driven by a linear congruential sequence of modulus M. See Knuth Vol. 2, > >section 3.4.2, Random Sampling and Shuffling. (pg. 145 in the Third > >Edition.) > > True, but Terry Ritter would not propose any such thing. He is the > inventor of several quality stream ciphers. > The example used shows us the kind of cryptographically insecure PRNG and shuffling algorithm used can affect the practical security of the Dynamic Transposition cipher. A particular shuffling algorithm driven by a particular cryptographically insecure PRNG that takes on only p-1 distinct values cannot generate every possible permutation of the N bits in a block if the prime modulus is kept to a "reasonable" magnitude relative to the number of bits N. Any PRNG takes on only M distinct values (where M depends on the nature of the PRNG.) M could be very, very big. The source cited in Knuth's Vol. 2 states the shuffling Algorithm P can only reach M possible permutations when driven by a PRNG using output feedback that itself only attains M possible output values. So if M is much smaller than N! we'll never actually get all possible permutations of the N bits - and thus the number of permutations of the bit-balanced block that must be considered in cryptanalysis reduce significantly. I don't know if/how this generalizes across different shuffling algorithms driven by any kind of PRNG that takes on M distinct states, but I have a hunch it's not easy (or even possible) to guarantee the shuffling algorithm in the DTC can actually produce every possible permutation of N bits in a bit-balanced block when driven by PRNGs. Here's a thought - A PRNG always takes on M distinct values before it repeats its output sequence. The PRNG can only start from S distinct seed values, so it can only generate S distinct pseudorandom sequences of those M distinct values. Any deterministic shuffling algorithm can only work on the S different pseudorandom sequences out of the PRNG. We need a deterministic shuffling algorithm that takes S different inputs and produces N! different outputs (permutations of N bits.) That's the only constraint we put upon the shuffling algorithm - it outputs a unique permutation of the N bits for each unique input pseudorandom sequence it gets. So a bijective shuffling makes the N! permutations of the N bit bit-balanced block if and only if S >= N!. That happens if and only if there are at least N! possible seed values for the PRNG. For a simple LCG we would then need a modulus M >= N! (which we already showed) and that results in huge moduli and cumbersome calculations for the LCG used with large block sizes N. So if the shuffling algorithm is deterministic and bijective - each unique pseudorandom sequence input to it results in a unique permutation of the N bits in a bit-balanced block - then we must drive the shuffling algorithm with a PRNG that makes at least N! unique output sequences. If S >= M (i.e. the number of distinct output values of the PRNG is at most equal to the number of input seed values) then we need a PRNG that takes on at least N! 
distinct values.

Are there PRNGs that fit this bill and are quick to calculate when N! is very large (for N = 512 bits, for example)?

John A. Malley
102667.2235@compuserve.com
Subject: Re: Dynamic Transposition Revisited (long) Date: Sun, 21 Jan 2001 23:27:42 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6b7029.8383379@news.io.com> References: <3A6B1A05.60FFA2BD@compuserve.com> Newsgroups: sci.crypt Lines: 20 On Sun, 21 Jan 2001 09:19:01 -0800, in <3A6B1A05.60FFA2BD@compuserve.com>, in sci.crypt "John A. Malley" <102667.2235@compuserve.com> wrote: >[...] >Are there PRNGs that fit this bill and >are quick to calculate when N! is very large (for N = 512 bits for >example?.) http://www.io.com/~ritter/ARTS/CRNG2ART.HTM#Sect4.10 http://www.io.com/~ritter/CLO2DESN.HTM#Components This is very well-known cryptographic technology, and a link to the Cloak2 design description was given in the "Revisited" article. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 26 Jan 2001 12:07:40 -0700
From: "Tony T. Warnock" <u091889@cic-mail.lanl.gov>
Message-ID: <3A71CAFC.71B10BC0@cic-mail.lanl.gov>
References: <3a67737e.15537635@news.io.com>
Newsgroups: sci.crypt
Lines: 20

I have a suggestion for the initial statistical-balance step (to reduce the later balance extensions.) XOR the input with a DeBruijn sequence. For example a simple method would be to XOR the sequence 0101010101.... Better would be 001100110011.... and even better 0001011100010111.... In the latter case, the XORing sequence is one byte long so one might improve things by rotating this sequence between bytes. Longer sequences are possible: 0000100110101111 could be used on pairs of bytes--with rotation.

A weirder method (that I developed for a hand-held calculator) works as follows: pick a magic prime P (9949 is good), set up an accumulator A=P/2 (4975 or 4974), then do the following for each input bit x(j):

	A = 2A + x(j)
	if (A < P) then
		y(j) = 0
	else
		y(j) = 1
		A = A - P
	endif

This works if 2 is a primitive root of P. (Even better if 4 is also a primitive root.) The whole thing works better in base 3 using P=487.
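A direct C rendering of the accumulator method above, with its inverse. The inverse runs the recurrence backwards, so the receiver needs the final accumulator value for the block (or must recompute it); that decoding arrangement is an assumption of this sketch, since the original description leaves it implicit.

    #include <stdint.h>
    #include <stddef.h>

    #define P 9949   /* the "magic prime"; 2 is a primitive root of P */

    /* Forward: y(j) is the overflow bit of A = 2A + x(j) reduced mod P. */
    void balance(const uint8_t *x, uint8_t *y, size_t n, uint32_t *A_out) {
        uint32_t A = P / 2;
        for (size_t j = 0; j < n; j++) {
            A = 2 * A + x[j];
            if (A < P) y[j] = 0;
            else { y[j] = 1; A -= P; }
        }
        *A_out = A;   /* needed by the inverse */
    }

    /* Inverse: undo each step from the end; x(j) is the low bit of the
       reconstructed pre-reduction accumulator. */
    void unbalance(const uint8_t *y, uint8_t *x, size_t n, uint32_t A_final) {
        uint32_t A = A_final;
        for (size_t j = n; j-- > 0; ) {
            uint32_t pre = A + (y[j] ? P : 0);   /* undo the reduction  */
            x[j] = (uint8_t)(pre & 1);           /* undo A = 2A + x(j)  */
            A = pre >> 1;
        }
    }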
Subject: Re: Dynamic Transposition Revisited (long) Date: Fri, 26 Jan 2001 22:25:04 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a71f93e.8334253@news.io.com> References: <3A71CAFC.71B10BC0@cic-mail.lanl.gov> Newsgroups: sci.crypt Lines: 23 On Fri, 26 Jan 2001 12:07:40 -0700, in <3A71CAFC.71B10BC0@cic-mail.lanl.gov>, in sci.crypt "Tony T. Warnock" <u091889@cic-mail.lanl.gov> wrote: >I have a suggestion for the initial statistical-balance step (to reduce >the later balance extensions.) XOR the input with a DeBruijn sequence. >For example a simple method would be to XOR the sequence 0101010101.... >Better would be 001100110011.... and even better 0001011100010111.... In >the latter case, the XORing sequence is one byte long so one might >improve things by rotating this sequence between bytes. Longer >sequences are possible 0000100110101111 could be used on pairs of >bytes--with rotation. If we are going to process plaintext with a known sequence, why should any particular balanced sequence be better than any other? It seems like there will always be some plaintext that will not be helped, or would in fact be made worse. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM

Subject: Fitting Dynamic Transposition into a Binary World
Date: Mon, 22 Jan 2001 01:11:58 GMT
From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard)
Message-ID: <3a6b85a7.10821996@news.powersurfr.com>
Newsgroups: sci.crypt
Lines: 105

Well, I noted that the chief problem with Dynamic Transposition is its bandwidth cost, and that has started me thinking of ways to avoid that cost.

As I noted, there are 252 balanced strings of 10 bits, as against 256 arbitrary strings of 8 bits. Because the number of balanced strings, as one increases the length of a string by 2 bits, increases by a factor that is just under 4, one has to go quite a way to find other good coincidences.

2^160 equals

  1461501637330902918203684832716283019655932542976

and the number of 164-bit balanced strings is

  1454706556296192477283016662986999417820887445240

so one way to use Dynamic Transposition on arbitrary binary sequences would be to have a coding for 160-bit arbitrary sequences into 164-bit balanced sequences. Some sequences of 160 bits wouldn't have an equivalent, and so would have to be converted to something else; either to shorter balanced sequences, which could also be enciphered by Dynamic Transposition, or to some other kind of object to be enciphered in another way.

A somewhat fancier possibility would allow at least some binary sequences to be enciphered directly. There are

  1532495540865888858358347027150309183618739122183602176

possible strings of 180 bits. Of these,

  91012248672832285155575331798825309656983959185522800

contain the same number of ones and zeroes, leaving

  1532495540865888858358347027150309183618739122183602176
-   91012248672832285155575331798825309656983959185522800
--------------------------------------------------------
  1441483292193056532202771695351483873961755162998079376

strings of 180 bits that do not contain the same number of ones and zeroes. And, as it happens, there are

  1440324277491745714862934407631385920577295594888710800

strings of 184 bits that contain the same number of ones and zeroes.

So at this point, we can do the following:

1) Take string of 180 bits to encrypt.

2) Is it balanced?
   YES: encipher directly using Dynamic Transposition.
   NO: continue

3) Attempt to code it to a balanced string 184 bits long. Does it have an equivalent in this form?
   YES: encipher equivalent using Dynamic Transposition.
   NO: continue

and then find some suitable coding again for the leftovers.

Of course, this kind of begs the question of how to devise an efficient coding for arbitrary strings into balanced strings. From arbitrary binary strings, one could use a simple numeration of balanced strings...

00000000 = 0000011111
00000001 = 0000101111
00000010 = 0000110111
00000011 = 0000111011
...
11111011 = 1111100000
11111100 ... coded to something else

and maybe there *might* be a simple algorithm to do this for strings too large to put in a table

but my second idea, coding unbalanced strings to balanced ones, seems less likely to be workable. Of course, if an algorithm _did_ exist, it would produce a nicely weird mathematical structure.

Since the ratio between successive numbers of balanced blocks is just slightly less than 4, one could perhaps use Dynamic Transposition 'all the way down' if one allowed leftovers to be coded as a balanced block plus a single symbol from an alphabet of 2 or 3 symbols - this single symbol to be enciphered somehow based on a function of the whole balanced block between its two transpositions.
But eventually we would get to areas where only a small number of leftovers remained, and the division into groups by whatever coding we used to make balanced blocks out of arbitrary blocks wouldn't be blurred. So we would still have to do a conventional encipherment of blocks before and after doing Dynamic Transposition in this case - that isn't Terry Ritter's fault, it's the fault of the debasement I've applied to his method to make it fit the binary world. Note that Rijndael happens to be, handily, available in a 160-bit block size! Ah, well, at least we are having some lucky mathematical coincidences here. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
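There is in fact a simple algorithm for the "plain enumeration" mapping above: the classic combinatorial number system. Walk the output positions left to right, and at each one compare the remaining index against the count of balanced completions that begin with a zero. A sketch follows, restricted to counts that fit in 64 bits (the 164-bit case above would need a bignum type); the demo reproduces the first entries of the 8-into-10-bit table.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t C(int n, int k) {              /* binomial coefficient */
        if (k < 0 || k > n) return 0;
        uint64_t r = 1;
        for (int i = 1; i <= k; i++) r = r * (uint64_t)(n - i + 1) / i;
        return r;
    }

    /* Index r in [0, C(n, n/2)) -> the r-th n-bit balanced string,
       in lexicographic order (matching the enumeration above). */
    static void unrank(uint64_t r, int n, char *out) {
        int ones = n / 2;
        for (int i = 0; i < n; i++) {
            uint64_t zeros_first = C(n - 1 - i, ones); /* completions after a '0' */
            if (r < zeros_first) out[i] = '0';
            else { out[i] = '1'; r -= zeros_first; ones--; }
        }
        out[n] = '\0';
    }

    /* The inverse mapping, balanced string -> index. */
    static uint64_t rank_of(const char *s, int n) {
        uint64_t r = 0;
        int ones = n / 2;
        for (int i = 0; i < n; i++)
            if (s[i] == '1') { r += C(n - 1 - i, ones); ones--; }
        return r;
    }

    int main(void) {
        char buf[11];
        for (uint64_t r = 0; r < 4; r++) {   /* 0000011111, 0000101111, ... */
            unrank(r, 10, buf);
            printf("%llu -> %s -> %llu\n", (unsigned long long)r, buf,
                   (unsigned long long)rank_of(buf, 10));
        }
        return 0;
    }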
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Mon, 22 Jan 2001 02:07:24 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6b953d.14812056@news.powersurfr.com> References: <3a6b85a7.10821996@news.powersurfr.com> Newsgroups: sci.crypt Lines: 29 On Mon, 22 Jan 2001 01:11:58 GMT, jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote, in part: >Note that >Rijndael happens to be, handily, available in a 160-bit block size! Of course, if we do the following: Rijndael (with fixed key) fixed mapping to balanced blocks of different sizes Dynamic Transposition fixed mapping to arbitrary 160-bit binary blocks Rijndael (with fixed key) the result will have the same fundamental weakness as PURPLE, in that there will be some blocks that only get enciphered to other blocks from a small group. While launching a codebook attack against a 160-bit block cipher seems a bit much, it still seems advisable to include a binary stream cipher component as well, perhaps before the first mapping and after the second mapping. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Mon, 22 Jan 2001 05:44:36 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6bc5dc.253519@news.powersurfr.com> References: <3a6b85a7.10821996@news.powersurfr.com> Newsgroups: sci.crypt Lines: 78 On Mon, 22 Jan 2001 01:11:58 GMT, jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote, in part: >Of course, this kind of begs the question of how to devise an >efficient coding for arbitrary strings into balanced strings. From >arbitrary binary strings, one could use a simple numeration of >balanced strings... >00000000 = 0000011111 >00000001 = 0000101111 >00000010 = 0000110111 >00000011 = 0000111011 >... >11111011 = 1111100000 >11111100 ... coded to something else >and maybe there *might* be a simple algorithm to do this for strings >too large to put in a table >but my second idea, coding unbalanced strings to balanced ones, seems >less likely to be workable. Of course, if an algorithm _did_ exist, it >would produce a nicely weird mathematical structure. I've started to think of an algorithm for the conversion that gives a mapping that eliminates the need for the second idea, because it keeps a close relationship between the original binary sequence and the balanced one to which it is mapped. Essentially, since my target code has the constraint of having the same number of ones and zeroes, why not use the number of ones and zeroes in the input string to indicate how to manipulate it to form the result string. For mapping an arbitrary 8-bit string to a balanced 10-bit string (with four codes left out): If the 8-bit string is balanced, map it to 01 + the original 8-bit string. If the 8-bit string has 3 zeroes and 5 ones, map it to 00 + the original 8-bit string. If the 8-bit string has 5 zeroes and 3 ones, map it to 11 + the original 8-bit string. Then, the code 100 would precede the representations of strings with more than five ones, and the code 101 would precede the representation of strings with more than five zeroes. But after the initial start, it seems to get complicated, and there's no obvious way to do better than a plain enumeration. Instead of going straight to the mapping from an arbitrary 160-bit string to a balanced 164-bit string, there is a good value at an intermediate point. 2^39 is 549,755,813,888 and there are 538,257,874,440 42-bit balanced strings The codes can start like this: 111 + 21 zeroes and 18 ones 110 + 20 zeroes and 19 ones 001 + 19 zeroes and 20 ones 000 + 18 zeroes and 21 ones but again, it's unclear what to do for the next step. Maybe there already is a mapping to balanced strings that has a simple and optimal algorithm, faster than the one for the mapping in binary numerical order. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Tue, 23 Jan 2001 00:54:24 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A6CD294.292D0D91@earthlink.net> References: <3a6bc5dc.253519@news.powersurfr.com> Newsgroups: sci.crypt Lines: 48

Perhaps if we worked with one bit at a time, rather than 8-bit bytes? Assume we have a 128-bit block.

R = raw bitstream
B = balanced bitstream
z, o = # zeros and ones seen so far (both start at zero for each block)

To create the balanced stream from the raw stream:

while( z < 64 && o < 64 ) {
    x = R.nextbit();
    B.appendbit(x);
    x ? ++o : ++z;
}
if( z < 64 ) B.appendbits(64-z, 0);
if( o < 64 ) B.appendbits(64-o, 1);

To create the raw stream from the balanced stream:

while( z < 64 && o < 64 ) {
    x = B.nextbit();
    R.appendbit(x);
    x ? ++o : ++z;
}
if( z < 64 ) B.skipbits(64-z);
if( o < 64 ) B.skipbits(64-o);

If the input bits are all 0s or all 1s, then for every 64 input bits there are 128 output bits (100% expansion). If the input bits are balanced, then for every 127 input bits there are 128 output bits (0.79% expansion). Larger blocks produce less expansion.

If the data has been compressed beforehand, then it is hopefully nearly balanced, and this method is nearly optimal. Since real-world data is not always compressed, and thus not always nearly balanced, we should XOR the raw stream with something to bring it closer to being bit-balanced. Some examples are:

x = R.nextbit() ^ (z+o & 1);
or
x = R.nextbit() ^ (z>o ? 1 : 0);
or
x = R.nextbit() ^ lfsr_nextbit();

Which of these will work best depends on the nature of the data.

-- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
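A runnable Python rendering of the same scheme (a minimal sketch under the stated assumptions: 128-bit blocks with 64 ones and 64 zeros, bits held as 0/1 integers; the rule for padding a final partial block is left open, as it is above):

    def balance(raw_bits):
        """Cut a raw bit stream into 128-bit balanced blocks, padding
        each block's short side as soon as one count reaches 64."""
        blocks, block, z, o = [], [], 0, 0
        for x in raw_bits:
            block.append(x)
            o, z = o + x, z + (1 - x)
            if z == 64 or o == 64:
                block += [0] * (64 - z) + [1] * (64 - o)  # pad the short side
                blocks.append(block)
                block, z, o = [], 0, 0
        return blocks          # any trailing partial block is dropped here

    def unbalance(blocks):
        """Invert balance(): copy bits until one count reaches 64,
        then skip the rest of the block (it is padding)."""
        raw = []
        for block in blocks:
            z = o = 0
            for x in block:
                if z == 64 or o == 64:
                    break
                raw.append(x)
                o, z = o + x, z + (1 - x)
        return raw

    bits = [1, 0] * 70                     # 140 perfectly balanced demo bits
    out = balance(bits)
    assert all(len(b) == 128 and sum(b) == 64 for b in out)
    assert unbalance(out) == bits[:127]    # 127 bits fit one block; 13 left over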
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Tue, 23 Jan 2001 01:12:00 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6cda10.19691773@news.powersurfr.com> References: <3A6CD294.292D0D91@earthlink.net> Newsgroups: sci.crypt Lines: 14 On Tue, 23 Jan 2001 00:54:24 GMT, Benjamin Goldberg <goldbb2@earthlink.net> wrote, in part: >Perhaps if we worked with one bit at a time, rather than 8 bit bytes? Well, I wasn't thinking of 8-bit bytes necessarily, but yes, I was thinking of converting a *fixed-length* input string of bits into a *fixed-length* output balanced string of bits, so that the amount of expansion was always constant, and therefore predictable. But as you note, that makes the algorithm more complicated. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Tue, 23 Jan 2001 01:46:36 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A6CE26E.9990BB77@earthlink.net> References: <3a6cda10.19691773@news.powersurfr.com> Newsgroups: sci.crypt Lines: 38

John Savard wrote:
>
> On Tue, 23 Jan 2001 00:54:24 GMT, Benjamin Goldberg
> <goldbb2@earthlink.net> wrote, in part:
>
> >Perhaps if we worked with one bit at a time, rather than 8 bit bytes?
>
> Well, I wasn't thinking of 8-bit bytes necessarily, but yes, I was
> thinking of converting a *fixed-length* input string of bits into a
> *fixed-length* output balanced string of bits, so that the amount of
> expansion was always constant, and therefore predictable.
>
> But as you note, that makes the algorithm more complicated.

Besides that, does conversion of a fixed-length input string to a fixed-length output string provide any kind of size guarantees? I know that mine, in the worst case scenario, produces 100% expansion of the data (i.e., doubling), but much less than this in most cases.

Also, just as important -- what about speed? Mine is not just simple, but it's fairly fast too, if bit operations are fast. How would you go about doing a fixed-length to fixed-length conversion for large blocks in practice? Certainly not a [ridiculously huge] lookup table.

PS, I can think of a fixed-length to fixed-length conversion which is as fast as my method, if not faster:

while(1)
    R.nextbit() ? B.appendbit(1), B.appendbit(0)
                : B.appendbit(0), B.appendbit(1) ;

However, I'm certain you don't like the expansion rate this gives :)

-- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Tue, 23 Jan 2001 02:00:36 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6ce4ed.22472812@news.powersurfr.com> References: <3A6CE26E.9990BB77@earthlink.net> Newsgroups: sci.crypt Lines: 13 On Tue, 23 Jan 2001 01:46:36 GMT, Benjamin Goldberg <goldbb2@earthlink.net> wrote, in part: >How would you go about doing a fixed-length to fixed-length conversion >for large blocks in practice? Certainly not a [ridiculously huge] >lookup table. Well, an earlier post in this thread shows what I was thinking of...and for an example in another domain, look at the page titled 'From 47 bits to 10 letters' on my web site. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Tue, 23 Jan 2001 17:29:25 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6dbf45.18798188@news.powersurfr.com> References: <3a6ce4ed.22472812@news.powersurfr.com> Newsgroups: sci.crypt Lines: 153

On Tue, 23 Jan 2001 02:00:36 GMT, jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote, in part:

>On Tue, 23 Jan 2001 01:46:36 GMT, Benjamin Goldberg
><goldbb2@earthlink.net> wrote, in part:
>
>>How would you go about doing a fixed-length to fixed-length conversion
>>for large blocks in practice? Certainly not a [ridiculously huge]
>>lookup table.
>
>Well, an earlier post in this thread shows what I was thinking
>of...and for an example in another domain, look at the page titled
>'From 47 bits to 10 letters' on my web site.

OK: here we go - well, partly. Maybe I'll find a better approach than this, but I think this is still illustrative.

As I noted in another post: one can map 37 arbitrary bits (137,438,953,472 possibilities) to 40 balanced bits (137,846,528,820 possibilities) for only an 8.11% increase in bandwidth cost.

A 40-bit string can be split into 5 bytes. For a byte, there are:

1 possible arrangement, each, of 8 zeroes or 8 ones.
8 possible arrangements, each, of 7 zeroes and one one, or one zero and 7 ones.
28 possible arrangements, each, of 6 zeroes and 2 ones, or 2 zeroes and 6 ones.
56 possible arrangements, each, of 5 zeroes and 3 ones, or 3 zeroes and 5 ones.
70 possible arrangements of 4 zeroes and 4 ones.

Thus: our 37-bit string is a number between 0 and 137,438,953,471.

For numbers between 0 and 1,680,699,999, let the bit string be represented by five consecutive balanced bytes, since 1,680,700,000 is 70 to the fifth power.

A balanced string of 40 bits can also be made up of one byte with 3 ones, one byte with 5 ones, and three balanced bytes. There are 20 different ways to arrange these three kinds of bytes, and for each such arrangement, 56 * 70 * 70 * 70 * 56, or 1,075,648,000, different bit strings are possible. Thus, the 21,512,960,000 possible values of our input 37-bit string from 1,680,700,000 to 23,193,659,999 are assigned to strings of this type.

The next possibility: (3)(3)(4)(5)(5) - two bytes with three ones, one byte that is balanced, two bytes with five ones. Here, for each arrangement, there are 56*56*70*56*56 or 688,414,720 possibilities, and there are 30 possible arrangements of the three kinds of byte. Thus, 20,652,441,600 possible values may be assigned to patterns of this type, representing the numbers from 23,193,660,000 to 43,846,101,599.

Next, we will consider the possibility (2)(4)(4)(4)(6). Again 20 arrangements, and each arrangement has 28*70*70*70*28, or 268,912,000, possibilities. So we can account for 5,378,240,000 possible values, from 43,846,101,600 to 49,224,341,599.

Now we can start to consider asymmetric possibilities. (3)(3)(4)(4)(6) and (2)(4)(4)(5)(5) each have 30 possible arrangements, and 56*56*70*70*28, or 430,259,200, possibilities each. So between them, these two possibilities account for 25,815,552,000 values, from 49,224,341,600 to 75,039,893,599.

Well, we've passed the halfway point!

Next in order of consideration will be the pattern (2)(3)(4)(5)(6). This has 120 possible arrangements, and each arrangement has 28*56*70*56*28 or 172,103,680 possibilities. So we now account for 20,652,441,600 possible values, from 75,039,893,600 to 95,692,335,199.
Then, the more extreme pattern (2)(2)(4)(6)(6) has 30 arrangements, each with 28*28*70*28*28 or 43,025,920 possibilities, accounting for 1,290,777,600 values, from 95,692,335,200 to 96,983,112,799.

Again, we will consider a pair of asymmetric patterns, in this case (2)(3)(3)(6)(6) and (2)(2)(5)(5)(6). Each arrangement of these patterns has 28*56*56*28*28 or 68,841,472 possible values, and each of these patterns has 30 arrangements, for a total of 4,130,488,320 values, which I can use to represent the arbitrary 37-bit patterns with values from 96,983,112,800 to 101,113,601,119.

More profoundly asymmetric are (3)(3)(3)(5)(6) and (2)(3)(5)(5)(5). Each of these patterns has 20 arrangements, and each arrangement accounts for 56*56*56*56*28, or 275,365,888, possible values, so we can take care of 11,014,635,520 values, from 101,113,601,120 to 112,128,236,639.

Well, it looks like I will have to keep scraping further down in this barrel, because I have 137,438,953,472 values I need to take care of, so there are still over 25 billion values to account for.

So I will begin with (1)(4)(4)(4)(7). 20 arrangements, with each arrangement having 8*70*70*70*8 or 21,952,000 possibilities. This accounts for 439,040,000 values, from 112,128,236,640 to 112,567,276,639.

Next is (1)(3)(4)(5)(7). 120 arrangements, with each arrangement having 8*56*70*56*8 or 14,049,280 possibilities. This will account for 1,685,913,600 values, from 112,567,276,640 to 114,253,190,239.

Next might be (1)(2)(4)(6)(7). 120 arrangements, and each arrangement has 8*28*70*28*8 or 3,512,320 possibilities. This will account for 421,478,400 values, from 114,253,190,240 to 114,674,668,639.

Perhaps once we turn again to asymmetric arrangements we will make faster headway. Let us consider (1)(4)(4)(5)(6) and (2)(3)(4)(4)(7). Each of these two patterns has 60 arrangements, and each arrangement has 8*70*70*56*28 or 61,465,600 possible values. So, we can account for 7,375,872,000 possible values, from 114,674,668,640 to 122,050,540,639.

Now, let us consider (1)(3)(4)(6)(6) and (2)(2)(4)(5)(7). Again, each of these patterns has 60 arrangements. This time, the arrangements have 8*56*70*28*28 or 24,586,240 possible values, so we are able to account for 2,950,348,800 possible values, from 122,050,540,640 to 125,000,889,439.

We still have some more possibilities in this vein. (1)(3)(5)(5)(6) and (2)(3)(3)(5)(7) also each have 60 arrangements, and each arrangement has 8*56*56*56*28 or 39,337,984 possibilities. So we can account for 4,720,558,080 values, from 125,000,889,440 to 129,721,447,519.

And then there is (1)(4)(5)(5)(5) and (3)(3)(3)(4)(7). Each of these has 20 arrangements, and each arrangement has 8*70*56*56*56 or 98,344,960 possibilities. Thus, we can account for 3,933,798,400 values, from 129,721,447,520 to 133,655,245,919.

Now let us consider (1)(2)(5)(6)(6) and (2)(2)(3)(6)(7). Each of these has 60 arrangements, and each arrangement has 8*28*56*28*28 or 9,834,496 possible values. So we can account for 1,180,139,520 values, from 133,655,245,920 to 134,835,385,439.

Still about 2.6 billion to go, and I seem to have run out of the good possibilities with only one byte that has only 8 possibilities.

Then there is (1)(2)(5)(5)(7) and (1)(3)(3)(6)(7). Each one has 60 arrangements, and each arrangement has 8*28*56*56*8 or 5,619,712 possible values. This allows us to account for 674,365,440 values, from 134,835,385,440 to 135,509,750,879.

I think I'll give up for now...

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
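Bookkeeping of this kind is easy to check mechanically. The following sketch (hypothetical code, not from the thread) sums arrangements times per-arrangement counts over every five-byte pattern whose one-counts total 20, and confirms the grand total is exactly C(40,20):

    from math import comb, factorial
    from itertools import combinations_with_replacement
    from collections import Counter

    total = 0
    for pattern in combinations_with_replacement(range(9), 5):
        if sum(pattern) != 20:
            continue                       # five bytes must hold 20 ones total
        orderings = factorial(5)
        for repeats in Counter(pattern).values():
            orderings //= factorial(repeats)   # distinct orderings of the multiset
        per_ordering = 1
        for k in pattern:
            per_ordering *= comb(8, k)         # ways to place k ones in one byte
        total += orderings * per_ordering

    assert total == comb(40, 20) == 137_846_528_820
    print(total - 2**37)    # 407,575,348: the headroom over the 37-bit inputs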
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: 26 Jan 2001 13:49:54 GMT From: rpw3@rigden.engr.sgi.com (Rob Warnock) Message-ID: <94rva2$4c5bj$1@fido.engr.sgi.com> References: <3a6bc5dc.253519@news.powersurfr.com> Newsgroups: sci.crypt Lines: 21

John Savard <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote:
+---------------
| Maybe there already is a mapping to balanced strings that has a simple
| and optimal algorithm, faster than the one for the mapping in binary
| numerical order.
+---------------

See my recent posting <URL:news:94rtfd$4b7p4$1@fido.engr.sgi.com> in the original ST thread. The key notion, extended from the HIPPI-Serial Standard's 21b/24b code, is "partitioned polarity inversion" (to coin a phrase). Summary: Exact bit balancing can be very fast to compute, and asymptotically cheap in bandwidth: ~1% for 256-byte blocks.

-Rob

-----
Rob Warnock, 31-2-510 rpw3@sgi.com
SGI Network Engineering http://reality.sgi.com/rpw3/
1600 Amphitheatre Pkwy. Phone: 650-933-1673
Mountain View, CA 94043 PP-ASEL-IA
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Fri, 26 Jan 2001 16:06:14 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a719f02.9062622@news.powersurfr.com> References: <94rva2$4c5bj$1@fido.engr.sgi.com> Newsgroups: sci.crypt Lines: 43

On 26 Jan 2001 13:49:54 GMT, rpw3@rigden.engr.sgi.com (Rob Warnock) wrote, in part:

>John Savard <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote:
>+---------------
>| Maybe there already is a mapping to balanced strings that has a simple
>| and optimal algorithm, faster than the one for the mapping in binary
>| numerical order.
>+---------------
>See my recent posting <URL:news:94rtfd$4b7p4$1@fido.engr.sgi.com>
>in the original ST thread. The key notion, extended from the HIPPI-Serial
>Standard's 21b/24b code, is "partitioned polarity inversion" (to coin
>a phrase). Summary: Exact bit balancing can be very fast to compute,
>and asymptotically cheap in bandwidth: ~1% for 256-byte blocks.

That certainly will serve where Terry Ritter's algorithm would serve. My purposes, however, are different. I want to incorporate the Dynamic Transposition technique in combination with other conventional encipherment techniques. To do this, I need exactly zero bandwidth cost.

That is, I need to do the following: for binary blocks of length N, I need to convert some subset of the 2^N possible values to balanced blocks of length M, such that I have used all possible values of balanced blocks of length M.

This way, although some blocks get skipped and aren't encrypted (which can be taken care of in other steps), after I perform a Dynamic Transposition which returns balanced blocks of length M, I can convert them _back_ and resume conventional binary encryption.

Two Dynamic Transposition layers with conversions which act on opposite ends of the spectrum of binary values can ensure every block goes through DT at least once.

To do that, I need an encoding that is perfect for a given fixed block length; an asymptotic result doesn't help me.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: 24 Jan 2001 05:41:18 GMT From: ka@socrates.hr.lucent.com.no_spam (Kenneth Almquist) Message-ID: <94lptu$h0h@nntpa.cb.lucent.com> References: <3a6b85a7.10821996@news.powersurfr.com> Newsgroups: sci.crypt Lines: 97

jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote:

> 2^160 equals
> 1461501637330902918203684832716283019655932542976
>
> and the number of 164-bit balanced strings is
> 1454706556296192477283016662986999417820887445240
>
> so one way to use Dynamic Transposition on arbitrary binary sequences
> would be to have a coding for 160-bit arbitrary sequences into 164-bit
> balanced sequences.
>
> Some sequences of 160 bits wouldn't have an equivalent, and so would
> have to be converted to something else; either to shorter balanced
> sequences, which could also be enciphered by Dynamic Transposition, or
> to some other kind of object to be enciphered in another way.

This means that your encoding is effectively variable length. For each 160-bit sequence, you need to transmit an indication of whether it has been encoded as a 164-bit balanced sequence, or some other way. This wastes communication bandwidth (though not a lot) and also complicates the encoding because the indication cannot be transmitted in the clear (or you leak information). Might I suggest coding 158-bit arbitrary sequences as 162-bit balanced sequences? The number of 162-bit balanced sequences is

365907784099042279561985786395502921046971688680

while 2^158 is slightly less:

365375409332725729550921208179070754913983135744

This would create a fixed overhead of four additional bits for every 158-bit block, which would add a communication overhead of about 2.53%. If you decided to use a block size of 128 bits (in order to interface to standard 32 and 64 bit hardware more easily), adding four bits per block amounts to a communication overhead of 3.125%, which is still quite reasonable.

> Of course, this kind of begs the question of how to devise an
> efficient coding for arbitrary strings into balanced strings. From
> arbitrary binary strings, one could use a simple numeration of
> balanced strings...
>
> 00000000 = 0000011111
> 00000001 = 0000101111
> 00000010 = 0000110111
> 00000011 = 0000111011
> ...
> 11111011 = 1111100000
> 11111100 ... coded to something else
>
> and maybe there *might* be a simple algorithm to do this for strings
> too large to put in a table

Finding an algorithm to do this is an interesting challenge. The arbitrary strings can be regarded as binary numbers. We define a mapping between balanced strings and numbers by listing the balanced strings in lexical order and numbering them starting from zero, as you have done above.

The first thing to note is that we can compute the number of balanced strings with a given prefix relatively efficiently. For example, let's say that our balanced strings are 162 bits long, meaning they contain 81 "1" bits and 81 "0" bits. If we want to know how many strings have the prefix "110", we count the number of zeros and ones in the prefix. There are two "1" bits and one "0" bit. This means that the remainder of the string must have 79 "1" bits and 80 "0" bits. The number of possibilities is (79+80)! / (79! * 80!).

This allows us to process a balanced string from left to right, keeping track of the number of the first (or last) balanced string beginning with the bits we have seen so far. This minimum starts out as zero. As we scan the string, if the next bit is zero, then the minimum is unchanged.
If we encounter a one, then we know that all the balanced strings which have a zero where the string we are scanning has a one must lexically precede the string we are scanning. Therefore, we add the number of such strings to the minimum. To compute the number of a balanced string, we perform the operation described in the preceding paragraph, counting the number of "0" and "1" bits as we go along. When we have seen 81 of either type of bit, then there is only one possible value for the remaining bits. (They must be all zeros or all ones.) At that point there is only one balanced string beginning with the bits we have seen so far, and we know what the number of that balanced string is, so we are done. To generate the balanced string corresponding to a number, we modify the algorithm to generate bits rather than reading them. At each step, we first try outputting a "1" bit. If that would make the minimum exceed the number whose bit string we are trying to construct, (or if we have already output 81 "1" bits) we output a "0" instead. These algorithms can be executed moderately efficiently. The combinatorial calculation used to determine the number of balanced strings with a given prefix can be precomputed and stored in a table. At the cost of using larger tables, we could make the decoding algorithm process multiple bits of the balanced string at a time. However, there is no obvious way to generate a balanced string without doing one bit at a time. Kenneth Almquist
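The rank/unrank pair described above is short in code. A minimal Python sketch (hypothetical, with bit strings as lists of 0/1 integers; math.comb conveniently returns zero when the remaining ones cannot fit, which forces the trailing all-ones run):

    from math import comb

    def rank(bits, ones_total):
        """Lexicographic index of a string with ones_total one-bits."""
        r, o, n = 0, 0, len(bits)
        for i, b in enumerate(bits):
            if b:
                # every string with a 0 here and the same prefix comes first
                r += comb(n - i - 1, ones_total - o)
                o += 1
        return r

    def unrank(r, n, ones_total):
        """Inverse of rank(): rebuild the string from its index."""
        bits, o = [], 0
        for i in range(n):
            c = comb(n - i - 1, ones_total - o)  # completions if we emit a 0
            if r < c:
                bits.append(0)
            else:
                bits.append(1)
                r -= c
                o += 1
        return bits

    # Round-trip, matching the small table earlier in the thread:
    assert unrank(0, 10, 5) == [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]   # 0000011111
    assert all(rank(unrank(r, 10, 5), 5) == r for r in range(252))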
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Wed, 24 Jan 2001 09:45:20 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6ea2e2.16381955@news.powersurfr.com> References: <94lptu$h0h@nntpa.cb.lucent.com> Newsgroups: sci.crypt Lines: 34

On 24 Jan 2001 05:41:18 GMT, ka@socrates.hr.lucent.com.no_spam (Kenneth Almquist) wrote, in part:

>This means that your encoding is effectively variable length. For each
>160-bit sequence, you need to transmit an indication of whether it
>has been encoded as a 164-bit balanced sequence, or some other way.
>This wastes communication bandwidth (though not a lot) and also
>complicates the encoding because the indication cannot be transmitted
>in the clear (or you leak information). Might I suggest coding 158-bit
>arbitrary sequences as 162-bit balanced sequences?

Yes, I'm aware of that; however, you have mistaken my intention.

The reason I went two bits over the optimal point for total encoding of binary sequences is that my aim was a perfect encoding.

Essentially, what I intended to do overall was this:

Take the 160-bit sequences, and after converting most of them to 164-bit balanced sequences, and enciphering the 164-bit balanced sequences using Dynamic Transposition, convert them *back* to 160-bit sequences.

So my overall cipher works like this: 160-bit binary sequences in, 160-bit binary sequences out.

I supplement Dynamic Transposition with other encryption on binary sequences, and perform two Dynamic Transposition steps with a different choice of which 160-bit binary sequences are chosen as the left-over ones, to avoid weakness from not encrypting a small group of sequences.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Wed, 24 Jan 2001 17:26:10 GMT From: Splaat23 <splaat23@my-deja.com> Message-ID: <94n373$e7b$1@nnrp1.deja.com> References: <3a6ea2e2.16381955@news.powersurfr.com> Newsgroups: sci.crypt Lines: 57

I assume the problem you would have with the 158->162 conversion is that the conversion back would not be guaranteed, because there are more 162-bit balanced strings than there are 158-bit binary strings. However, if you intend to implement some sort of dynamic transposition in between the conversions, you can simply, at the end of the transposition, check to see if the result has a binary version. The likelihood of getting an invalid block is not high, but it exists. If the block isn't a "valid" balanced block, then simply repeat the transposition (assuming it is not its own inverse) until you get a valid 162-bit balanced string. This (as far as I can tell, IANA good cryptographer) should not compromise security.

- Andrew

In article <3a6ea2e2.16381955@news.powersurfr.com>, jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote:
> On 24 Jan 2001 05:41:18 GMT, ka@socrates.hr.lucent.com.no_spam
> (Kenneth Almquist) wrote, in part:
>
> >This means that your encoding is effectively variable length. For each
> >160-bit sequence, you need to transmit an indication of whether it
> >has been encoded as a 164-bit balanced sequence, or some other way.
> >This wastes communication bandwidth (though not a lot) and also
> >complicates the encoding because the indication cannot be transmitted
> >in the clear (or you leak information). Might I suggest coding 158-bit
> >arbitrary sequences as 162-bit balanced sequences?
>
> Yes, I'm aware of that; however, you have mistaken my intention.
>
> The reason I went two bits over the optimal point for total encoding
> of binary sequences is that my aim was a perfect encoding.
>
> Essentially, what I intended to do overall was this:
>
> Take the 160-bit sequences, and after converting most of them to
> 164-bit balanced sequences, and enciphering the 164-bit balanced
> sequences using Dynamic Transposition, convert them *back* to 160-bit
> sequences.
>
> So my overall cipher works like this: 160-bit binary sequences in,
> 160-bit binary sequences out.
>
> I supplement Dynamic Transposition with other encryption on binary
> sequences, and perform two Dynamic Transposition steps with a
> different choice of which 160-bit binary sequences are chosen as the
> left-over ones, to avoid weakness from not encrypting a small group of
> sequences.
>
> John Savard
> http://home.ecn.ab.ca/~jsavard/crypto.htm

Sent via Deja.com
http://www.deja.com/
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Wed, 24 Jan 2001 19:15:38 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6f2967.178163@news.powersurfr.com> References: <94n373$e7b$1@nnrp1.deja.com> Newsgroups: sci.crypt Lines: 20

On Wed, 24 Jan 2001 17:26:10 GMT, Splaat23 <splaat23@my-deja.com> wrote, in part:

>If the
>block isn't a "valid" balanced block, then simply repeat the
>transposition (assuming it is not its own inverse) until you get a
>valid 162-bit balanced string. This (as far as I can tell, IANA good
>cryptographer) should not compromise security.

Yes, I'm aware of this trick, and I remember thinking of it when I was 10 years old as a method of using the 27-letter alphabet to perform trigraphic encipherment of texts strictly confined to the 26-letter alphabet.

Since it does require extra time, and especially a _variable_ amount of time, however, it is the sort of thing I prefer to avoid if possible.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Wed, 24 Jan 2001 20:08:12 GMT From: Splaat23 <splaat23@my-deja.com> Message-ID: <94ncn2$njc$1@nnrp1.deja.com> References: <3a6f2967.178163@news.powersurfr.com> Newsgroups: sci.crypt Lines: 42

This "trick" greatly simplifies your idea, and the time cost is small. You can calculate the average number of transpositions per block; for the 158-to-162 coding it'll be something like 1.0015, with the few blocks that need to be doubly transposed. And, if your transposition doesn't have a cyclic effect on blocks that are not valid, you can all but guarantee an upper limit on time. In fact, designed properly, your transposition/balancing scheme can probably guarantee <= 2 repetitions. Timing attacks can be foiled by transposing ALL blocks twice and using the first one that is valid.

IMHO, I think this is a better implementation than using variable-length strings. Variable length vs. variable time (but the time can be fixed): I think variable length is worse in most cases.

- Andrew

In article <3a6f2967.178163@news.powersurfr.com>, jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote:
> On Wed, 24 Jan 2001 17:26:10 GMT, Splaat23 <splaat23@my-deja.com>
> wrote, in part:
>
> >If the
> >block isn't a "valid" balanced block, then simply repeat the
> >transposition (assuming it is not its own inverse) until you get a
> >valid 162-bit balanced string. This (as far as I can tell, IANA good
> >cryptographer) should not compromise security.
>
> Yes, I'm aware of this trick, and I remember thinking of it when I was
> 10 years old as a method of using the 27-letter alphabet to perform
> trigraphic encipherment of texts strictly confined to the 26-letter
> alphabet.
>
> Since it does require extra time, and especially a _variable_ amount
> of time, however, it is the sort of thing I prefer to avoid if
> possible.
>
> John Savard
> http://home.ecn.ab.ca/~jsavard/crypto.htm

Sent via Deja.com
http://www.deja.com/
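The repeat-until-valid trick is easy to state precisely in code. A sketch (hypothetical; `keyed_transposition` stands in for the real shuffling generator, and `is_valid` for "has a 158-bit preimage"). Because enciphering starts from a valid block and stops at the first valid block it reaches, deciphering can walk the inverse permutation back and stop at the first valid block it meets:

    import random

    def keyed_transposition(n, key):
        """Stand-in for the keyed bit permutation (hypothetical)."""
        order = list(range(n))
        random.Random(key).shuffle(order)
        def forward(bits):
            return [bits[i] for i in order]
        def inverse(bits):
            out = [None] * n
            for j, i in enumerate(order):
                out[i] = bits[j]
            return out
        return forward, inverse

    def encipher(valid_block, is_valid, forward):
        out = forward(valid_block)
        while not is_valid(out):     # landed outside the decodable subset:
            out = forward(out)       # apply the same permutation again
        return out

    def decipher(block, is_valid, inverse):
        out = inverse(block)
        while not is_valid(out):     # walk back to the first valid block
            out = inverse(out)
        return out

Termination is guaranteed: the permutation's cycle through the starting block contains at least one valid block, namely the starting block itself.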
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: 24 Jan 2001 20:07:16 GMT From: ka@socrates.hr.lucent.com.no_spam (Kenneth Almquist) Message-ID: <94nclk$mu3@nntpa.cb.lucent.com> References: <3a6ea2e2.16381955@news.powersurfr.com> Newsgroups: sci.crypt Lines: 71

jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote:
> On 24 Jan 2001 05:41:18 GMT, ka@socrates.hr.lucent.com.no_spam
> (Kenneth Almquist) wrote, in part:
>
>> This means that your encoding is effectively variable length. For each
>> 160-bit sequence, you need to transmit an indication of whether it
>> has been encoded as a 164-bit balanced sequence, or some other way.
>> This wastes communication bandwidth (though not a lot) and also
>> complicates the encoding because the indication cannot be transmitted
>> in the clear (or you leak information). Might I suggest coding 158-bit
>> arbitrary sequences as 162-bit balanced sequences?
>
> Yes, I'm aware of that; however, you have mistaken my intention.
>
> The reason I went two bits over the optimal point for total encoding
> of binary sequences is that my aim was a perfect encoding.

OK, I didn't understand what you were trying to do.

Why do you want to construct a perfect encoding? If the input values are uniformly distributed, any encoding which maps fixed length blocks to variable sized blocks will, on average, increase the size of the text. So the ciphertext will be larger than the plaintext (on average), although the difference will be small.

The other issue with the encoding is whether it enables ciphertext-only attacks. If we use a mapping from plaintext to balanced blocks which cannot generate all possible balanced blocks, then a ciphertext-only attack could take advantage of this. However, such an attack would be much more difficult than a known-plaintext attack, since it provides the attacker with much less information.

> Essentially, what I intended to do overall was this:
>
> Take the 160-bit sequences, and after converting most of them to
> 164-bit balanced sequences, and enciphering the 164-bit balanced
> sequences using Dynamic Transposition, convert them *back* to 160-bit
> sequences.
>
> So my overall cipher works like this: 160-bit binary sequences in,
> 160-bit binary sequences out.
>
> I supplement Dynamic Transposition with other encryption on binary
> sequences, and perform two Dynamic Transposition steps with a
> different choice of which 160-bit binary sequences are chosen as the
> left-over ones, to avoid weakness from not encrypting a small group of
> sequences.

You could apply the transpositions to a fixed size block. Treat each possible balanced string as a symbol, and generate a Huffman encoding based on the assumption that each symbol is equally probable. Thus most 164-bit balanced strings would be represented as 160 bit strings, but a few (about 0.465%) would be represented as 159 bit strings. The worst case expansion is 160/159, or 0.629%. The average expansion, if I calculate it correctly, is about 0.0000135%. (These calculations ignore any partial block at the end of the plaintext.)

If this expansion ratio seems excessive, we can do better. Call the length of the message to be encrypted L. Determine the smallest transposition block size containing more than 2^L balanced strings. Encrypt the entire message using that transposition block size. Convert the result of the transposition to a binary string of length L+8.
This is not optimal, but in most cases there is no point in doing better since we would have to pad the encrypted message to be a multiple of 8 bits long before transmitting or storing it anyway. It is better than using a block cipher in CBC mode, which increases the size of the message by the block size of the cipher. Kenneth Almquist
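The block-size search described in the preceding construction is a one-liner to verify (a sketch, not code from the thread):

    from math import comb

    def smallest_balanced_block(L):
        """Smallest even n whose balanced strings outnumber 2^L."""
        n = 2
        while comb(n, n // 2) <= 2 ** L:
            n += 2
        return n

    print(smallest_balanced_block(158))  # 162, matching the figures above
    print(smallest_balanced_block(160))  # 166: 164 bits fall just short of 2^160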
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Wed, 24 Jan 2001 20:56:25 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6f4168.11505536@news.io.com> References: <94lptu$h0h@nntpa.cb.lucent.com> Newsgroups: sci.crypt Lines: 25 On 24 Jan 2001 05:41:18 GMT, in <94lptu$h0h@nntpa.cb.lucent.com>, in sci.crypt ka@socrates.hr.lucent.com.no_spam (Kenneth Almquist) wrote: >[...] >These algorithms can be executed moderately efficiently. The >combinatorial calculation used to determine the number of balanced >strings with a given prefix can be precomputed and stored in a >table. At the cost of using larger tables, we could make the >decoding algorithm process multiple bits of the balanced string >at a time. However, there is no obvious way to generate a balanced >string without doing one bit at a time. I simply do not understand where you guys are going with this. Is there some reason why you could not use the algorithm in my "revisited" article? It does bit-balance arbitrary data into a fixed-size block, is not limited in block size (and thus gains efficiency with large blocks), and decoding is trivial. Also, it does function byte-by-byte, not bit-by-bit. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Thu, 25 Jan 2001 00:10:44 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6f6e32.17791302@news.powersurfr.com> References: <3a6f4168.11505536@news.io.com> Newsgroups: sci.crypt Lines: 24 On Wed, 24 Jan 2001 20:56:25 GMT, ritter@io.com (Terry Ritter) wrote, in part: >Is there some reason why you could not use the algorithm in my >"revisited" article? I'm sure that I'm the only one who really finds that method inadequate for his purposes. As I understand it, your algorithm is: Given a block size of N bytes: take N-1 bytes of data. If that data has 7 or fewer excess 1s or 0s, add an appropriate last byte. If the excess is more than that, use only the first N-2 bytes, and rectify the excess in the last two bytes. I suppose you could use alternating all ones and all zeroes bytes in the case where the excess is all in the last byte. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Thu, 25 Jan 2001 06:23:15 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a6fc64d.5730544@news.io.com> References: <3a6f6e32.17791302@news.powersurfr.com> Newsgroups: sci.crypt Lines: 35 On Thu, 25 Jan 2001 00:10:44 GMT, in <3a6f6e32.17791302@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Wed, 24 Jan 2001 20:56:25 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>Is there some reason why you could not use the algorithm in my >>"revisited" article? > >I'm sure that I'm the only one who really finds that method inadequate >for his purposes. > >As I understand it, your algorithm is: > >Given a block size of N bytes: > >take N-1 bytes of data. If that data has 7 or fewer excess 1s or 0s, >add an appropriate last byte. > >If the excess is more than that, use only the first N-2 bytes, and >rectify the excess in the last two bytes. > >I suppose you could use alternating all ones and all zeroes bytes in >the case where the excess is all in the last byte. Since the description in my "Revisited" article is not working, and since -- for some reason -- I am obviously not getting through, perhaps someone else could help out here. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Thu, 25 Jan 2001 15:16:28 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a703ee5.1073355@news.powersurfr.com> References: <3a6fc64d.5730544@news.io.com> Newsgroups: sci.crypt Lines: 80 On Thu, 25 Jan 2001 06:23:15 GMT, ritter@io.com (Terry Ritter) wrote, in part: >Since the description in my "Revisited" article is not working, and >since -- for some reason -- I am obviously not getting through, >perhaps someone else could help out here. Given your description: :Some of the analytic and strength advantages of Dynamic :Transposition depend upon having the same number of 1's :and 0's in each block, which is called "bit-balance." :Exact bit-balance can be achieved by accumulating data to :a block byte-by-byte, only as long as the block can be :balanced by adding appropriate bits at the end. Then the :block is ciphered, and another block filled. :We will always add at least one byte of "balance data," :at the end of a block, and only the first balance byte, :immediately after the data, will contain both 1's and 0's. :We can thus transparently remove the balance data by :stepping from the end of the block, past the first byte :containing both 1's and 0's. and also : It does bit-balance arbitrary data into a : fixed-size block, is not limited in block size (and thus gains : efficiency with large blocks), and decoding is trivial. I thought that I accurately described your algorithm, although I worked backwards rather than forwards in determining when to complete a block, and I explicitly noted one pathological case. A block balanced by your algorithm will always consist of one byte of balancing data containing both 1s and 0s, followed by zero or more bytes of balancing data containing only 1s or only 0s. Thus, an N byte block can be balanced, when containing N-k bytes of input data, if the excess of 1s or 0s in the input data is less than or equal to 7 + 8*(k-1); that is, N-1 bytes can be balanced with an excess of 7 1s or 7 0s or anything in between, N-2 bytes can be balanced with an excess of 15 1s or 15 0s or anything in between. Or can they? It might happen that I have N-2 bytes of balanced data followed by one byte of all 1s. Let's say that each of the first N-2 bytes has 4 1 bits and 4 0 bits, so that the first N-2 bytes are perfectly balanced no matter where I truncate them, for simplicity in what follows. If I try to balance that by your algorithm, I find I have eight excess 1s, so I can't put N-1 bytes of data in my N byte block. How about N-2? Now, my data is *too* balanced. My first balancing byte cannot be all 1s or all 0s, or it won't decode. But my second balancing byte must be all 1s or all 0s. So those two bytes together can't balance! So I have to balance the block by putting in N-3 bytes of balanced bits, followed by one byte with 4 0s and 4 1s, followed by an all-zeroes byte and an all-ones byte (in either order). Thus, although your description of your algorithm didn't include backtracking, there is in fact a slight need for it. A modification of your algorithm, making the single balancing byte of the form 01111111, 00111111, ... 00000001, and using the bit pattern 11110000 *along with* 00000000 and 11111111 as an allowed value for the second and subsequent balancing bytes, would eliminate this complexity. I also note that your original Dynamic Transposition article is from 1991, so it may well be the first proposal ever openly made for any form of polyalphabetic block encipherment. 
Thus, even if my claim that one can avoid the overhead of bit-balancing by doing something just as good with substitution is true, that has less impact on the significance of Dynamic Transposition than I had thought. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Thu, 25 Jan 2001 22:28:38 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a70a862.16135630@news.io.com> References: <3a703ee5.1073355@news.powersurfr.com> Newsgroups: sci.crypt Lines: 151

On Thu, 25 Jan 2001 15:16:28 GMT, in <3a703ee5.1073355@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote:

>On Thu, 25 Jan 2001 06:23:15 GMT, ritter@io.com (Terry Ritter) wrote,
>in part:
>
>>Since the description in my "Revisited" article is not working, and
>>since -- for some reason -- I am obviously not getting through,
>>perhaps someone else could help out here.
>
>Given your description:
>
>:Some of the analytic and strength advantages of Dynamic
>:Transposition depend upon having the same number of 1's
>:and 0's in each block, which is called "bit-balance."
>:Exact bit-balance can be achieved by accumulating data to
>:a block byte-by-byte, only as long as the block can be
>:balanced by adding appropriate bits at the end. Then the
>:block is ciphered, and another block filled.
>
>:We will always add at least one byte of "balance data,"
>:at the end of a block, and only the first balance byte,
>:immediately after the data, will contain both 1's and 0's.
>:We can thus transparently remove the balance data by
>:stepping from the end of the block, past the first byte
>:containing both 1's and 0's.
>
>and also
>
>: It does bit-balance arbitrary data into a
>: fixed-size block, is not limited in block size (and thus gains
>: efficiency with large blocks), and decoding is trivial.
>
>I thought that I accurately described your algorithm, although I
>worked backwards rather than forwards in determining when to complete
>a block, and I explicitly noted one pathological case.
>
>A block balanced by your algorithm will always consist of one byte of
>balancing data containing both 1s and 0s, followed by zero or more
>bytes of balancing data containing only 1s or only 0s.

That is what I said. It was insufficient.

>Thus, an N byte block can be balanced, when containing N-k bytes of
>input data, if the excess of 1s or 0s in the input data is less than
>or equal to 7 + 8*(k-1); that is, N-1 bytes can be balanced with an
>excess of 7 1s or 7 0s or anything in between, N-2 bytes can be
>balanced with an excess of 15 1s or 15 0s or anything in between. Or
>can they?
>
>It might happen that I have N-2 bytes of balanced data followed by one
>byte of all 1s. Let's say that each of the first N-2 bytes has 4 1
>bits and 4 0 bits, so that the first N-2 bytes are perfectly balanced
>no matter where I truncate them, for simplicity in what follows.
>
>If I try to balance that by your algorithm, I find I have eight excess
>1s, so I can't put N-1 bytes of data in my N byte block.
>
>How about N-2? Now, my data is *too* balanced. My first balancing byte
>cannot be all 1s or all 0s, or it won't decode. But my second
>balancing byte must be all 1s or all 0s. So those two bytes together
>can't balance!
>
>So I have to balance the block by putting in N-3 bytes of balanced
>bits, followed by one byte with 4 0s and 4 1s, followed by an
>all-zeroes byte and an all-ones byte (in either order).

Right. Good catch.

Only now do I understand a previous comment that suggested allowing f0 balancing, as well as ff and 00.

The bit-balancing flag need have only 7 codes (e.g., 01, 03, 07, 0f, 1f, 3f, 7f) to cover all the balancing it can do with a one-byte flag. All remaining codes could be used as part of the balancing process after the flag.
We know about 00 and ff, but the others are available for use; perhaps 55. (Something like this is needed to pad out a partial block anyway, and I actually implemented it, but felt that was too much detail for the article.)

My goal would be to minimize the extraction effort. Clearly, the balancing or encoding end is going to need byte-by-byte bit-counts in any case. If necessary I would spend a little more on the encoding end to reduce decoding to as much of a triviality as possible.

So for decoding we would first suck up any padding 55's at the end. Then the rest is the same -- we skip back until we get something other than ff or 00, and skip that too. That's it.

>Thus, although your description of your algorithm didn't include
>backtracking, there is in fact a slight need for it.

While I would prefer to have no backtracking, putting one char back is a common feature in some algorithms.

>A modification of your algorithm, making the single balancing byte of
>the form 01111111, 00111111, ... 00000001,

That's what I do.

>and using the bit pattern
>11110000 *along with* 00000000 and 11111111 as an allowed value for
>the second and subsequent balancing bytes, would eliminate this
>complexity.

Yes. With backtracking, that should solve the problem.

The actual padding value would need to be bit-balanced, and not a value used for the first balancing byte, but otherwise could be anything. I would expect this additional value would be fairly rare, perhaps 1 time in 16, thus representing half a bit of overhead per block on average.

>I also note that your original Dynamic Transposition article is from
>1991, so it may well be the first proposal ever openly made for any
>form of polyalphabetic block encipherment.

Oh, surely not!

I don't see Dynamic Transposition as "polyalphabetic." Normally, "polyalphabetic" refers to some necessarily small fixed set of alphabets. But Dynamic Transposition does not select from among a small subset of permutations; instead the idea is to build any possible permutation.

In the design, I was attempting to achieve Shannon perfect secrecy in the context of individual blocks, instead of messages. It was also an attempt to cipher indirectly, in a way that did not expose the actual cipher transformation and thus invite attack.

>Thus, even if my claim that
>one can avoid the overhead of bit-balancing by doing something just as
>good with substitution is true, that has less impact on the
>significance of Dynamic Transposition than I had thought.

I doubt that substitution could be "just as good." Resolving that issue would seem to require a comparable alternative design which could be examined and compared.

Thanks for exposing the problem with the bit-balancing description in the "Revisited" article.

--- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
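Put together, the amended scheme might look like the following sketch (hypothetical code; the block size, the choice of 0xF0 as the extra filler, and the search order are assumptions drawn from the exchange above, and the 0x55 padding of a final short block is left out):

    ONES = [bin(b).count("1") for b in range(256)]
    FLAGS = [0x01, 0x03, 0x07, 0x0F, 0x1F, 0x3F, 0x7F]  # flag: has both 0s and 1s
    FILLS = (0x00, 0xF0, 0xFF)                          # fillers after the flag

    def _fillers(nfill, need):
        # Supply `need` one-bits with nfill filler bytes: some 0xFF (8 ones
        # each), at most one 0xF0 (4 ones), and 0x00 for the rest.
        for c in (0, 1):
            b, rem = divmod(need - 4 * c, 8)
            if rem == 0 and b >= 0 and b + c <= nfill:
                return [0xFF] * b + [0xF0] * c + [0x00] * (nfill - b - c)
        return None

    def balance_block(data, N=16):
        """Pack the longest prefix of `data` into an N-byte block holding
        exactly 4*N one-bits; returns (block, bytes_consumed)."""
        for d in range(min(len(data), N - 1), -1, -1):  # >= 1 balance byte
            s = sum(ONES[b] for b in data[:d])
            for flag in FLAGS:
                fill = _fillers(N - d - 1, 4 * N - s - ONES[flag])
                if fill is not None:
                    return bytes(data[:d]) + bytes([flag] + fill), d
        raise ValueError("cannot balance this block")

    def unbalance_block(block):
        """Strip balance data: drop trailing filler bytes, then the flag."""
        i = len(block) - 1
        while block[i] in FILLS:
            i -= 1
        return block[:i]        # block[i] is the flag; the data precedes it

    blk, used = balance_block(bytes(range(40)), 16)
    assert sum(ONES[b] for b in blk) == 64
    assert unbalance_block(blk) == bytes(range(40))[:used]

Decoding never misfires on data bytes that happen to equal filler values, because the scan from the end always stops at the flag byte, which is not a filler value.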
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Thu, 25 Jan 2001 22:50:38 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a70ab29.9223579@news.powersurfr.com> References: <3a70a862.16135630@news.io.com> Newsgroups: sci.crypt Lines: 44 On Thu, 25 Jan 2001 22:28:38 GMT, ritter@io.com (Terry Ritter) wrote, in part: >I doubt that substitution could be "just as good." Resolving that >issue would seem to require a comparable alternative design which >could be examined and compared. But the specific point we disagree on, I think, is that "all possible transpositions" are not, in my opinion, as good as "all possible substitutions" of the whole block, so the fact that any substitution design won't provide that is not a strike against substitution. That's basically because "all possible transpositions" are not the same thing as all possible substitutions over the set of balanced blocks. This is a specific mathematical issue, and I'm puzzled why it's not getting across. What I have on my page at http://home.ecn.ab.ca/~jsavard/crypto/co041205.htm is just 'conceptual', and hence not completely specified, but then neither is Dynamic Transposition (the stream cipher component isn't fully defined). I can't remember the details now, but I wouldn't be surprised if your original Dynamic Transposition design might have even inspired it in part; I think I remember mentioning in an E-mail to you, though, that I was thinking of using a block cipher as a combiner simply as an alternative to Dynamic Substitution. I can understand that interest in novel combiners is limited, but I would have thought that even in the 'mainstream', combining a byte at a time with byte-wide fixed S-boxes, sort of like a rotor machine, would be popular to avoid the bit-flipping problem of the plain XOR combiner. My design at the URL given above, though, was more a tour-de-force showing how a gigantic key could be used to some effect. (There is a second design, which I consider more potentially practical, lower on the page, but that isn't germane - even though _it_ uses transposition...of key material.) Unlike Dynamic Transposition, which is practical (if _unfashionable_), this is big, slow, and cumbersome. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Fri, 26 Jan 2001 22:06:54 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a71f4aa.7161928@news.io.com> References: <3a70ab29.9223579@news.powersurfr.com> Newsgroups: sci.crypt Lines: 100 On Thu, 25 Jan 2001 22:50:38 GMT, in <3a70ab29.9223579@news.powersurfr.com>, in sci.crypt jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) wrote: >On Thu, 25 Jan 2001 22:28:38 GMT, ritter@io.com (Terry Ritter) wrote, >in part: > >>I doubt that substitution could be "just as good." Resolving that >>issue would seem to require a comparable alternative design which >>could be examined and compared. > >But the specific point we disagree on, I think, is that "all possible >transpositions" are not, in my opinion, as good as "all possible >substitutions" of the whole block, Agree. >so the fact that any substitution >design won't provide that is not a strike against substitution. Disagree. First, the number of permutations from which to select in a practical Dynamic Transposition cipher is huge. I don't expect a substitution technique to come anywhere close to that. In fact, the worst part is that a full substitution keyspace is much, much larger. So we can at least talk about having enough state to select any possible permutation, but we can't even imagine having enough state to select any possible substitution. Which means a practical substitution keyspace is at least potentially biased in a way which might be exploited. Next, the Dynamic Transposition permutations are of equal probability, if only we assume the generator sequences are random-like with independent values. I have discussed that; the approximation to this goal can be quite good indeed, since reducing functions hide value imperfections. Certainly I don't expect to see a stronger argument for any substitution-based system, unless of course we are talking keyed Latin square ciphering. >That's basically because "all possible transpositions" are not the >same thing as all possible substitutions over the set of balanced >blocks. Yes. >This is a specific mathematical issue, and I'm puzzled why >it's not getting across. You are; I reject your conclusions. >What I have on my page at > >http://home.ecn.ab.ca/~jsavard/crypto/co041205.htm > >is just 'conceptual', and hence not completely specified, but then >neither is Dynamic Transposition (the stream cipher component isn't >fully defined). Dynamic Transposition is not a particular cipher, it is a type of ciphering which can be realized in many ways. Designers can use different RNG structures, different balancing, different block size, different permutation algorithms, and so on. The result would still be a recognizable Dynamic Transposition cipher having properties in common with the various other implementations. >I can't remember the details now, but I wouldn't be >surprised if your original Dynamic Transposition design might have >even inspired it in part; I think I remember mentioning in an E-mail >to you, though, that I was thinking of using a block cipher as a >combiner simply as an alternative to Dynamic Substitution. > >I can understand that interest in novel combiners is limited, but I >would have thought that even in the 'mainstream', combining a byte at >a time with byte-wide fixed S-boxes, sort of like a rotor machine, >would be popular to avoid the bit-flipping problem of the plain XOR >combiner. > >My design at the URL given above, though, was more a tour-de-force >showing how a gigantic key could be used to some effect. 
>(There is a
>second design, which I consider more potentially practical, lower on
>the page, but that isn't germane - even though _it_ uses
>transposition...of key material.) Unlike Dynamic Transposition, which
>is practical (if _unfashionable_), this is big, slow, and cumbersome.

Then I am at a loss to understand the comparison. You argue that a design which uses a relatively small fixed set of small substitutions, and is slow, is somehow superior to a design which selects from among more (indeed, all possible of that size) permutations, and is also usably fast. Where is the superiority?

--- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
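The keyspace asymmetry this argument appeals to is easy to put numbers on (a sketch; the 4096-bit block and 128-bit substitution width are illustrative assumptions, not figures from the thread):

    from math import lgamma, log

    LOG2E = 1 / log(2)

    def log2_factorial(n):
        """log2(n!) via the log-gamma function."""
        return lgamma(n + 1) * LOG2E

    # State needed to index every permutation of 4096 bit positions:
    print(round(log2_factorial(4096)))    # ~43,250 bits: large but buildable

    # State needed to index every substitution on 128-bit blocks, by
    # Stirling's approximation log2(n!) ~ n*log2(n) - n*log2(e), n = 2^128:
    print(2**128 * 128 - 2**128 * LOG2E)  # ~4.3e40 bits: out of reach

A generator with tens of thousands of bits of internal state can plausibly cover all bit permutations of a block, while no conceivable amount of state covers all substitutions on blocks of useful size.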
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Thu, 25 Jan 2001 06:38:33 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A6FC9EF.AA380C9E@earthlink.net> References: <3a6f4168.11505536@news.io.com> Newsgroups: sci.crypt Lines: 51

Terry Ritter wrote:
>
> On 24 Jan 2001 05:41:18 GMT, in <94lptu$h0h@nntpa.cb.lucent.com>, in
> sci.crypt ka@socrates.hr.lucent.com.no_spam (Kenneth Almquist) wrote:
>
> >[...]
> >These algorithms can be executed moderately efficiently. The
> >combinatorial calculation used to determine the number of balanced
> >strings with a given prefix can be precomputed and stored in a
> >table. At the cost of using larger tables, we could make the
> >decoding algorithm process multiple bits of the balanced string
> >at a time. However, there is no obvious way to generate a balanced
> >string without doing one bit at a time.
>
> I simply do not understand where you guys are going with this.
>
> Is there some reason why you could not use the algorithm in my
> "revisited" article? It does bit-balance arbitrary data into a
> fixed-size block, is not limited in block size (and thus gains
> efficiency with large blocks), and decoding is trivial. Also, it does
> function byte-by-byte, not bit-by-bit.

For various reasons, it is preferable for both encode (raw->balanced) and decode (balanced->raw) to be bijections -- the simplest reason being that if one does encode/encrypt/decode, there will be minimal expansion.

(1) Suppose that the bit-by-bit encoding method I suggested is used -- if the raw data is unbiased, then there is exactly 1 bit of expansion in the encode function. After enciphering, there is a minimum of 1 bit of compaction in the decode function. I'm sure that you can see that, on average, there will be 0 ciphertext expansion if encode/encrypt/decode is done. Unfortunately, this is probabilistic, and there is no guarantee of not expanding.

(2) Suppose that one of the fixed-length-to-fixed-length methods is used, specifically, one that maps N-bit-long raw strings onto M-bit-long balanced strings, with encode being one-to-one but not onto the set of all M-bit balanced strings. With something like this, a user would encode a string, then repeatedly encipher it until the decode function is defined for that balanced string, and then decode it. With this method, we are guaranteed that there will be no ciphertext expansion. The drawback being, of course, that there is no guarantee that encryption will take place in constant time. This might open the cipher up to timing attacks.

-- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
Subject: Re: Fitting Dynamic Transposition into a Binary World Date: Fri, 26 Jan 2001 02:59:49 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A70E824.EE970941@earthlink.net> References: <3a6f4168.11505536@news.io.com> Newsgroups: sci.crypt Lines: 50

Terry Ritter wrote:
>
> On 24 Jan 2001 05:41:18 GMT, in <94lptu$h0h@nntpa.cb.lucent.com>, in
> sci.crypt ka@socrates.hr.lucent.com.no_spam (Kenneth Almquist) wrote:
>
> >[...]
> >These algorithms can be executed moderately efficiently. The
> >combinatorial calculation used to determine the number of balanced
> >strings with a given prefix can be precomputed and stored in a
> >table. At the cost of using larger tables, we could make the
> >decoding algorithm process multiple bits of the balanced string
> >at a time. However, there is no obvious way to generate a balanced
> >string without doing one bit at a time.
>
> I simply do not understand where you guys are going with this.
>
> Is there some reason why you could not use the algorithm in my
> "revisited" article? It does bit-balance arbitrary data into a
> fixed-size block, is not limited in block size (and thus gains
> efficiency with large blocks), and decoding is trivial. Also, it does
> function byte-by-byte, not bit-by-bit.

There is indeed a reason not to use the algorithm in your "revisited" article -- consider the case where we have a block of ciphertext, but don't have a block of plaintext. We have some candidate keys gotten somehow [presumably from analysis of other blocks; let's not worry about how this might have been done], and we try decrypting this ciphertext block with each of them. Your balancing scheme does not produce all possible bit-balanced blocks. Like with many padding schemes, it is easy to see upon trial decryption that a candidate key is incorrect simply because it could not possibly have produced that [padded] block.

You might consider making it so that the contrasting byte is randomly selected from among all bytes which have the appropriate number of bits, but this now introduces a possible side channel (this is the same problem as with using random padding to fill out the length of a stream to be a multiple of a block cipher's blocksize).

To eliminate this problem, the decoding scheme should be bijective. For the scheme I gave, this is true. One might complain that it is inefficient to work with one bit at a time, rather than a byte at a time, but as has been said, the bit manipulations of the cipher are a tiny fraction of the time consumed, compared to the PRNG calls which create the transposition.

-- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"

Subject: Producing "bit-balanced" strings efficiently for Dynamic Transposition Date: Tue, 23 Jan 2001 17:34:57 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6dc094.19133675@news.powersurfr.com> Newsgroups: sci.crypt Lines: 8 I decided to hunt for things on the web: I found http://www.research.att.com/~amo/doc/arch/balancing.vectors.troff which may be relevant. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
Subject: Re: Producing "bit-balanced" strings efficiently for Dynamic Transposition Date: Wed, 24 Jan 2001 14:36:53 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a6ee71d.278780@news.powersurfr.com> References: <3a6dc094.19133675@news.powersurfr.com> Newsgroups: sci.crypt Lines: 42 Well, I finally came up with an algorithm that is reasonably workable. To convert 37-bit arbitrary strings into 40-bit bit-balanced strings, I need an _enumeration_ of 40-bit balanced strings. Instead of directly splitting up a 40-bit bit-balanced string into five 8-bit strings, I can keep the number of cases manageable if I split it up into two 20-bit strings. Thus, I have the cases (10)(10) (11)(9) and (9)(11) (12)(8) and (8)(12) ... (20)(0) and (0)(20) To actually produce the bits of 40-bit string number such-and-such, I simply _continue the process_. Thus, a bit-balanced 20-bit string becomes two 10-bit strings: (5)(5) (6)(4) and (4)(6) ... (10)(0) and (0)(10) and a 20-bit string with 11 zeroes and 9 ones also becomes two 10-bit strings, this time with the cases (5)(4) and (4)(5) (6)(3) and (3)(6) ... (9)(0) and (0)(9) and I can even split the 10 bit strings into two 5-bit strings. Doing a cascading conversion in this way means that I have a limited number of cases at each step, so the tables I work with have reasonable size. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm
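
Savard's cascade keeps each table small, but the same enumeration can also be done bit-by-bit with a single table of binomial coefficients, which appears to be the combinatorial scheme Almquist alluded to. A minimal C sketch, assuming the balanced strings are ranked in lexicographic order (the function names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define N 40
    #define K 20

    static uint64_t C[N + 1][N + 1];   /* C(40,20) ~ 1.38e11 fits in 64 bits */

    static void init_binomials(void) {
        for (int n = 0; n <= N; n++) {
            C[n][0] = 1;
            for (int k = 1; k <= n; k++)
                C[n][k] = C[n - 1][k - 1] + C[n - 1][k];
        }
    }

    /* Emit the index-th 40-bit string holding exactly twenty 1-bits,
       in lexicographic order.  Since C(40,20) > 2^37, every 37-bit
       value gets a distinct balanced image. */
    static void unrank_balanced(uint64_t index, int bits[N]) {
        int ones = K;
        for (int pos = 0; pos < N; pos++) {
            int rem = N - pos - 1;
            uint64_t zeros_first = (ones <= rem) ? C[rem][ones] : 0;
            if (index < zeros_first) {
                bits[pos] = 0;          /* a 0 here leaves index alone */
            } else {
                index -= zeros_first;   /* skip all strings with 0 here */
                bits[pos] = 1;
                ones--;
            }
        }
    }

    int main(void) {
        int bits[N];
        init_binomials();
        unrank_balanced(123456789ULL, bits);   /* any value below 2^37 */
        for (int i = 0; i < N; i++)
            putchar('0' + bits[i]);
        putchar('\n');
        return 0;
    }

Decoding is the mirror image: scan the bits, adding C(rem, ones) back into the index for each 1-bit consumed.
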
Subject: Re: Producing "bit-balanced" strings efficiently for Dynamic Transposition Date: 26 Jan 2001 14:00:54 GMT From: rpw3@rigden.engr.sgi.com (Rob Warnock) Message-ID: <94rvum$4dbha$1@fido.engr.sgi.com> References: <3a6ee71d.278780@news.powersurfr.com> Newsgroups: sci.crypt Lines: 39 John Savard <jsavard@ecn.ab.SBLOK.ca.nowhere> wrote: +--------------- | Well, I finally came up with an algorithm that is reasonably workable. | To convert 37-bit arbitrary strings into 40-bit bit-balanced strings, +--------------- Hmmm... I think for truly *arbitrary* 37-bit strings you'll need at least a 44-bit result. +--------------- | I need an _enumeration_ of 40-bit balanced strings. Instead of | directly splitting up a 40-bit bit-balanced string into five 8-bit | strings, I can keep the number of cases manageable if I split it up | into two 20-bit strings. ... | To actually produce the bits of 40-bit string number such-and-such, I | simply _continue the process_. | Thus, a bit-balanced 20-bit string becomes two 10-bit strings: ... | and I can even split the 10 bit strings into two 5-bit strings. | | Doing a cascading conversion in this way means that I have a limited | number of cases at each step, so the tables I work with have | reasonable size. +--------------- If you recurse all the way down to 1-bit "strings", I think you end up with the same number of overhead bits as my "partitioned polarity inversion" method, namely, lg N. And the latter is simpler and faster. -Rob ----- Rob Warnock, 31-2-510 rpw3@sgi.com SGI Network Engineering http://reality.sgi.com/rpw3/ 1600 Amphitheatre Pkwy. Phone: 650-933-1673 Mountain View, CA 94043 PP-ASEL-IA
Subject: Re: Producing "bit-balanced" strings efficiently for Dynamic Transposition Date: Fri, 26 Jan 2001 15:55:29 GMT From: jsavard@ecn.ab.SBLOK.ca.nowhere (John Savard) Message-ID: <3a719da7.8715199@news.powersurfr.com> References: <94rvum$4dbha$1@fido.engr.sgi.com> Newsgroups: sci.crypt Lines: 12 On 26 Jan 2001 14:00:54 GMT, rpw3@rigden.engr.sgi.com (Rob Warnock) wrote, in part: >Hmmm... I think for truly *arbitrary* 37-bit strings you'll need at >least a 44-bit result. Although the method you've outlined is indeed capable of achieving any desired level of efficiency, it is not optimal; for any given block size, it produces balanced strings that are larger than necessary. John Savard http://home.ecn.ab.ca/~jsavard/crypto.htm

Subject: Durstenfeld Transpositions & ARC4 Date: Thu, 25 Jan 2001 19:57:58 -0800 From: "r.e.s." <rs.1@mindspring.com> Message-ID: <94qsjr$qfk$1@slb2.atl.mindspring.net> Newsgroups: sci.crypt Lines: 20 In 1964 Durstenfeld published his well-known Shuffle algorithm that generates a random N-permutation by means of successive pairwise transpositions, which seems to be the "dynamic" part of Terry Ritter's "Dynamic Transposition" cipher. Has there been substantive improvement in such algorithms since 1964, or does Durstenfeld's remain about as good as any? Also, is ARC4's byte-stream generator adequate as a CSPRNG? If so, is there a straightforward way to adapt such a byte-stream to generate PRNs uniformly distributed on 1..n? If the answers to these last questions are in the affirmative, then I wonder whether it might be reasonable to have a 2-stage cipher that first uses ARC4 as usual (e.g. as in Ciphersaber), followed by Durstenfeld Transpositions (Shuffles or equivalent) whose rand(1..n) procedure also uses ARC4's stream. (?) --r.e.s.
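
For reference, the algorithm under discussion is tiny. A C sketch of Durstenfeld's Shuffle, with the uniform selection step left abstract, since building it from a keyed byte stream is exactly what the rest of the thread debates (uniform_below is an assumed name, not from the posts):

    #include <stddef.h>

    /* Assumed primitive: a uniform value in 0..n-1, e.g. built by
       rejection from a keyed byte stream as discussed below. */
    extern unsigned uniform_below(unsigned n);

    /* Durstenfeld (1964): one pass of pairwise transpositions; each
       position is swapped with a uniformly chosen position at or
       below it, giving every permutation equal probability. */
    void shuffle(unsigned char *a, size_t n) {
        if (n < 2)
            return;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = uniform_below((unsigned)(i + 1));   /* 0..i */
            unsigned char t = a[i];
            a[i] = a[j];
            a[j] = t;
        }
    }
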
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 26 Jan 2001 04:59:35 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a710371.2551605@news.io.com> References: <94qsjr$qfk$1@slb2.atl.mindspring.net> Newsgroups: sci.crypt Lines: 54 On Thu, 25 Jan 2001 19:57:58 -0800, in <94qsjr$qfk$1@slb2.atl.mindspring.net>, in sci.crypt "r.e.s." <rs.1@mindspring.com> wrote: >In 1964 Durstenfeld published his well-known Shuffle algorithm >that generates a random N-permutation by means of successive >pairwise transpositions, which seems to be the "dynamic" part >of Terry Ritter's "Dynamic Transposition" cipher. Has there >been substantive improvement in such algorithms since 1964, or >does Durstenfeld's remain about as good as any? There exist other permutation techniques of similar age. Shuffle seems about as good as any, and is widely understood. For the state of the art up to 1991, see: http://www.io.com/~ritter/ARTS/CRNG2ART.HTM#Sect6.7 >Also, is ARC4's byte-stream generator adequate as a CSPRNG? The ARC4 state is awfully small for Dynamic Transposition, especially if we shuffle twice. We want more active state in the RNG than is used in a single encryption, and probably do want at least 128 bits in the block. Since rejection in Shuffle (to achieve variable-range) throws away values (and may throw away a lot), probably it is not large enough. >If >so, is there a straightforward way to adapt such a byte-stream >to generate PRNs uniformly distributed on 1..n? The conventional technique is "rejection." I have described this many times. See, for example: http://www.io.com/~ritter/KEYSHUF.HTM in "The Shuffling Subsystem" section. >If the answers to these last questions are in the affirmative, >then I wonder whether it might be reasonable to have a 2-stage >cipher that first uses ARC4 as usual (e.g. as in Ciphersaber), >followed by Durstenfeld Transpositions (Shuffles or equivalent) >whose rand(1..n) procedure also uses ARC4's stream. (?) Durstenfeld did not invent bit-permutation ciphers, nor did he invent the idea of bit-balancing the data for a bit-permutation cipher. That type of cipher is called Dynamic Transposition. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Thu, 25 Jan 2001 23:11:02 -0800 From: "r.e.s." <rs.1@mindspring.com> Message-ID: <94r7tp$ei7$1@slb2.atl.mindspring.net> References: <3a710371.2551605@news.io.com> Newsgroups: sci.crypt Lines: 76 "Terry Ritter" wrote... | "r.e.s." wrote: | >In 1964 Durstenfeld published his well-known Shuffle algorithm | >that generates a random N-permutation by means of successive | >pairwise transpositions, which seems to be the "dynamic" part | >of Terry Ritter's "Dynamic Transposition" cipher. Has there | >been substantive improvement in such algorithms since 1964, or | >does Durstenfeld's remain about as good as any? | | There exist other permutation techniques of similar age. Shuffle | seems about as good as any, and is widely understood. For the | state of the art up to 1991, see: | | http://www.io.com/~ritter/ARTS/CRNG2ART.HTM#Sect6.7 Thanks, I'll take a look. | >Also, is ARC4's byte-stream generator adequate as a CSPRNG? | | The ARC4 state is awfully small for Dynamic Transposition, | especially | if we shuffle twice. We want more active state in the RNG than is | used in a single encryption, and probably do want at least 128 bits | in | the block. Since rejection in Shuffle (to achieve variable-range) | throws away values (and may throw away a lot), probably it is not | large enough. I wasn't proposing to use Dynamic Transposition, but what you say is interesting -- especially since the Ciphersaber FAQ says... "RC4 is a powerful pseudo-random number generator, with a much bigger internal state, then [sic] the ones that come with most programming systems." | >If | >so, is there a straightforward way to adapt such a byte-stream | >to generate PRNs uniformly distributed on 1..n? | | The conventional technique is "rejection." I have described this | many times. See, for example: | | http://www.io.com/~ritter/KEYSHUF.HTM | | in "The Shuffling Subsystem" section. Ah, well... I had hoped there might be something more efficient than rejection methods -- they can be so inefficient that I had ruled them out without saying so. | >If the answers to these last questions are in the affirmative, | >then I wonder whether it might be reasonable to have a 2-stage | >cipher that first uses ARC4 as usual (e.g. as in Ciphersaber), | >followed by Durstenfeld Transpositions (Shuffles or equivalent) | >whose rand(1..n) procedure also uses ARC4's stream. (?) | | Durstenfeld did not invent bit-permutation ciphers, nor did he | invent | the idea of bit-balancing the data for a bit-permutation cipher. | That type of cipher is called Dynamic Transposition. The Shuffle algorithm is for generating random permutations of *anything*, right? So surely you don't consider that simply using it for bit-permutations is proprietary? (One could also use Shuffle for byte-permutations, but I believe both to be non-proprietary uses. NB: I'm specifically *not* referring to other cipher components such as bit-balancing.) --r.e.s.
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 26 Jan 2001 09:21:21 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A714197.1D112085@earthlink.net> References: <94r7tp$ei7$1@slb2.atl.mindspring.net> Newsgroups: sci.crypt Lines: 49 r.e.s. wrote: > > "Terry Ritter" wrote... [snip] > | >If so, is there a straightforward way to adapt such a byte-stream > | >to generate PRNs uniformly distributed on 1..n? > | > | The conventional technique is "rejection." I have described this > | many times. See, for example: > | > | http://www.io.com/~ritter/KEYSHUF.HTM > | > | in "The Shuffling Subsystem" section. > > Ah, well... I had hoped there might be something more efficient > than rejection methods -- they can be so inefficient that I had > ruled them out without saying so. Although the particular rejection method Ritter used is particularly inefficient (intentionally so, to increase unpredictability), it is possible to do rejection with much more efficiency. One of the posts I made to this thread (the one at 1:13am) describes two kinds of rejection methods which are much more efficient. The first method I suggested there is the most efficient (in terms of bits used) possible for selecting in a range, but has the drawback that it must use one bit at a time from the stream. If your PRNG is not built for 1 bit at a time, you must either buffer a word to take a bit at a time, or discard all but one bit of a PRNG call. The second method I suggested there is the most efficient (in terms of number of PRNG calls, assuming that the PRNG produces a many-bit word for each call) possible, but has the drawback that many more bits are discarded. There is, however, a third method of selecting a number in a range, which discards no bits *at all*. To do this, initialize a static arithmetic decoder, such that each of the values in the range is equiprobable. Now, using your PRNG as the "compressed data" source, decode one value. There is a [minuscule] chance of underflow, but this can, in practice, be ignored. Be warned, I am not absolutely certain that the arithmetic encoder's output will be absolutely unbiased within the range. I suspect (gut feeling) it might not be, but I'm not sure. -- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 02 Feb 2001 01:27:26 GMT From: reinhold@world.std.com Message-ID: <95d2dm$qk7$1@nnrp1.deja.com> References: <94r7tp$ei7$1@slb2.atl.mindspring.net> Newsgroups: sci.crypt Lines: 32 In article <94r7tp$ei7$1@slb2.atl.mindspring.net>, "r.e.s." <rs.1@mindspring.com> wrote: > "Terry Ritter" wrote... > > | >Also, is ARC4's byte-stream generator adequate as a CSPRNG? > | > | The ARC4 state is awfully small for Dynamic Transposition, > | especially > | if we shuffle twice. We want more active state in the RNG than is > | used in a single encryption, and probably do want at least 128 bits > | in > | the block. Since rejection in Shuffle (to achieve variable-range) > | throws away values (and may throw away a lot), probably it is not > | large enough. > > I wasn't proposing to use Dynamic Transposition, but what you > say is interesting -- especially since the Ciphersaber FAQ says... > > "RC4 is a powerful pseudo-random number generator, with a much > bigger internal state, then [sic] the ones that come with most > programming systems." > RC4's internal state is a permutation on 256 elements, so there are 256! possible states. That works out to about 1684 bits. How big a state do you think you need? Arnold Reinhold Sent via Deja.com http://www.deja.com/
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Thu, 1 Feb 2001 20:42:32 -0800 From: "Scott Fluhrer" <sfluhrer@ix.netcom.com> Message-ID: <95de7n$bbk$1@nntp9.atl.mindspring.net> References: <95d2dm$qk7$1@nnrp1.deja.com> Newsgroups: sci.crypt Lines: 38 <reinhold@world.std.com> wrote in message news:95d2dm$qk7$1@nnrp1.deja.com... > In article <94r7tp$ei7$1@slb2.atl.mindspring.net>, > "r.e.s." <rs.1@mindspring.com> wrote: > > "Terry Ritter" wrote... > > > > | >Also, is ARC4's byte-stream generator adequate as a CSPRNG? > > | > > | The ARC4 state is awfully small for Dynamic Transposition, > > | especially > > | if we shuffle twice. We want more active state in the RNG than is > > | used in a single encryption, and probably do want at least 128 bits > > | in > > | the block. Since rejection in Shuffle (to achieve variable-range) > > | throws away values (and may throw away a lot), probably it is not > > | large enough. > > > > I wasn't proposing to use Dynamic Transposition, but what you > > say is interesting -- especially since the Ciphersaber FAQ says... > > > > "RC4 is a powerful pseudo-random number generator, with a much > > bigger internal state, then [sic] the ones that come with most > > programming systems." > > > > RC4's internal state is a permutation on 256 elements, so there are 256! > possible states. That works out to about 1684 bits. How big a state do > you think you need? Obnit: RC4's internal state is a permutation of 256 elements, and two 8 bit variables, giving a total of 256! * 256^2 possible states. You have shamefully understated it... (:-) -- poncho
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 02 Feb 2001 09:09:27 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A7A7967.928A7C47@earthlink.net> References: <95de7n$bbk$1@nntp9.atl.mindspring.net> Newsgroups: sci.crypt Lines: 43 Scott Fluhrer wrote: > > <reinhold@world.std.com> wrote in message > news:95d2dm$qk7$1@nnrp1.deja.com... > > In article <94r7tp$ei7$1@slb2.atl.mindspring.net>, > > "r.e.s." <rs.1@mindspring.com> wrote: > > > "Terry Ritter" wrote... > > > > > > | >Also, is ARC4's byte-stream generator adequate as a CSPRNG? > > > | > > > | The ARC4 state is awfully small for Dynamic Transposition, > > > | especially if we shuffle twice. We want more active state in > > > | the RNG than is used in a single encryption, and probably do > > > | want at least 128 bits in the block. Since rejection in Shuffle > > > | (to achieve variable-range) throws away values (and may throw > > > | away a lot), probably it is not large enough. > > > > > > I wasn't proposing to use Dynamic Transposition, but what you > > > say is interesting -- especially since the Ciphersaber FAQ says... > > > > > > "RC4 is a powerful pseudo-random number generator, with a much > > > bigger internal state, then [sic] the ones that come with most > > > programming systems." > > > > > > > RC4's internal state is a permutation on 256 elements, so there are > > 256! possible states. That works out to about 1684 bits. How big a > > state do you think you need? > > Obnit: RC4's internal state is a permutation of 256 elements, and two > 8 bit variables, giving a total of 256! * 256^2 possible states. You > have shamefully understated it... (:-) Obnit Obnit: RC4's keying mechanism avoids a large set of initial internal states, so that there are fewer than 256! * 256^2 starting states. The reason for this is to avoid a starting state that is on one of a particular set of short cycles. It is not known whether RC4 has any short cycles other than those which the key schedule avoids. -- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Thu, 1 Feb 2001 20:49:49 -0800 From: "r.e.s." <rs.1@mindspring.com> Message-ID: <95de6f$v7l$1@slb7.atl.mindspring.net> References: <95d2dm$qk7$1@nnrp1.deja.com> Newsgroups: sci.crypt Lines: 46 <reinhold@world.std.com> wrote ... | "r.e.s." <rs.1@mindspring.com> wrote: | > "Terry Ritter" wrote... | > [r.e.s. wrote:] | > | >Also, is ARC4's byte-stream generator adequate as a CSPRNG? | > | | > | The ARC4 state is awfully small for Dynamic Transposition, | > | especially | > | if we shuffle twice. We want more active state in the RNG than is | > | used in a single encryption, and probably do want at least 128 bits | > | in | > | the block. Since rejection in Shuffle (to achieve variable-range) | > | throws away values (and may throw away a lot), probably it is not | > | large enough. | > | > I wasn't proposing to use Dynamic Transposition, but what you | > say is interesting -- especially since the Ciphersaber FAQ says... | > | > "RC4 is a powerful pseudo-random number generator, with a much | > bigger internal state, then [sic] the ones that come with most | > programming systems." | > | | RC4's internal state is a permutation on 256 elements, so there are 256! | possible states. That works out to about 1684 bits. How big a state do | you think you need? Suppose we use ARC4 to encipher successive N-byte blocks, and perform a Durstenfeld Transposition ("Shuffle") on the N bytes of each block -- using the ARC4 byte-stream to generate the PRNs called by the Shuffle algorithm. Considering Terry Ritter's negative conclusion above, in spite of the 1684-bit state, for what values of N *would* ARC4's state be sufficient? --r.e.s.
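
Both of the numbers in play are easy to check, and they give one crude answer to the question -- if Ritter's requirement of enough generator state for two independent shuffles is read as the inequality 2*log2(N!) <= log2(256! * 256^2). That reading, and the C sketch below, are an interpretation, not something stated in the thread:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* lgamma(257) = ln(256!); divide by ln 2 to get bits. */
        double perm_bits  = lgamma(257.0) / log(2.0);   /* ~1684 */
        double state_bits = perm_bits + 16.0;           /* 256! * 256^2, ~1700 */

        /* Find the largest N with 2*log2(N!) <= state_bits. */
        double log2_fact = 0.0;                         /* log2(n!) */
        int n = 1;
        while (2.0 * (log2_fact + log2((double)(n + 1))) <= state_bits) {
            n++;
            log2_fact += log2((double)n);
        }
        printf("log2(256!) = %.1f bits; total state = %.1f bits\n",
               perm_bits, state_bits);
        printf("largest N with 2*log2(N!) <= state: %d\n", n);
        return 0;
    }

On that reading the double-shuffle budget runs out near N = 146 bytes, while a single shuffle fits out to roughly the size of RC4's own 256-element permutation.
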
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 02 Feb 2001 09:17:29 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A7A7B4A.D0F0515B@earthlink.net> References: <3a710371.2551605@news.io.com> Newsgroups: sci.crypt Lines: 25 Terry Ritter wrote: > > On Thu, 25 Jan 2001 19:57:58 -0800, in > <94qsjr$qfk$1@slb2.atl.mindspring.net>, in sci.crypt "r.e.s." > <rs.1@mindspring.com> wrote: > [snip] > >Also, is ARC4's byte-stream generator adequate as a CSPRNG? > > The ARC4 state is awfully small for Dynamic Transposition, especially > if we shuffle twice. We want more active state in the RNG than is > used in a single encryption, and probably do want at least 128 bits in > the block. Since rejection in Shuffle (to achieve variable-range) > throws away values (and may throw away a lot), probably it is not > large enough. The purpose of having a particular size state PRNG is to allow every possible permutation to be a possible result of our PRPG (pseudo-random permutation generator). No significant change is made to the size of the range of possible permutations by double shuffling, so there's no reason to require a larger PRNG state if double shuffling is used. -- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 2 Feb 2001 08:10:14 -0800 From: "r.e.s." <rs.1@mindspring.com> Message-ID: <95em3e$gbj$1@nntp9.atl.mindspring.net> References: <3A7A7B4A.D0F0515B@earthlink.net> Newsgroups: sci.crypt Lines: 46 "Benjamin Goldberg" <goldbb2@earthlink.net> wrote ... | Terry Ritter wrote: | > | > On Thu, 25 Jan 2001 19:57:58 -0800, in | > <94qsjr$qfk$1@slb2.atl.mindspring.net>, in sci.crypt "r.e.s." | > <rs.1@mindspring.com> wrote: | > | [snip] | > >Also, is ARC4's byte-stream generator adequate as a CSPRNG? | > | > The ARC4 state is awfully small for Dynamic Transposition, especially | > if we shuffle twice. We want more active state in the RNG than is | > used in a single encryption, and probably do want at least 128 bits in | > the block. Since rejection in Shuffle (to achieve variable-range) | > throws away values (and may throw away a lot), probably it is not | > large enough. | | The purpose of having a particular size state PRNG is to allow every | possible permutation to be a possible result of our PRPG (pseudo-random | permutation generator). No significant change is made to the size of | the range of possible permutations by double shuffling, so there's no | reason to require a larger PRNG state if double shuffling is used. A rejection method (like the ones posted by Benjamin Goldberg and David Wagner, adapted to ARC4's byte-stream) might require two consecutive non-rejected bytes from ARC4 for each PRN it generates. If q(k) is the rejection probability for the PRN produced from two such bytes, being required in the range 1..k, then it seems that Durstenfeld's Shuffle can be expected to require B(N) = 2 * ( 1/q(N) + 1/q(N-1) + ... + 1/q(2) ) bytes from ARC4 when used to shuffle a block of N bytes. So, if the plaintext consists of L bytes, then approximately (L/N)*B(N) bytes are required from ARC4. Can approximate values for q(k) be found, allowing an estimate of B(N), to get a more quantitative handle on the question? --r.e.s.
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 2 Feb 2001 08:25:43 -0800 From: "r.e.s." <rs.1@mindspring.com> Message-ID: <95en0f$oj7$1@slb5.atl.mindspring.net> References: <95em3e$gbj$1@nntp9.atl.mindspring.net> Newsgroups: sci.crypt Lines: 21 correction: "r.e.s." <rs.1@mindspring.com> wrote ... [...] | If q(k) is the rejection probability for the PRN ^^^^^^^^^ acceptance | produced from two such bytes, being required in the range 1..k, | then it seems that Durstenfeld's Shuffle can be expected to | require B(N) = 2 * ( 1/q(N) + 1/q(N-1) + ... + 1/q(2) ) | bytes from ARC4 when used to shuffle a block of N bytes. So, | if the plaintext consists of L bytes, then approximately | (L/N)*B(N) bytes are required from ARC4. | | Can approximate values for q(k) be found, allowing an estimate | of B(N), to get a more quantitative handle on the question? | | --r.e.s.
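
Reading q(k) as the acceptance probability, the arithmetic is direct: two ARC4 bytes form a 16-bit candidate, q(k) = (65536 - 65536 mod k)/65536, and each accepted PRN costs 2/q(k) bytes on average. A C sketch that evaluates the post's B(N) formula (N = 256 is chosen here only as an example):

    #include <stdio.h>

    int main(void) {
        int N = 256;              /* block size in bytes */
        double bytes = 0.0;
        for (int k = 2; k <= N; k++) {
            /* accept a 16-bit value iff below the largest multiple
               of k, then reduce mod k */
            double q = (double)((65536 / k) * k) / 65536.0;
            bytes += 2.0 / q;     /* expected bytes per PRN in 1..k */
        }
        printf("B(%d) ~ %.1f ARC4 bytes per shuffled block\n", N, bytes);
        return 0;
    }

For k up to 256 the acceptance probability never drops below about 0.996, so B(256) comes out barely above its floor of 2*(N-1) = 510 bytes.
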
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 02 Feb 2001 21:02:10 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a7b2018.3770819@news.io.com> References: <3A7A7B4A.D0F0515B@earthlink.net> Newsgroups: sci.crypt Lines: 66 On Fri, 02 Feb 2001 09:17:29 GMT, in <3A7A7B4A.D0F0515B@earthlink.net>, in sci.crypt Benjamin Goldberg <goldbb2@earthlink.net> wrote: >Terry Ritter wrote: >> >> On Thu, 25 Jan 2001 19:57:58 -0800, in >> <94qsjr$qfk$1@slb2.atl.mindspring.net>, in sci.crypt "r.e.s." >> <rs.1@mindspring.com> wrote: >> >[snip] >> >Also, is ARC4's byte-stream generator adequate as a CSPRNG? >> >> The ARC4 state is awfully small for Dynamic Transposition, especially >> if we shuffle twice. We want more active state in the RNG than is >> used in a single encryption, and probably do want at least 128 bits in >> the block. Since rejection in Shuffle (to achieve variable-range) >> throws away values (and may throw away a lot), probably it is not >> large enough. > >The purpose of having a particular size state PRNG is to allow every >possible permutation to be a possible result of our PRPG (pseudo-random >permutation generator). No significant change is made to the size of >the range of possible permutations by double shuffling, so there's no >reason to require a larger PRNG state if double shuffling is used. Well, here it is from the original article: "If we shuffle each block just once, an opponent who somehow knows the correct resulting permutation can use that information to reproduce the shuffling RNG sequence, and thus start to attack the RNG." "This does not produce more permutations, it just hides shuffling sequence." Now, since that was insufficient, let me try it again: "The" purpose of having a lot of state in the RNG is to allow every possible *pair* of two permutations. Essentially, Shuffle is reversible. With only one shuffling, if the opponents get the permutation, they immediately have a pretty good idea what sequence produced that permutation. (The information would be inexact, but far, far better than nothing.) With two independent shufflings, there is no such implication. If there is only enough state in the RNG for a single shuffling, any second shuffling will be correlated to the first by way of the limited RNG state, and, thus, not independent. So knowing the final permutation might allow the development of the double-length shuffling sequence which produced the known permutation. That has not helped. The purpose of having enough information for *two* *independent* shufflings is to isolate the sequence generator from the permutation. Two shufflings use twice as much information (sequence) as needed to form a permutation, so two shufflings use twice as much information as the resulting permutation can represent. Even if the opponents do get the final permutation, the uncertainty in the sequence used to build that permutation will be as though we had a non-reversible shuffle. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Sat, 03 Feb 2001 06:55:01 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A7BAAE1.F465F918@earthlink.net> References: <3a7b2018.3770819@news.io.com> Newsgroups: sci.crypt Lines: 74 Terry Ritter wrote: > > On Fri, 02 Feb 2001 09:17:29 GMT, in > <3A7A7B4A.D0F0515B@earthlink.net>, in sci.crypt Benjamin Goldberg > <goldbb2@earthlink.net> wrote: > > >Terry Ritter wrote: > >> > >> On Thu, 25 Jan 2001 19:57:58 -0800, in > >> <94qsjr$qfk$1@slb2.atl.mindspring.net>, in sci.crypt "r.e.s." > >> <rs.1@mindspring.com> wrote: > >> > >[snip] > >> >Also, is ARC4's byte-stream generator adequate as a CSPRNG? > >> > >> The ARC4 state is awfully small for Dynamic Transposition, > >> especially if we shuffle twice. We want more active state in the > >> RNG than is used in a single encryption, and probably do want at > >> least 128 bits in the block. Since rejection in Shuffle (to > >> achieve variable-range) throws away values (and may throw away a > >> lot), probably it is not large enough. > > > >The purpose of having a particular size state PRNG is to allow every > >possible permutation to be a possible result of our PRPG (pseudo-random > >permutation generator). No significant change is made to the > >size of the range of possible permutations by double shuffling, so > >there's no reason to require a larger PRNG state if double shuffling > >is used. > > Well, here it is from the original article: > > "If we shuffle each block just once, an opponent who somehow > knows the correct resulting permutation can use that > information to reproduce the shuffling RNG sequence, and > thus start to attack the RNG." "This does not > produce more permutations, it just hides shuffling sequence." > > Now, since that was insufficient, let me try it again: > > "The" purpose of having a lot of state in the RNG is to allow every > possible *pair* of two permutations. > > Essentially, Shuffle is reversible. With only one shuffling, if the > opponents get the permutation, they immediately have a pretty good > idea what sequence produced that permutation. (The information would > be inexact, but far, far better than nothing.) With two independent > shufflings, there is no such implication. > > If there is only enough state in the RNG for a single shuffling, any > second shuffling will be correlated to the first by way of the limited > RNG state, and, thus, not independent. So knowing the final > permutation might allow the development of the double-length shuffling > sequence which produced the known permutation. That has not helped. > > The purpose of having enough information for *two* *independent* > shufflings is to isolate the sequence generator from the permutation. > Two shufflings use twice as much information (sequence) as needed to > form a permutation, so two shufflings use twice as much information as > the resulting permutation can represent. Even if the opponents do get > the final permutation, the uncertainty in the sequence used to build > that permutation will be as though we had a non-reversible shuffle. Aaah, now I get it. Now I've a silly question: With the double-shuffling method, is it possible for us to [safely] use as our PRNG a simple LFSR, with nothing fancy added to obscure the state (like shrinking, variable clocking, etc)? Assuming it has a large enough state, that is. Also, would it still be secure if the LFSR polynomial is sparse (e.g., a trinomial)? -- A solution in hand is worth two in the book. Who cares about birds and bushes?
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Sat, 03 Feb 2001 08:04:29 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a7bbb7c.1114985@news.io.com> References: <3A7BAAE1.F465F918@earthlink.net> Newsgroups: sci.crypt Lines: 40 On Sat, 03 Feb 2001 06:55:01 GMT, in <3A7BAAE1.F465F918@earthlink.net>, in sci.crypt Benjamin Goldberg <goldbb2@earthlink.net> wrote: >[...] >Now I've a silly question: With the double-shuffling method, is it >possible for us to [safely] use as our PRNG a simple LFSR, with nothing >fancy added to obscure the state (like shrinking, variable clocking, >etc)? That is a cryptographic design issue: tradeoffs must be made in the context of a lack of information. Were there a literature of cipher design, such that the strengths of various mechanisms were known, as used in various ways, we could just pick and choose. Everybody would agree. Nobody would guess. But there is no cipher design handbook. Personally, I try to have multiple levels of strength in a cipher. That is, I try to use different mechanisms which each seem strong in different ways, so that if one fails (that is, if I am mistaken about the strength), there is a backup, and a backup to that. >Assuming it has a large enough state, that is. Also, would it still be >secure if the LFSR polynomial is sparse (e.g., a trinomial)? Again, it is a design issue. The answer depends upon context, and what one can imagine an opponent might be able to do. There is some evidence that relatively small trinomials produce sequences that are "less random" than one might like, but that issue does not extend to what I think of as cryptographic sizes. It might even be argued that sometimes what we really want is a large distance between terms, which of course would imply trinomials. On the other hand, if poly choice is part of keying, it probably makes sense to use 9-nomials. But then you have to find them. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
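
For concreteness, the kind of generator being weighed here looks like the following: a toy Fibonacci LFSR over the primitive trinomial x^31 + x^3 + 1. The width and seeding are illustrative only; 31 bits is orders of magnitude too little state for the cipher under discussion:

    #include <stdint.h>

    static uint32_t lfsr_state = 1;   /* any nonzero 31-bit seed */

    /* One step over x^31 + x^3 + 1: the new bit is the XOR of the
       taps (stage 31 and stage 3), shifted in at the bottom.  The
       period is 2^31 - 1 from any nonzero seed. */
    static unsigned lfsr_step(void) {
        unsigned bit = ((lfsr_state >> 30) ^ (lfsr_state >> 2)) & 1u;
        lfsr_state = ((lfsr_state << 1) | bit) & 0x7FFFFFFFu;
        return bit;
    }
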
Subject: Re: Durstenfeld Transpositions & ARC4 Date: 26 Jan 2001 05:35:15 GMT From: daw@mozart.cs.berkeley.edu (David Wagner) Message-ID: <94r2aj$cb2$1@agate.berkeley.edu> References: <94qsjr$qfk$1@slb2.atl.mindspring.net> Newsgroups: sci.crypt Lines: 34 r.e.s. wrote: >In 1964 Durstenfeld published his well-known Shuffle algorithm >that generates a random N-permutation by means of successive >pairwise transpositions, which seems to be the "dynamic" part >of Terry Ritter's "Dynamic Transposition" cipher. Has there >been substantive improvement in such algorithms since 1964, or >does Durstenfeld's remain about as good as any? (I presume this is the standard one, found, e.g., in Knuth.) No, you can't do any better. Standard lower bounds for sorting demonstrate that you need O(N log N) swaps to randomly permute an array of N elements. (This comes from the fact that shuffling is basically running a comparison-based sort in reverse.) See Knuth. >Also, is ARC4's byte-stream generator adequate as a CSPRNG? No one knows. But, it is used in >99% of all SSL connections, so if it's not, lots of SSL transactions are in trouble! >If so, is there a straightforward way to adapt such a byte-stream >to generate PRNs uniformly distributed on 1..n? Any CSPRNG can be used to generate random values uniformly distributed on 1..n. Many algorithms have been posted here before. Here's a simplistic one. Choose k so that 2^k > n. Let l = n*floor(2^k/n). Using k random bits, generate a random number from 0..2^k-1; call it x. If x<l, then use x mod n as your next output value. If x>=l, then repeat (i.e., keep choosing x until you find one less than l).
Subject: Re: Durstenfeld Transpositions & ARC4 Date: 25 Jan 2001 21:38:53 -0800 From: Paul Rubin <phr-n2001@nightsong.com> Message-ID: <7x8znykh3m.fsf@ruckus.brouhaha.com> References: <94r2aj$cb2$1@agate.berkeley.edu> Newsgroups: sci.crypt Lines: 12 daw@mozart.cs.berkeley.edu (David Wagner) writes: > >Also, is ARC4's byte-stream generator adequate as a CSPRNG? > > No one knows. But, it is used in >99% of all SSL connections, > so if it's not, lots of SSL transactions are in trouble! Nah. It's already known that you can distinguish an ARC4 stream from a random stream by examining about 2**30 bytes of output. That makes it fair to say it's inadequate as a CSPRNG; but I wouldn't say any SSL transactions are in trouble as a result.
Subject: Re: Durstenfeld Transpositions & ARC4 Date: 26 Jan 2001 07:52:08 GMT From: daw@mozart.cs.berkeley.edu (David Wagner) Message-ID: <94rab8$dt6$1@agate.berkeley.edu> References: <7x8znykh3m.fsf@ruckus.brouhaha.com> Newsgroups: sci.crypt Lines: 13 Paul Rubin wrote: >daw@mozart.cs.berkeley.edu (David Wagner) writes: >> No one knows. But, it is used in >99% of all SSL connections, >> so if it's not, lots of SSL transactions are in trouble! > >Nah. It's already known that you can distinguish an ARC4 stream from >a random stream by examining about 2**30 bytes of output. That makes >it fair to say it's inadequate as a CSPRNG; but I wouldn't say any SSL >transactions are in trouble as a result. Good point! You are quite right (as always), and I withdraw my erroneous claim. Thank you very much for the correction.
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 26 Jan 2001 05:54:50 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A711135.D9BA1E0A@earthlink.net> References: <94r2aj$cb2$1@agate.berkeley.edu> Newsgroups: sci.crypt Lines: 25 David Wagner wrote: > > r.e.s. wrote: > >In 1964 Durstenfeld published his well-known Shuffle algorithm > >that generates a random N-permutation by means of successive > >pairwise transpositions, which seems to be the "dynamic" part > >of Terry Ritter's "Dynamic Transposition" cipher. Has there > >been substantive improvement in such algorithms since 1964, or > >does Durstenfeld's remain about as good as any? > > (I presume this is the standard one, found, e.g., in Knuth.) > > No, you can't do any better. Standard lower bounds for sorting > demonstrate that you need O(N log N) swaps to randomly permute > an array of N elements. (This comes from the fact that shuffling > is basically run a comparison-based sort in reverse.) > > See Knuth. With the standard Shuffle algorithm, it takes N swaps. The (N log N) is the number of random bits needed. -- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 26 Jan 2001 10:31:01 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A7143D5.4B167D82@t-online.de> References: <94r2aj$cb2$1@agate.berkeley.edu> Newsgroups: sci.crypt Lines: 24 David Wagner wrote: > [snip] > Any CSPRNG can be used to generate random values uniformly > distributed on 1..n. Many algorithms have been posted here > before. > > Here's a simplistic one. Choose k so that 2^k > n. > Let l = n*floor(2^k/n). Using k random bits, generate a > random number from 0..2^k-1; call it x. If x<l, then use > x mod n as your next output value. If x>=l, then repeat > (i.e., keep choosing x until you find one less than l). Sorry, I don't understand. If the CSPRNG is very non-uniform, do you mean that using this processing the result is uniform? I doubt that could be the case. I suppose the technique is for using a uniform PRNG of a different range to produce an output uniform in the desired range 0..n-1. M. K. Shen ----------------------------- http://home.t-online.de/home/mok-kong.shen
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 26 Jan 2001 12:26:22 +0100 From: Mok-Kong Shen <mok-kong.shen@t-online.de> Message-ID: <3A715EDE.2AA55631@t-online.de> References: <3A7143D5.4B167D82@t-online.de> Newsgroups: sci.crypt Lines: 18 Mok-Kong Shen wrote: > > David Wagner wrote: > > > [snip] > Sorry, I don't understand. If the CSPRNG is very non-uniform, > do you mean that using this processing the result is uniform? > I doubt that could be the case. I suppose the technique > is for using a uniform PRNG of a different range to produce > an output uniform in the desired range 0..n-1. Apologies for the absolute nonsense that I wrote above. A CSPRNG must of course be uniform. M. K. Shen
Subject: Re: Durstenfeld Transpositions & ARC4 Date: Fri, 26 Jan 2001 06:13:38 GMT From: Benjamin Goldberg <goldbb2@earthlink.net> Message-ID: <3A711595.CBB574FF@earthlink.net> References: <94qsjr$qfk$1@slb2.atl.mindspring.net> Newsgroups: sci.crypt Lines: 53 r.e.s. wrote: [snip] > Also, is ARC4's byte-stream generator adequate as a CSPRNG? If > so, is there a straightforward way to adapt such a byte-stream > to generate PRNs uniformly distributed on 1..n? There are many ways to make a PRNG produce uniform values 1..n. Here are two. This one takes 1 bit at a time from your RNG:

    function modran(integer n) {
        integer val = 0, max = 1;
        for( ; ; ) {
            val = val * 2 + randbit();
            max = max * 2;
            if( max >= n ) {
                if( val < n )
                    return val + 1;      // you did say 1..n
                else {
                    val = val - n;       // reject: fold the excess back in
                    max = max - n;
                }
            }
        }
    }

This one uses an RNG which outputs 16 bits at a time:

    function modran(integer n) {
        integer max = 65536 - (65536 % n);
        integer val;
        do {
            val = ranno();
        } while( val >= max );
        return val % n + 1;              // +1 again, for the 1..n range
        // or: return val / (max/n) + 1;
        // Which one is better depends on the RNG.
    }

The bit-at-a-time modran is more efficient in terms of how many random bits it uses, but if your PRNG outputs many bits at a time, the second one might be faster. If your PRNG is very slow, like BBS, then the first might be faster. I would suggest testing each with the values of 'n' you expect to use, to find out which is faster. -- Most scientific innovations do not begin with "Eureka!" They begin with "That's odd. I wonder why that happened?"

Subject: Re: Producing "bit-balanced" strings efficiently for Dynamic Transposition Date: Fri, 26 Jan 2001 15:19:06 -0700 From: "Tony T. Warnock" <u091889@cic-mail.lanl.gov> Message-ID: <3A71F7DA.2915DB3A@cic-mail.lanl.gov> References: <94rvum$4dbha$1@fido.engr.sgi.com> <3a6ee71d.278780@news.powersurfr.com> Newsgroups: sci.crypt Lines: 4 Arbitrary 37-bit strings will fit into 40 bits. 2**37 is just smaller than 40!/(20!)**2. I didn't check to see if the suggested algorithm would actually work.
Subject: Re: Producing "bit-balanced" strings efficiently for Dynamic Transposition Date: Fri, 26 Jan 2001 23:21:06 GMT From: ritter@io.com (Terry Ritter) Message-ID: <3a720611.11617437@news.io.com> References: <3A71F7DA.2915DB3A@cic-mail.lanl.gov> Newsgroups: sci.crypt Lines: 44 On Fri, 26 Jan 2001 15:19:06 -0700, in <3A71F7DA.2915DB3A@cic-mail.lanl.gov>, in sci.crypt "Tony T. Warnock" <u091889@cic-mail.lanl.gov> wrote: >Arbitrary 37 bit strings will fit into 40 bits. 2**37 is just larger >than 40!/20!**2. I didn't check to see if the suggested algorithm would >actually work. As a programmer, I normally think of taking whole chars (bytes) from a file, processing them, and sending them back to a file. In contrast, a lot of the bit-balancing schemes would leave me with partial bytes. Dynamic Transposition can handle that fine, but then I have to store the result -- as integral bytes. Then I have to deal with partial-byte data when deciphering. Now, I could innovate a variable-length record structure in which each record includes a length in bits. But what I *really* want is to get bytes, process bytes, and store bytes, with no structure at all. One alternative, if I could take 16 bits to 18 in some convenient way, is to take 8 bytes (64 bits) to 9 bytes (72 bits), or any multiple thereof. But then I have to pad the plaintext to 8-byte blocks, which means I have to remove the padding somehow when deciphering, so it seems that either the padding must have a unique byte code (not possible with binary data), or I again need a structure with a bit-length value. Another alternative might be to interpose an object which gets bits from a file, puts bits to a file, and generally hides the bit-length structure. In that way we can put it out of our mind, but nothing is going to make it particularly efficient. It is at least 8x as much work as we would normally spend getting a byte, and then 8x as much putting it away. The shuffling effort should still dominate, but not by nearly as large a factor. So it seems to me that there is a lot to be said for trying to bit-balance byte data using whole bytes. --- Terry Ritter ritter@io.com http://www.io.com/~ritter/ Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
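
The interposed bits-to-bytes object Ritter describes is a standard bit buffer. A minimal C sketch of the writer side (the struct and names are illustrative, not from Ritter's code):

    #include <stdio.h>

    /* Accumulates bits MSB-first and emits whole bytes, hiding
       partial-byte boundaries from the caller. */
    typedef struct { FILE *f; unsigned acc; int nbits; } bitwriter;

    static void put_bit(bitwriter *w, unsigned bit) {
        w->acc = (w->acc << 1) | (bit & 1u);
        if (++w->nbits == 8) {
            fputc((int)w->acc, w->f);
            w->acc = 0;
            w->nbits = 0;
        }
    }

    /* Pad the final partial byte with zeros; as the post notes, the
       true bit length must then be recorded somewhere out of band. */
    static void flush_bits(bitwriter *w) {
        while (w->nbits != 0)
            put_bit(w, 0);
    }

The reader side is symmetric; per the post, this per-bit handling costs several times the work of whole-byte I/O, though the shuffling should still dominate overall.
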

Terry Ritter, his current address, and his top page.

Last updated: 2001-05-06