Huge Block Size Discussion


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!howland.erols.net!surfnet.nl!swsbe6.switch.ch!scsing.switch.ch!elna.ethz.ch!caronni
From: caronni@tik.ethz.ch (Germano Caronni)
Newsgroups: sci.crypt
Subject: Ciphers with *Huge* Block Size ?
Date: 22 Aug 1996 17:18:35 GMT
Organization: Swiss Federal Institute of Technology (ETH), Zurich, CH
Lines: 33
Message-ID: <4vi4pb$fsq@elna.ethz.ch>
Reply-To: gec@acm.org
NNTP-Posting-Host: kom30-e.ethz.ch

Hello everybody,

current data encryption techniques usually encrypt a data stream or
small blocks of data. While this makes perfect sense for communication
where you need to encrypt and decrypt 'on the fly', it is perhaps not
the optimal solution for archiving, and generally not the strongest way
to secure data.  These are just idle thoughts, and I would be quite
interested to hear what you think about it.

Imagine a data file (or semantic entity, or whatever) being encrypted
as a whole, which means that each of the output bits depends on each of
the input bits (and naturally the secret key). This way, many kinds of
attacks may be a bit less feasible, e.g. imagine doing a known-plaintext
attack on something which simply can not be known in its entirety, but of
which certain parts are well known (e.g. contents of encapsulated IP
headers, or headers of archived and encrypted mail) -- or imagine doing
a linear cryptanalysis attack when the relevant block size is about a Megabyte.

Current block cipher chaining methods (CBC, OFB, CFB, ...) do not offer
this property; the best result is that the last encrypted 'block'
depends on all previous elements.

Comments to this one?

Germano



--
<...cookie space for rent...>

Germano Caronni    caronni@tik.ee.ethz.ch    http://www.tik.ee.ethz.ch/~caronni
PGP-Key-ID:7B7AE5E1    gec@acm.org             997C6DC4AF930A5D2D5D6AEAA196C33B


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!bcm.tmc.edu!pendragon!news.msfc.nasa.gov!newsfeed.internetmci.com!howland.erols.net!math.ohio-state.edu!jussieu.fr!oleane!plug.news.pipex.net!pipex!hole.news.pipex.net!pipex!news.ukpats.org.uk!lade.news.pipex.net!pipex!tube.news.pipex.net!pipex!usenet
From: george.barwood@dial.pipex.com (George Barwood)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Thu, 22 Aug 1996 18:22:16 GMT
Organization: UUNet PIPEX server (post doesn't reflect views of UUNet PIPEX)
Lines: 22
Message-ID: <4vi8eg$b36@tube.news.pipex.net>
References: <4vi4pb$fsq@elna.ethz.ch>
NNTP-Posting-Host: al077.du.pipex.com
X-Newsreader: Forte Free Agent 1.0.82

caronni@tik.ethz.ch (Germano Caronni) wrote:

>Imagine a data file (or semantic entity, or whatever) being encrypted
>as a whole, which means that each of the output bits depends on each of
>the input bits (and naturally the secret key). This way, many kinds of
>attacks may be a bit less feasible e.g. imagine doing a known plaintext
>attack on something which simply can not be known in its entity, but of
>which certain parts are well known (e.g. contents of encapsulated IP
>headers, or headers of archived and encrypted mail) -- or imagine doing
>a linear CA attack when the relevant block size is about a Megabyte.

I think that the computation cost would be proportional to at least
the square of the block size, which for a 1Meg block might be
excessive. Most block ciphers try to be secure while running as fast
as possible, which explains typical block sizes of 64-128 bits.
(Disclaimer: I am not an expert - the above may be rubbish)



========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!chi-news.cic.net!newspump.sol.net!nntp04.primenet.com!nntp.primenet.com!mr.net!news.sgi.com!enews.sgi.com!news.mathworks.com!newsfeed.internetmci.com!in2.uu.net!news.abs.net!news.synapse.net!tanda!marc
From: marc@tanda.on.ca (Marc Thibault)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Message-ID: <5J54sD1w165w@tanda.on.ca>
Date: Fri, 23 Aug 96 08:00:51 EDT
References: <4vi8eg$b36@tube.news.pipex.net>
Distribution: world
Organization: Tanda
Lines: 35

george.barwood@dial.pipex.com (George Barwood) writes:
> I think that the computation cost would  be proportional to at least
> the square of the block size, which for a 1Meg block might be

    That's not intuitively obvious. As the block size goes up it should
    be possible to reduce the complexity of the algorithm and still
    maintain strength. If this is the case, the computational cost per
    byte would go down. On the other hand, the cost of cryptanalysis
    could go up dramatically with block size.

    It is odd, though, that full-file algorithms aren't among the pack
    and that there doesn't seem to be any literature on them - even to
    explain why they are a bad idea.

    I once put together a little code that treated a full file to a
    reversible shuffling algorithm followed by adding a strong PRN
    (really big random integer the same size as the file) and then
    another shuffling. The shuffling and PRN were functions of the
    variable-length key. It was in APL/360 and long gone, so I can't
    offer any source.

    One of the interesting things about this approach was that, since the
    shuffling is effectively a variable-length transposition, a
    single-bit change in the length of the file had dramatic effects on
    the output. I think I could have made it much more interesting by
    salting the file with something random in both content and length,
    but I was young and ignorant at the time.
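
[Editor's sketch: a rough Python rendering of the shuffle / add-PRN / shuffle idea described above.  The original was APL/360 and is long gone; random.Random here is only a stand-in for the strong, key-driven generator the scheme assumes, and the helper names are invented for illustration.]

    import random

    def keyed_permutation(key: bytes, n: int, salt: bytes) -> list:
        rng = random.Random(key + salt)      # stand-in for a strong keyed PRNG
        perm = list(range(n))
        rng.shuffle(perm)                    # a reversible, key-dependent shuffle
        return perm

    def encrypt(data: bytes, key: bytes) -> bytes:
        n = len(data)
        p1 = keyed_permutation(key, n, b"shuffle-1")
        p2 = keyed_permutation(key, n, b"shuffle-2")
        rng = random.Random(key + b"pad")
        pad = bytes(rng.randrange(256) for _ in range(n))    # the big "PRN"
        shuffled = bytes(data[p1[i]] for i in range(n))      # first shuffle
        mixed = bytes(a ^ b for a, b in zip(shuffled, pad))  # add the PRN (XOR here)
        return bytes(mixed[p2[i]] for i in range(n))         # second shuffle

[Decryption inverts the second permutation, strips the pad, then inverts the first; because the permutations depend on the file length, a change in length reshuffles everything, as noted above.]
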
    Cheers,
            Marc

              This is not a secure channel - Assume Nothing

 http://www.hookup.net/~marct
 Key fingerprint =  76 21 A3 B2 41 77 BC E8  C9 1C 74 02 80 48 A0 1A


========
Path: news.io.com!insync!uuneo.neosoft.com!news.uh.edu!swrinde!howland.erols.net!nntp04.primenet.com!nntp.primenet.com!news.cais.net!news.abs.net!aplcenmp!netnews.jhuapl.edu!usenet
From: Bryan Olson 
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Thu, 22 Aug 1996 15:30:14 -0400
Organization: Johns Hopkins Applied Physics Lab
Lines: 38
Message-ID: <321CB546.7AF8@jhuapl.edu>
References: <4vi4pb$fsq@elna.ethz.ch>
Reply-To: Bryan_Olson@jhuapl.edu
NNTP-Posting-Host: snake.jhuapl.edu
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0 (WinNT; I)

Germano Caronni wrote:

> current data encryption techniques usually encrypt a data stream or
> small blocks [...] perhaps not the optimal solution for archiving, 
> and generally not the strongest way to secure data.

> Imagine a data file [...] encrypted as a whole [...] 
> each of the output bits depends on each of
> the input bits (and naturally the secret key). This way, many kinds of
> attacks may be a bit less feasible e.g. imagine doing a known plaintext
> attack on something which simply can not be known in its entity, but of
> which certain parts are well known (e.g. contents of encapsulated IP
> headers, or headers of archived and encrypted mail) -- or imagine doing
> a linear CA attack when the relevant block size is about a Megabyte.
> 
> Current block cipher chaining methods (CBC, OFB, CFB, ...) do not offer
> this property, the best result is that the last encrypted 'block'
> depends on all previous elements.
> 

While block size should be large enough to foil exhaustive search
over blocks, there's little evidence that it needs to be larger.

If you just want every bit to depend on every other, there are
chaining techniques which do so using conventional block ciphers.
For example:

Compute a hash of every block except the last.  Encrypt the last
block using the hash as an initialization vector.  Use the 
ciphertext of the last block as an initialization vector for
the rest of the message, using CFB or CBC mode.

Now every bit depends on every other, and the whole thing can
be reversed without storing an initialization vector.  I think
Peter Gutmann used a technique similar to this in his SFS
(Secure File System).
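
[Editor's sketch: a minimal Python illustration of the chaining just described.  SHA-256 is used as a keyed keystream generator in place of a real block cipher in CFB mode; the 32-byte block size and function names are assumptions, and the point is only the dependency structure.]

    import hashlib

    BLOCK = 32  # bytes; every block is assumed to be exactly this size

    def _ks(key: bytes, iv: bytes) -> bytes:
        return hashlib.sha256(key + iv).digest()

    def _xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def encrypt(blocks: list, key: bytes) -> list:
        head, last = blocks[:-1], blocks[-1]
        h = hashlib.sha256(b"".join(head)).digest()  # hash of every block but the last
        c_last = _xor(last, _ks(key, h))             # last block keyed off that hash
        out, prev = [], c_last                       # its ciphertext seeds the chain
        for p in head:                               # CFB-style pass over the rest
            c = _xor(p, _ks(key, prev))
            out.append(c)
            prev = c
        return out + [c_last]

[To decrypt, undo the CFB pass using the last ciphertext block as the chaining value, then recompute the hash of the recovered blocks to undo the last block; as the post notes, no separate initialization vector needs to be stored.]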

--Bryan


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!swrinde!howland.erols.net!cam-news-hub1.bbnplanet.com!news.mathworks.com!newsfeed.internetmci.com!in3.uu.net!news.injersey.com!news
From: John Michener 
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: 22 Aug 1996 22:28:00 GMT
Organization: Asbury Park Press, Inc.
Lines: 20
Message-ID: <4vimtg$ak3@news.injersey.com>
References: <4vi4pb$fsq@elna.ethz.ch>
NNTP-Posting-Host: ppp064-trnt.injersey.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 1.22KIT (Windows; U; 16bit)

Look at the construction of SP block ciphers.  They typically 
involve the successive application of a diffusion mechanism and a 
set of S-boxes.  View the primitive block cipher, DES, as a very 
large S-box.  Encrypt the data with DES to mix all the bits 
within consecutive 8-byte blocks.  Transpose the data either 
bitwise, one bit per subsequent 64-bit block to expand the 
diffusion 64X, or one byte per 8-byte block to expand the diffusion 
8X.  Repeat the process.  When the expansion fills the 
super-block, repeat the process at least as many times as you have 
already done (hopefully with different DES / block keys).  I am 
afraid that the process will be very slow, and it is not obvious 
that the cryptographic efficiency is particularly high.  3DES is 
uncrackable anyway, and the super large block size certainly 
creates problems in error diffusion and propagation.
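
[Editor's sketch: a rough Python rendering of one round of the construction above, using the byte-per-block transposition.  encrypt_block(key, block8) is a placeholder for any 64-bit block cipher (the post suggests DES) and is not provided here; the superblock length is assumed to be a multiple of 64 bytes.]

    def byte_transpose(superblock: bytes) -> bytearray:
        # 8x8 byte transpose within each group of 8 consecutive 8-byte blocks:
        # byte c of block r moves to byte r of block c, so each block's bytes
        # land in 8 different blocks for the next encryption layer.
        out = bytearray(len(superblock))
        for g in range(0, len(superblock), 64):
            for row in range(8):
                for col in range(8):
                    out[g + col * 8 + row] = superblock[g + row * 8 + col]
        return out

    def sp_round(data: bytes, key, encrypt_block) -> bytearray:
        out = bytearray()
        for i in range(0, len(data), 8):             # substitution layer: cipher per block
            out += encrypt_block(key, bytes(data[i:i + 8]))
        return byte_transpose(out)                   # permutation layer: spread the bytes

[Each round widens the diffusion by a factor of 8, so repeated rounds eventually cover the whole super-block, after which the post suggests at least as many further rounds.]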

To do the job right, you have to analyze what is happening from 
the perspective of known cryptanalytic techniques, such as linear or 
differential cryptanalysis.  The approach I mentioned above is 
very rough.



========
Path: news.io.com!news2.cais.net!news.cais.net!hunter.premier.net!news-res.gsl.net!news.gsl.net!news.mathworks.com!newsfeed.internetmci.com!in3.uu.net!ott.istar!istar.net!van.istar!west.istar!strat.enernet.com!cuugnet!not-for-mail
From: millerl@cuug.ab.ca (Lloyd Miller)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: 22 Aug 1996 17:16:48 -0600
Organization: Calgary UNIX Users' Group
Lines: 16
Message-ID: <4vipp1$2j8@hp.cuug.ab.ca>
References: <4vi4pb$fsq@elna.ethz.ch>
NNTP-Posting-Host: hp.cuug.ab.ca
X-Newsreader: TIN [version 1.2 PL2]

Germano Caronni (caronni@tik.ethz.ch) wrote:
: Imagine a data file (or semantic entity, or whatever) being encrypted
: as a whole, which means that each of the output bits depends on each of
: the input bits (and naturally the secret key). This way, many kinds of
: attacks may be a bit less feasible e.g. imagine doing a known plaintext
: attack on something which simply can not be known in its entity, but of
: which certain parts are well known (e.g. contents of encapsulated IP
: headers, or headers of archived and encrypted mail) -- or imagine doing
: a linear CA attack when the relevant block size is about a Megabyte.

: Current block cipher chaining methods (CBC, OFB, CFB, ...) do not offer
: this property, the best result is that the last encrypted 'block'
: depends on all previous elements.

You just do two passes with an OFB-type system. Do the second pass
backwards. I think one of the DOS device encryptors does this.
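
[Editor's sketch: a Python illustration of the two-pass idea, one chained pass run forward and the same pass run over the reversed result.  The pass below feeds each ciphertext byte back into a SHA-256 keystream (CFB-like) so a change anywhere propagates to everything after it; a pure OFB keystream would not propagate plaintext changes, so the feedback is an editorial assumption about what the "OFB type system" is doing.  Byte-at-a-time hashing is grossly slow and only for illustration.]

    import hashlib

    def chained_pass(data: bytes, key: bytes) -> bytes:
        out, prev = bytearray(), b"\x00"
        for b in data:
            k = hashlib.sha256(key + prev).digest()[0]   # 1-byte keystream with feedback
            c = b ^ k
            out.append(c)
            prev = bytes([c])
        return bytes(out)

    def encrypt(data: bytes, key: bytes) -> bytes:
        forward = chained_pass(data, key + b"fwd")        # each byte depends on all before it
        return chained_pass(forward[::-1], key + b"rev")  # second pass, run backwards

[After the two passes every output byte depends on every input byte; both passes are invertible, so decryption simply runs them in reverse order.]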


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!howland.erols.net!newsxfer2.itd.umich.edu!bloom-beacon.mit.edu!pad-thai.cam.ov.com!gza-client1.cam.ov.com!not-for-mail
From: don@cam.ov.com (Donald T. Davis)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: 22 Aug 1996 20:49:48 -0400
Organization: OpenVision Technologies, Inc.
Lines: 16
Message-ID: <4viv7c$6rk@gza-client1.cam.ov.com>
References: <4vi4pb$fsq@elna.ethz.ch>
NNTP-Posting-Host: gza-client1.cam.ov.com


gec@acm.org writes:
> Imagine a data file (or semantic entity, or whatever) being encrypted
> as a whole, which means that each of the output bits depends on each of the
> input bits (and naturally the secret key). ... Current block cipher chaining
> methods (CBC, OFB, CFB, ...) do not offer this property, the best result is
> that the last encrypted 'block' depends on all previous elements.

before encrypting for the archive, prepend the file's
message-digest to the plaintext. this will make every
ciphertext bit depend on every plaintext bit. it won't
work on connections or streamed data, but should work
fine for archives and file encryption.
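
[Editor's sketch: the digest-prepend idea in Python.  encrypt_chained is a placeholder argument for any chained mode (CBC, CFB, ...), not a specific library call; SHA-256 stands in for "the file's message-digest".]

    import hashlib

    def seal_for_archive(plaintext: bytes, key: bytes, encrypt_chained) -> bytes:
        digest = hashlib.sha256(plaintext).digest()   # digest of the whole file
        # Because the digest heads the chain, the first ciphertext block depends
        # on every plaintext bit, and chaining carries that dependence forward.
        return encrypt_chained(key, digest + plaintext)

[On decryption the recovered digest can also be re-checked against the recovered file, which doubles as an integrity check.]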

				-don davis, boston


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!math.ohio-state.edu!uwm.edu!newsspool.doit.wisc.edu!news.doit.wisc.edu!news
From: Medical Electronics Lab 
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Fri, 23 Aug 1996 11:20:55 -0500
Organization: Dept. Neurophysiology, U. Wisconsin
Lines: 28
Message-ID: <321DDA67.5952@neurophys.wisc.edu>
References: <4vi4pb$fsq@elna.ethz.ch>
NNTP-Posting-Host: pcaf.neurophys.wisc.edu
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 2.02 (WinNT; I)

Germano Caronni wrote:
> Imagine a data file (or semantic entity, or whatever) being encrypted
> as a whole, which means that each of the output bits depends on each of
> the input bits (and naturally the secret key). This way, many kinds of
> attacks may be a bit less feasible e.g. imagine doing a known plaintext
> attack on something which simply can not be known in its entity, but of
> which certain parts are well known (e.g. contents of encapsulated IP
> headers, or headers of archived and encrypted mail) -- or imagine doing
> a linear CA attack when the relevant block size is about a Megabyte.
> 
> Current block cipher chaining methods (CBC, OFB, CFB, ...) do not offer
> this property, the best result is that the last encrypted 'block'
> depends on all previous elements.
> 
> Comments to this one?

One comment pointed out that the encryption time expands with the
block size.  I did look at this once (very briefly) with the idea
of using blocks on the order of 32K or so using elliptic curve math.
It can be done.  It would take a depressingly long time to encrypt
a single block, many minutes on a 400 MHz Alpha (as in >100).  It
might be possible to use many processors in parallel and go faster,
but the expense is ridiculous and the enhancement of security
minuscule compared to other options.  It was fun to think about
tho ;-)

Patience, persistence, truth,
Dr. mike


========
Path: news.io.com!insync!news.ios.com!news2.cais.net!news.cais.net!tezcat!cam-news-hub1.bbnplanet.com!news.mathworks.com!newsfeed.internetmci.com!in3.uu.net!info.htcomp.net!NewsWatcher!user
From: wtshaw@htcomp.net (W T Shaw)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: 23 Aug 1996 18:44:23 GMT
Organization: Another Netscape News Server User
Lines: 23
Message-ID: <wtshaw-2308961347300001@207.17.188.113>
References: <4vi4pb$fsq@elna.ethz.ch> <321DDA67.5952@neurophys.wisc.edu>
NNTP-Posting-Host: 207.17.188.113

In article <321DDA67.5952@neurophys.wisc.edu>, Medical Electronics Lab
<rosing@neurophys.wisc.edu> wrote:
>
> One comment pointed out that the encryption time expands with the
> block size.  I did look at this once (very briefly) with the idea
> of using blocks on the order of 32K or so using elliptic curve math.
> It can be done.  It would take a depressingly long time to encrypt
> a single block, many minutes on a 400 MHz Alpha (as in >100).  It
> might be possible to use many processors in parallel and go faster,
> but the expense is ridiculous and the enhancement of security
> miniscule compared to other options.  It was fun to think about
> tho ;-)
>
This depends on the algorithm; the one I use with large blocks is pleasingly
fast: not the speed of light, but a happy medium that is still slow enough
to let you watch the dynamic processing being done.  It's a great demo that
I enjoy doing.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
wtshaw@htcomp.net   Mac Crypto Programs
 You should at least know how to use ROT13.
"Fhpprff vf n Wbhearl, Abg n Qrfgvangvba."
          http://www.htcomp.net/wts/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/


========
Path: news.io.com!usenet
From: Terry Ritter 
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Fri, 23 Aug 1996 11:51:31 -0500
Organization: Ritter Software Engineering
Lines: 72
Message-ID: <321DE177.2193@io.com>
Reply-To: ritter@io.com
NNTP-Posting-Host: dialup-01-111.austin.io.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0 (Win95; I)

In <321CB546.7AF8@jhuapl.edu> Bryan Olson <Bryan_Olson@jhuapl.edu>
 writes:

>Germano Caronni wrote:
>
>> current data encryption techniques usually encrypt a data stream or
>> small blocks [...] perhaps not the optimal solution for archiving,
>> and generally not the strongest way to secure data.
>
>> Imagine a data file [...] encrypted as a whole [...]
>> each of the output bits depends on each of
>> the input bits (and naturally the secret key). This way, many kinds of
>> attacks may be a bit less feasible e.g. imagine doing a known plaintext
>> attack on something which simply can not be known in its entity, but of
>> which certain parts are well known (e.g. contents of encapsulated IP
>> headers, or headers of archived and encrypted mail) -- or imagine doing
>> a linear CA attack when the relevant block size is about a Megabyte.

 I have been working on this for some time.  I have developed
 Fenced DES and Variable Size Block Cipher technologies, both of
 which are patent pending, both of which are described in
 excruciating detail on my pages:

      http://www.io.com/~ritter/


>While block size should be large enough to foil exhaustive search
>over blocks, there's little evidence that it needs to be larger.

 First, in cryptography, it is *not* appropriate to demand proof of
 weakness before adding strength to a cipher.  This is a fundamental
 difference between real cryptography and ivory-tower Science.

 Next, it is clear that codebook attacks can start to be effective
 when we get two ciphertext blocks which are the same.  If we assume
 an absolutely flat plaintext distribution (we assume cipher block
 chain (CBC) mode, or other plaintext randomizer) and 64-bit blocks,
 we can expect to see this with about 2**32 blocks, and rapidly
 worsening thereafter.  This is 2**35 bytes or about 34 GB.  While
 this may seem huge now, we will soon see disk drives of this size,
 and in two decades (a reasonable time to expect a new cipher design
 to last), 34 GB will be humorously small.  Yes, we can re-key and
 continue for larger data, but if we have to consider this (and any
 modern design must), we *do* have evidence that the block size
 could usefully be larger.
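
[Editor's note: the arithmetic behind the paragraph above, as a small Python check.  With 64-bit blocks and effectively random plaintext, a repeated ciphertext block is expected around the square root of the number of possible blocks (the birthday bound).]

    block_bits = 64
    blocks_to_repeat = 2 ** (block_bits // 2)           # about 2**32 blocks
    bytes_ciphered = blocks_to_repeat * (block_bits // 8)
    print(blocks_to_repeat)                             # 4294967296
    print(bytes_ciphered, bytes_ciphered / 10**9)       # 34359738368, ~34.4 GB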

 In fact, the whole need to use CBC (or other plaintext randomization)
 is based on the small amount of language entropy in 8 character
 blocks:  If we assume 1.5 bits of entropy per character, we are
 looking at 12 bits of entropy -- effectively only 4096 choices
 (times 8, if we consider each position in the block).  No wonder
 that electronic codebook (ECB) mode is deprecated.

 On the other hand, a 200-byte block of plaintext language should
 have about 300 bits of entropy, so we could expect to find two the
 same in about 2**150 blocks.  Thus, if we have a large block, ECB
 becomes practical, which could be useful in certain situations,
 and this *is* evidence that the ciphering block size could
 usefully be larger.
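
[Editor's note: the same entropy figures made explicit, taking the post's assumption of about 1.5 bits of entropy per character of plaintext language.]

    bits_per_char = 1.5
    print(8 * bits_per_char)     # 12.0  -> only about 2**12 = 4096 likely 8-char blocks
    print(200 * bits_per_char)   # 300.0 -> repeats expected only near 2**(300/2) = 2**150 blocks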

 And we have yet to consider the potential security advantage of
 using dynamically pseudo-random size blocks -- along with random
 padding -- in messages.  (I have implemented practical ciphers
 with dynamically-variable blocks, described them on sci.crypt,
 and archived the messages on my pages.) The lack of fixed block
 boundaries presents serious problems for any current attack on
 a block cipher.  So, yet again, we see evidence that ciphering
 block size could usefully be larger, and also could usefully be
 dynamically variable.

 ---
 Terry Ritter   ritter@io.com   http://www.io.com/~ritter/


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!math.ohio-state.edu!uwm.edu!newsfeed.internetmci.com!newsxfer2.itd.umich.edu!uunet!in2.uu.net!news.new-york.net!news.columbia.edu!news.cs.columbia.edu!versed.smarts.com!usenet
From: Jerry Leichter 
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Fri, 23 Aug 1996 14:05:51 -0400
Organization: System Management ARTS
Lines: 89
Message-ID: <321DF2FF.7114@smarts.com>
References: <321DE177.2193@io.com>
NNTP-Posting-Host: just.smarts.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0b5aGold (X11; I; SunOS 5.5 sun4d)
To: ritter@io.com

Terry Ritter wrote:
> [Interesting examples of uses for ciphers with large, or even
> variable, block sizes.]

The counter-arguments that have held sway are more application-oriented
and historical than theoretical:

	1.  Hardware implementations of large block sizes are more
		difficult.  *How* difficult, of course, changes as
		technology advances.  Certainly when DES was first
		defined, it was at the limits of reasonable-cost
		hardware.  Today you could certainly do a much "wider"
		chip.

	2.  The biggest demand is for encryption of communication
		streams.  (We know how to do physical security, so
		keeping disks safe is *relatively* easier.  But it's
		very difficult to keep "wires" secure except in unusual
		circumstances.)

		This adds a number of constraints.  For example, stream
		mode operation is highly desirable; you can't afford
		to buffer large amounts of information just for
		encryption.  (The original question in this thread
		concerned encryption modes in which the output depended
		on the *entire* input, no matter how large.  Such a
		mode would be entirely unusable in a communications
		framework.)

	3.  The larger the block, the larger the average space that
		is wasted to fill out short blocks.  In a communications
		framework, many messages are quite short.  A really
		large block could end up devoting more space to fill
		than to useful message!

 	4.  All other things being equal, a large-block cipher is
		probably going to be slower than a small-block cipher
		on a per-bit basis:  If each bit of the output depends
		on each bit of the input, then the minimum possible
		operations per bit doubles when you double the block
		size.  (This kind of argument is necessarily vague and
		incomplete, since with a larger block size you may be
		able to get away with, say, fewer rounds for equivalent
		security.  But I know of no ciphers that actually come
		out ahead for large block sizes on this basis.)

	5.  Many of these issues grow less important with each passing
		year as chips get faster (though encryption of traffic
		on the highest speed lines remains problematical -
		transmission data rates are going up faster than CPU
		speeds these days, and the trend lines indicate that
		this will continue.  If it does, we'll have to go for
		parallelism of some sort - perhaps just pipelining is
		enough - and that's much easier to do with smaller
		block sizes.)  However, there is an existing
		infrastructure now.  Block sizes are often "frozen in" to
		existing designs and difficult and expensive to change.
		That's one reason people like triple-DES:  It can be
		used as a drop-in replacement for DES, with effects
		only on the relatively low-frequency activity of key
		distribution.

	6.  Ritter's arguments are strongest when applied to file (or
		disk) encryption, and larger block sizes might be very
		appropriate there (up to some limit, since you don't
		want the cipher block size to force you to transfer
		significantly more data to and from the disk).  However,
		there is also a strong pressure to standardize on one
		encryption technology - DES today, who-knows-what
		tomorrow.  If one decides that it's a good idea to use
		the same cryptosystem for both files and communications
		(which has its pluses, like the ability to use fast
		encryption chips developed for communications in disk
		controllers), then constraints on the communications
		side "leak through" to the file side, even if they
		aren't really relevant there.

		(Of course, from a security point of view, the more
		strong cryptosystems you have, the better.  All the
		available evidence indicates that NSA produces many
		different cryptosystems for different purposes.  That's
		just prudent design.  The same is actually true for any
		component with security implications - a security hole
		in, say, Netscape implies a vulnerability on the part
		of almost all Web users these days.  Nevertheless, the
		pressures for standardization are so great that they are
		often impossible to resist.)

							-- Jerry

[Ritter responds.]

========
Path: news.io.com!insync!uuneo.neosoft.com!imci3!newsfeed.internetmci.com!in2.uu.net!info.htcomp.net!NewsWatcher!user
From: wtshaw@htcomp.net (W T Shaw)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: 23 Aug 1996 19:00:57 GMT
Organization: Another Netscape News Server User
Lines: 73
Message-ID: <wtshaw-2308961403560001@207.17.188.113>
References: <321DE177.2193@io.com>
NNTP-Posting-Host: 207.17.188.113

In article <321DE177.2193@io.com>, ritter@io.com wrote:
>
>  I have been working on this for some time.  I have developed
>  Fenced DES and Variable Size Block Cipher technologies, both of
>  which are patent pending, both of which are described in
>  excruciating detail on my pages:
>
>       http://www.io.com/~ritter/
>
>
> >While block size should be large enough to foil exhaustive search
> >over blocks, there's little evidence that it needs to be larger.
>
>  First, in cryptography, it is *not* appropriate to demand proof of
>  weakness before adding strength to a cipher.  This is a fundamental
>  difference between real cryptography and ivory-tower Science.
>
>  Next, it is clear that codebook attacks can start to be effective
>  when we get two ciphertext blocks which are the same.  If we assume
>  an absolutely flat plaintext distribution (we assume cipher block
>  chain (CBC) mode, or other plaintext randomizer) and 64-bit blocks,
>  we can expect to see this with about 2**32 blocks, and rapidly
>  worsening thereafter.  This is 2**35 bytes or about 34 GB.  While
>  this may seem huge now, we will soon see disk drives of this size,
>  and in two decades (a reasonable time to expect a new cipher design
>  to last), 34 GB will be humorously small.  Yes, we can re-key and
>  continue for larger data, but if we have to consider this (and any
>  modern design must), we *do* have evidence that the block size
>  could usefully be larger.
> 
>  In fact, the whole need to use CBC (or other plaintext randomization)
>  is based on the small amount of language entropy in 8 character
>  blocks:  If we assume 1.5 bits of entropy per character, we are
>  looking at 12 bits of entropy -- effectively only 4096 choices 
>  (times 8, if we consider each position in the block).  No wonder
>  that electronic codebook (ECB) mode is deprecated.
>
>  On the other hand, a 200-byte block of plaintext language should
>  have about 300 bits of entropy, so we could expect to find two the
>  same in about 2**150 blocks.  Thus, if we have a large block, ECB
>  becomes practical, which could be useful in certain situations,
>  and this *is* evidence that the ciphering block size could
>  usefully be larger.
> 
>  And we have yet to consider the potential security advantage of
>  using dynamically pseudo-random size blocks -- along with random
>  padding -- in messages.  (I have implemented practical ciphers
>  with dynamically-variable blocks, described them on sci.crypt,
>  and archived the messages on my pages.) The lack of fixed block
>  boundaries presents serious problems for any current attack on
>  a block cipher.  So, yet again, we see evidence that ciphering
>  block size could usefully be larger, and also could usefully be
>  dynamically variable.
> 
>  ---
>  Terry Ritter   ritter@io.com   http://www.io.com/~ritter/

By keeping block size as small as is common, the strengths of the
algorithms are crippled so as to make it difficult to make them
increasingly stronger.

Close examination of Terry's algorithms will reveal the achievement
therein.  Good, new algorithms, such as these, would demand entirely new
cryptanalytic efforts, thus making them less vulnerable to immediate
attack.  The quantum leap in size of the blocks also greatly complicates
the matter for analysis.  There is tremendous potential for commercial
application of these algorithms.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
wtshaw@htcomp.net   Mac Crypto Programs
 You should at least know how to use ROT13.
"Fhpprff vf n Wbhearl, Abg n Qrfgvangvba."
          http://www.htcomp.net/wts/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!howland.erols.net!newsfeed.internetmci.com!in3.uu.net!news.abs.net!aplcenmp!netnews.jhuapl.edu!usenet
From: Bryan Olson 
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Mon, 26 Aug 1996 17:25:09 -0400
Organization: Johns Hopkins Applied Physics Lab
Lines: 27
Message-ID: <32221635.1686@jhuapl.edu>
References: <321DE177.2193@io.com> <wtshaw-2308961403560001@207.17.188.113>
Reply-To: Bryan_Olson@jhuapl.edu
NNTP-Posting-Host: snake.jhuapl.edu
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0 (WinNT; I)

W T Shaw wrote:
>
> In article <321DE177.2193@io.com>, ritter@io.com (Terry Ritter)
> wrote:
> >
> >  I have been working on this for some time.  I have developed
> >  Fenced DES and Variable Size Block Cipher technologies [...]

> Close examination of Terry's algorithms will reveal the achievement
> therein.  Good, new algorithms, such as these, would demand entirely new
> cryptoanalytical efforts, thus making them less vulnerable to immediate
> attack.  The quantum leap in size of the blocks also greatly complicates
> the matter for analysis.  There is tremendous potential for commercial
> application of these algorithms.

I'd invite anyone who has closely examined Terry's "Fenced DES"
to comment on the attack I proposed in another thread of this
same topic.  If I've understood Terry's algorithm correctly, the
attack should work, and I may write up a better exposition for
the research group.

The attack doesn't use any really new or novel methods, and the
large block seems to make it easier.  I still agree that large
blocks could have great security advantages, but I would not
trade per-bit confusion/diffusion for block size.

--Bryan


========
Path: news.io.com!news2.cais.net!news.cais.net!tezcat!cam-news-hub1.bbnplanet.com!news.mathworks.com!news.kei.com!newsfeed.internetmci.com!in2.uu.net!news.abs.net!aplcenmp!netnews.jhuapl.edu!usenet
From: Bryan Olson 
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Fri, 23 Aug 1996 15:46:41 -0400
Organization: Johns Hopkins Applied Physics Lab
Lines: 40
Message-ID: <321E0AA1.4517@jhuapl.edu>
References: <321DE177.2193@io.com>
Reply-To: Bryan_Olson@jhuapl.edu
NNTP-Posting-Host: snake.jhuapl.edu
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0 (WinNT; I)

Terry Ritter wrote:
>
> In <321CB546.7AF8@jhuapl.edu> Bryan Olson <Bryan_Olson@jhuapl.edu>
>  writes:
>
>
> >While block size should be large enough to foil exhaustive search
> >over blocks, there's little evidence that it needs to be larger.
>
>  First, in cryptography, it is *not* appropriate to demand proof of
>  weakness before adding strength to a cipher.  This is a fundamental
>  difference between real cryptography and ivory-tower Science.
>

No argument there (well, except for the academic-bashing).
I'm not against large blocks.  I'm pointing out that no one
has shown they're more secure. The advantages that Germano
cited, and most that you cite, can be achieved by chaining
smaller blocks.

I know you've devised several large-block ciphers, but your
security results are far too speculative.  One of the basic
principles in block-cipher design is that each key bit and
data bit should interact with the others many times in
internal transformations.  Large-block ciphers seem to
require more operations to achieve the same level of 
diffusion, and consequently most settle for less.

I suspect that there is a security advantage to larger blocks.
The same level of diffusion ought to be harder to reverse in
a larger block.  No, I can't prove it.

Finally, I don't like designs which work extensively on 
isolated sections of the key and/or data space.  DES shows
good design in that all the key bits come into play 
repeatedly.  "Triple DES" and "Fenced DES" choose to trust
in some unknown properties of the cipher, rather than follow 
the established principles of its design.

--Bryan


========
Path: news.io.com!usenet
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Sat, 24 Aug 1996 07:55:20 GMT
Lines: 140
Message-ID: <4vmcm7$8jd@nntp-1.io.com>
References: <321DE177.2193@io.com> <321E0AA1.4517@jhuapl.edu>
NNTP-Posting-Host: dialup-01-088.austin.io.com
X-Newsreader: Forte Free Agent 1.0.82


 In <321E0AA1.4517@jhuapl.edu> Bryan Olson <Bryan_Olson@jhuapl.edu>
 writes:

>I'm not against large blocks.  I'm pointing out that no one
>has shown they're more secure. The advantages that Germano
>cited, and most that you cite, can be achieved by chaining
>smaller blocks.

 The very need to chain small blocks is itself mainly a consequence
 of tiny block size, and its general use an admission of the known
 weakness of small blocks.  With a large block, we generally have
 sufficient language entropy to safely cipher in electronic
 codebook mode (ECB).  (One might argue for chaining as a
 modification-detection mechanism, but nowadays a cryptographic
 hash handles this.)


>I know you've devised several large-block ciphers, but your
>security results are far too speculative.

 Fine.  What would you like to see?

 Are you convinced about the strength of DES?  If so, what evidence
 convinced you?  If you are instead convinced by the testimony of
 various crypto gods, there is not much I care to do about that.


>One of the basic
>principles in block-cipher design is that each key bit and
>data bit should interact with the others many times in
>internal transformations.

 No, this is not strictly true.

 The principle you are looking for is called "avalanche," and states
 a *goal* that every output bit should be "affected" by every input
 bit.  (This is no more than a consequence of the observation that
 a block cipher is a machine which attempts to emulate a Simple
 Substitution table of impractical size.  Every Simple Substitution
 will avalanche.)

 The testable result is that a change on any input bit should change
 (almost) half of the output bits, on average.  Any implication that
 one can only get such a result if bits *explicitly* interact "many
 times" is not warranted, as far as I know.


>Large-block ciphers seem to
>require more operations to achieve the same level of
>diffusion, and consequently most settle for less.

 No.  Modern mixing guarantees distribution of any input change to
 every substitution in the next layer in n log n operations.
 (This is either log n layers of n-byte multi-precision mixing, or
 log n layers of byte-by-byte mixing.)  Feistel mixing gives no such
 guarantees, and, indeed, virtually requires experiment to decide
 how many rounds are "enough."  (Authors of similar Feistel ciphers
 often let the user decide how many rounds are enough, as though
 users will have the background to do this.)
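
[Editor's sketch: an illustration of the "n log n" claim, not of Ritter's actual Balanced Block Mixing.  A butterfly network (the same wiring as an FFT) touches each of n elements once per layer, and after log2(n) layers every input position has reached every output position.  The 2x2 byte mixer below is a generic invertible example chosen for the sketch.]

    def mix_pair(a: int, b: int) -> tuple:
        # determinant-1 matrix mod 256, so the pair is invertible and both
        # outputs depend on both inputs
        return (a + b) % 256, (a + 2 * b) % 256

    def mix_block(data: list) -> list:
        n, span = len(data), 1           # n byte values, n a power of two
        while span < n:                  # log2(n) layers of n/2 pairings each
            out = list(data)
            for i in range(n):
                if (i & span) == 0:      # i is the low half of the pair (i, i ^ span)
                    out[i], out[i ^ span] = mix_pair(data[i], data[i ^ span])
            data, span = out, span * 2
        return data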

 The use of keyed invertible substitutions guarantees that *any*
 change in a substitution input value *will* avalanche the output
 from the substitution.  (The use of keyed substitutions also
 protects against Differential and Linear attacks, since the
 transformation is keyed and not known externally.)  Again, Feistel
 operations have no guarantees.

 The separate use of balanced, complete mixing layers and a "fencing"
 layer of keyed, invertible substitutions guarantees avalanche
 across large blocks.  The use of another set of layers protects
 against various "meet in the middle" or "fix in the middle"
 attacks (and may not always be necessary).  The use of a fencing
 layer before linear mixing protects the mixing, provided that
 the subsequent ciphering layers have not been exposed.


>Finally, I don't like designs which work extensively on
>isolated sections of the key and/or data space.

 Then you should *love* my ciphers.  I place heavy emphasis on
 keying substitution tables.  This is done by shuffling under
 the control of a substantial RNG (typically, a 992 bit Additive
 RNG).  The RNG is initialized by 32 31-bit CRC's using different
 primitives each time.  This means that every entry in every table
 is affected by every bit in the key.
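
[Editor's sketch: keying an invertible substitution table by shuffling it under a key-driven generator.  The post describes a 992-bit Additive RNG seeded from 32 CRCs of the key; here random.Random seeded from a hash of the key merely stands in for that generator, so that every table entry depends on every key bit.]

    import hashlib, random

    def keyed_table(key: bytes, table_id: int) -> list:
        seed = hashlib.sha256(key + table_id.to_bytes(4, "big")).digest()
        rng = random.Random(seed)            # stand-in for the keyed RNG
        table = list(range(256))             # start from the identity substitution
        rng.shuffle(table)                   # Fisher-Yates shuffle under the keyed RNG
        return table                         # a keyed, invertible 8-bit substitution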

 Then, the guaranteed mixing in these designs means that every
 input bit "affects" *every* substitution (each of which has been
 affected by every key bit) in two of three layers.

 I can provide much *better* guarantees of key interaction than
 anything I've seen on the DES design.


>DES shows
>good design in that all the key bits come into play
>repeatedly.

 This sounds about like: "DES shows good design because it is
 very like DES, which is known to be a good design."  Not only
 is this circular, but we do *not* *know* that DES *is* a good
 design.  In fact, we actually do know that DES is obsolete.

 While DES does repeatedly use the key bits, the DES design does
 *not* show that this is the way ciphering *must* be done, or even
 *should* be done, to achieve strength.  An example of one design
 is not a good reference for ciphers built on fundamentally different
 principles of operation, even if similar results are desired.

 As a matter of fact, this common design concept "let's just make
 something pretty much like DES, only a little better" is very
 seriously flawed due to a lack of known mathematical basis for
 Feistel-type designs.  The fact that many do this does not make it
 a good design technique.


>"Triple DES" and "Fenced DES" choose to trust
  ^^^^^^^^^^
 ("Variable Size Block Ciphers"?)

>in some unknown properties of the cipher, rather than follow
>the established principles of its design.

 Your so-called "established principles" do not, alas, begin to
 provide the mathematical guarantees of operation provided by
 Fencing and Mixing cipher designs.  It seems strange you would
 bring this up, for it is the conventional wisdom of Feistel
 ciphering which is weak here, not these designs.


>
>--Bryan
>

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!howland.erols.net!cam-news-hub1.bbnplanet.com!cpk-news-hub1.bbnplanet.com!cpk-news-feed1.bbnplanet.com!netnews.jhuapl.edu!usenet
From: Bryan Olson 
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Sun, 25 Aug 1996 21:36:45 -0400
Organization: Johns Hopkins Applied Physics Lab
Lines: 229
Message-ID: <3220FFAD.27BB@jhuapl.edu>
References: <321DE177.2193@io.com> <321E0AA1.4517@jhuapl.edu> <4vmcm7$8jd@nntp-1.io.com>
Reply-To: Bryan_Olson@jhuapl.edu
NNTP-Posting-Host: snake.jhuapl.edu
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0 (WinNT; I)

Terry Ritter wrote:
>
>  In <321E0AA1.4517@jhuapl.edu> Bryan Olson <Bryan_Olson@jhuapl.edu>
>  writes:
>

> >I know you've devised several large-block ciphers, but your
> >security results are far too speculative.
>
>  Fine.  What would you like to see?
>

I'd like to see fewer proposed ciphers with much more careful
analysis.  I took another look at your "Fenced DES" on your web
pages (I was happy to see you kept my critical posts too).  You
say,
  "If we assume that the internal block mixing is indeed
   effective, the overall strength of 4x Fenced DES is at least
   four times that of 1x Fenced DES, for a total of 480 bits."

How carefully did you analyze your own cipher before you proposed
it?  Are you saying that exhausting a 480 bit space was the best
attack you could come up with?  Would that include an on-line
selected plaintext attack?

(To follow the next section, readers will have to understand
Ritter's "Fenced DES".  See: http://www.io.com/~ritter/FENCED.HTM
)

You actually start on an attack that I think could be productive,
but you seem to bring it up only to discount it.

    (Of course, for substantial input changes, it is possible to
    generate values which will leave one or more of the DES
    operations unaffected, although this is quite unlikely to
    happen by chance. In the mixing transform, such a circumstance
    requires specific related 128-bit values on each of the two
    input ports, and these values are hidden behind the input
    substitutions. We could manage to keep a DES operation
    unaffected if we knew the arrangement of the values in all the
    input substitutions, but that uncertainty is part of the 
    strength of the cipher. And it seems unlikely that we could 
    tell from the outside that we were successful -- that fewer 
    than four DES operations have avalanched -- because any 
    avalanche will enter the output block mixing and so be
    reflected  over the whole width of the large block, with the
    specific resulting values hidden by the output substitutions.) 

In your block mixing functions, it seems that a change of one bit
position in either half of the input can change only the output bit 
in the same position and the one to the immediate left (circularly).
Am I wrong on that?  If so, the change between a plaintext A and A'
needed to make three of the internal DES applications receive the 
same input are not all that "substantial".  Even with the 
substitutions we could certainly search for them.

I can pick four bytes that will be mixed only with each other in
both levels of pre-DES mixing.  In all 2^32 four byte values, there
should be several (maybe 64, I haven't ground it out) pairs of A and A' 
that result in a 0 difference for inputs of three of the DES 
applications.   I'll call A and A' a "match" if they have that property.

You say it "seems unlikely that we could tell from the outside that we 
were successful" in finding such a match.  If A and A' are a match, I
can immediately find many other matches by changing bytes which don't
interact with the critical four.  That gives me a good space to search
for any statistical anomaly which would indicate a match.

So what anomaly do I search for?

Let's turn our attention to an output of the DES boxes.  If I have a 
match, three of them are the same under A as under A'.  In your analysis
you assume the output of the fourth will be unrelated between A and A',
and this will make all the outputs appear unrelated.  But what if just
some output bits from this S box turn out to be the same under A and A',
and in a fortuitous combination?  Specifically:
    1. One of the eight bytes (on the byte boundary)
    2. The two most significant bits in the byte to the right of 1.

If that happens, then all 4 DES outputs will have 10 bit positions
which are the same under A as under A'.  The block mixing will preserve
8 of them, lined up with a single substitution at the end.  Thus in the
output, we'll see four bytes, each three bytes apart, which are
identical under A and A'.

By chance, the four identical bytes should appear once in 2^24 pairs A
and A', for each of the 8 byte positions.  Conditions 1 and 2 (above)
should hold about once in 2^10 pairs.  If they hold under a non-match,
then the four-byte coincidence still has the same probability, but if
they hold under a match, then the coincidence is a certainty.

So if my four critical bytes do _not_ force a match, I expect to see the
four-byte pattern once in 2^21 pairs.  If they are a match, I'll see it
once in 2^7 pairs, where I form the pairs by holding these four input
bytes constant and varying others (not immediately to their circular right).
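
[Editor's note: one reading of the arithmetic above, as a small Python check; the factor of 8 comes from the eight byte positions where the repeated four-byte pattern could appear.]

    from math import log2

    p_bytes_by_chance = 2 ** -24   # four specific output bytes repeating by chance
    p_conditions      = 2 ** -10   # the 10 fortuitous output bits lining up
    positions         = 8

    p_nonmatch = positions * p_bytes_by_chance   # pattern rate when not a match
    p_match    = positions * p_conditions        # pattern rate when a match (then certain)
    print(log2(p_nonmatch), log2(p_match))       # -21.0  -7.0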

So I have to search over a 2^32 space, and run a few hundred pairs for
each to find such a match.  Once I've found it, I have a lot of
information about the four permutations which formed my four critical
bytes.  Repeating the process to find all the matches over all 32
permutations, I should be able to break the initial permutations.
Looking at what happens in the output, I can probably infer a lot about
the final permutations too.

That's as far as I'll take the attack for now.  Perhaps I've 
misunderstood something about the design; please let me know if any
of the steps I've described is impossible.  I'll elaborate if parts of
my explanation are unclear.  If as much as I've described can be done,
I certainly think it's a strong enough toe-hold for the analyst that 
Fenced DES is dead.


Back to the thread...

>  Are you convinced about the strength of DES?

Absolutely, positively not.  I'm convinced of its weakness, and
I don't think it should be used as a component in a larger cipher.

> >One of the basic
> >principles in block-cipher design is that each key bit and
> >data bit should interact with the others many times in
> >internal transformations.
> 
>  No, this is not strictly true.
> 
>  The principle you are looking for is called "avalanche," and states
>  a *goal* that every output bit should be "affected" by every input
>  bit.  (This is no more than a consequence of the observation that
>  a block cipher is a machine which attempts to emulate a Simple
>  Substitution table of impractical size.  Every Simple Substitution
>  will avalanche.)
> 
>  The testable result is that a change on any input bit should change
>  (almost) half of the output bits, on average.  Any implication that
>  one can only get such a result if bits *explicitly* interact "many
>  times" is not warranted, as far as I know.
> 

No, the criterion I meant was the one I said.  Avalanche is not enough
(and to be fair neither is my criterion; they're both necessary but not
sufficient).  All the bits need to be affected in a complex way.  If
sections of the key are used just once, it often allows the analyst to
partition the effect of those bits from the effects of others.  That
is why meet-in-the-middle works, why your NxM DES failed, and how I
attacked Fenced DES.

> 
>  The use of keyed invertible substitutions guarantees that *any*
>  change in a substitution input value *will* avalanche the output
>  from the substitution.  (The use of keyed substitutions also
>  protects against Differential and Linear attacks, since the
>  transformation is keyed and not known externally.)  Again, Feistel
>  operations have no guarantees.
> 
>  The separate use of balanced, complete mixing layers and a "fencing"
>  layer of keyed, invertible substitutions guarantees avalanche
>  across large blocks.  The use of another set of layers protects
>  against various "meet in the middle" or "fix in the middle"
>  attacks (and may not always be necessary).  The use of a fencing
>  layer before linear mixing protects the mixing, provided that
>  the subsequent ciphering layers have not been exposed.
> 

I'll let you assess my attack before I argue the problems with that.

> >Finally, I don't like designs which work extensively on
> >isolated sections of the key and/or data space.
> 
>  Then you should *love* my ciphers.  

Uh...

[...] 
>  I can provide much *better* guarantees of key interaction than
>  anything I've seen on the DES design.
> 

Obviously I disagree.

> >DES shows
> >good design in that all the key bits come into play
> >repeatedly.
>
>  This sounds about like: "DES shows good design because it is
>  very like DES, which is known to be a good design."  Not only
>  is this circular, but we do *not* *know* that DES *is* a good
>  design.  In fact, we actually do know that DES is obsolete.
> 
>  While DES does repeatedly use the key bits, the DES design does
>  *not* show that this is the way ciphering *must* be done, or even
>  *should* be done, to achieve strength.  An example of one design
>  is not a good reference for ciphers built on fundamentally different
>  principles of operation, even if similar results are desired.
>

See also IDEA, Blowfish, and most other block ciphers which have
resisted attack.

>  As a matter of fact, this common design concept "let's just make
>  something pretty much like DES, only a little better" is very
>  seriously flawed due to a lack of known mathematical basis for
>  Feistel-type designs.  The fact that many do this does not make it
>  a good design technique.
> 

Which side are you arguing?  You're the one trying to use DES but add
some simple layers to make it stronger.

>  Your so-called "established principles" do not, alas, begin to
>  provide the mathematical guarantees of operation provided by
>  Fencing and Mixing cipher designs.  It seems strange you would
>  bring this up, for it is the conventional wisdom of Feistel
>  ciphering which is weak here, not these designs.
> 
> Terry Ritter   ritter@io.com   http://www.io.com/~ritter/

I think I probably have a few numbers wrong in my analysis, but
not so far wrong as to invalidate the attack on Fenced DES.  If
the attack doesn't work, I'd like to know why.  If it does,
well, I have to respect you for showing your failures on your web
page.

--Bryan


========
Path: news.io.com!usenet
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Tue, 27 Aug 1996 18:05:14 GMT
Organization: Illuminati Online
Lines: 530
Message-ID: <4vvdbj$3de@anarchy.io.com>
References: <321DE177.2193@io.com> <321E0AA1.4517@jhuapl.edu> <4vmcm7$8jd@nntp-1.io.com> <3220FFAD.27BB@jhuapl.edu>
NNTP-Posting-Host: dialup-01-146.austin.io.com
X-Newsreader: Forte Free Agent 1.0.82


In <3220FFAD.27BB@jhuapl.edu> Bryan Olson <Bryan_Olson@jhuapl.edu>
wrote:

>I'd like to see fewer proposed ciphers with much more careful
>analysis.

 But no analysis is *possible* without a design.  And new designs
 also are *not* *possible* without considering new basic structures.
 Some may fail, others may not.  Big deal.

 Frankly, I'd like to see the academic community examining a wide
 variety of different structures, categorizing them, and working
 out some common insights.  But this *is* "dangerous work":  Really
 new proposals can "fail" and damage a frail academic reputation.

 Heaven help us all if there turns out to be a general attack on
 Feistel ciphering.


>I took another look at your "Fenced DES" on your web
>pages (I was happy to see you kept my critical posts too).

 I believe in presenting both sides of a controversy.  I also
 believe in documenting the mistakes of the past so as to hopefully
 reduce such mistakes in the future.  It may well be that both the
 readers and the author will learn more from design failures than
 successes.


>You
>say,
>  "If we assume that the internal block mixing is indeed
>   effective, the overall strength of 4x Fenced DES is at least
>   four times that of 1x Fenced DES, for a total of 480 bits."
>
>How carefully did you analyze your own cipher before you proposed
>it?

 What kind of question is this?  How careful is "careful"?
 How should I measure it (let me count the ways!).

 I was about as careful as I would be in completing an exam, given
 various other obligations and resource limitations.  I did the
 best that I could at the time, but certainly did not claim that
 the cipher was unbreakable.  Considering that academia has failed
 to produce such a cipher, that would be nuts.  The *whole* *point*
 in publication is for somebody else to find a flaw if it exists.

 In producing software, I have found it useful to figuratively
 "reward" myself for finding bugs, despite the fact that each and
 every bug can be seen as my own programming "failure."  The idea
 that humans can produce perfect programs is a goal, not reality.
 We instead must use strategies which minimize errors and then
 expose and fix those errors which do occur.  But other than
 producing yet another Feistel cipher which is difficult or
 impossible to analyze, I am not aware of effective strategies to
 minimize errors in cipher design.  Instead I accept the fact that
 bugs may exist in my ciphers, work at exposing those bugs and
 understanding the deeper ramifications of ones which are found.

 If your implication is that any proposal which is flawed would
 not have been made if one was sufficiently "careful," I can list
 many "serious" academic proposals, from very respected people,
 which turned out -- in retrospect -- to be flawed.  This is, in
 fact, the way we expect Science to progress.  Rigid expectations
 of "care" are not appropriate to the world as I see it.


>Are you saying that exhausting a 480 bit space was the best
>attack you could come up with?  Would that include an on-line
>selected plaintext attack?

 Do you suggest that I am writing this stuff knowing that it
 has problems?  If had come up with something I considered better,
 it would be there.


>(To follow the next section, readers will have to understand
>Ritter's "Fenced DES".  See: http://www.io.com/~ritter/FENCED.HTM
>)
>
>You actually start on an attack that I think could be productive,
>but you seem to bring it up only to discount it.

 I brought the attack up because it seemed obvious, and presented
 it as I did because it seemed unproductive with just a little more
 thought.  Had I not brought it up, you might have accused me of
 hiding it.  Obviously I can't win either way, which is OK, as long
 as we all see the same game.


>    (Of course, for substantial input changes, it is possible to
>    generate values which will leave one or more of the DES
>    operations unaffected, although this is quite unlikely to
>    happen by chance. In the mixing transform, such a circumstance
>    requires specific related 128-bit values on each of the two
>    input ports, and these values are hidden behind the input
>    substitutions. We could manage to keep a DES operation
>    unaffected if we knew the arrangement of the values in all the
>    input substitutions, but that uncertainty is part of the
>    strength of the cipher. And it seems unlikely that we could
>    tell from the outside that we were successful -- that fewer
>    than four DES operations have avalanched -- because any
>    avalanche will enter the output block mixing and so be
>    reflected  over the whole width of the large block, with the
>    specific resulting values hidden by the output substitutions.)
>
>In your block mixing functions, it seems that a change of one bit
>position in either half of the input can change only the output bit
>in the same position and the one to the immediate left (circularly).
>Am I wrong on that?

 Just technically.  We have a field; sometimes the field
 polynomial is activated and we get its bits too.  Usually not.
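
[Editor's sketch: a toy GF(2^8) mixing pair to illustrate the exchange above.  Multiplying by a small constant moves a changed bit at most one position left, and when the top bit falls off, the field polynomial is XORed in ("activated"), so extra bit positions change.  The constants and polynomial are generic choices for the sketch, not the constants of Ritter's published mixers.]

    POLY = 0x11B   # x^8 + x^4 + x^3 + x + 1, a standard GF(2^8) polynomial

    def gmul(a: int, b: int) -> int:
        r = 0
        for _ in range(8):
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= POLY          # reduction: here the polynomial's bits appear
        return r

    def mix(a: int, b: int) -> tuple:
        return gmul(2, a) ^ gmul(3, b), gmul(3, a) ^ gmul(2, b)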


>If so, the change between a plaintext A and A'
>needed to make three of the internal DES applications receive the
>same input are not all that "substantial".  Even with the
>substitutions we could certainly search for them.

 Hardly "certainly."  Search implies recognition of difference.
 It would be easy to cycle through all combinations of four bytes
 and know that somewhere in there the combination(s) we wanted
 must exist.  The problem is knowing when that occurred.


>I can pick four bytes that will be mixed only with each other in
>both levels of pre-DES mixing.

 This doesn't take much "picking"; this is the situation in
 general.  The description I would give is that each of the four
 bytes is distributed to each of the internal ciphers.  Changing
 one bit of any one byte *is* guaranteed to change each of the inputs
  to each of the internal ciphers.  Changing bits in multiple bytes
 does not have this same guarantee.


>In all 2^32 four byte values, there
>should be several (maybe 64, I haven't ground it out) pairs of A and A'
>that result in a 0 difference for inputs of three of the DES
>applications.   I'll call A and A' a "match" if they have that property.
>
>You say it "seems unlikely that we could tell from the outside that we
>were successful" in finding such a match.  If A and A' are a match, I
>can immediately find many other matches by changing bytes which don't
>interact with the critical four.  That gives me a good space to search
>for any statistical anomaly which would indicate a match.

 If we *assume* that A and A' match, we can do lots of things.
 The issue is finding that match.


>So what anomaly do I search for?
>
>Let's turn our attention to an output of the DES boxes.  If I have a
                                                  [box]
>match, three of them are the same under A as under A'.

 Yes.


>In your analysis
>you assume the output of the fourth will be unrelated between A and A',
>and this will make all the outputs appear unrelated.

 Yes.


>But what if just
>some output bits from this S box turn out to be the under A and A',
                                                    ^[same]
>and in a fortuitous combination.  Specifically:
>    1. One of the eight bytes (on the byte boundary)
>    2. The two most significant bits in the byte to the right of 1.

 You are proposing a situation where particular bit-position values
 occur "at random" from an internal DES operation.  The probability
 of this would seem to be 1 in 2**10.

 [Also note that DES is not an "S-box" but a "cipher."  The
 difference is structural:  S-boxes, as the term is used in Feistel
 ciphering (e.g., DES), are not constrained to be permutation
 functions -- that is, Simple Substitutions -- but ciphers are.]


>If that happens, then all 4 DES outputs will have 10 bits positions
>which are the same under A as under A'.

 Yes.  If that happens.


>The block mixing will preserve 8 of them, lined up with a single
>substitution at the end.  Thus in the output, we'll see four bytes,
>each three bytes apart, which are identical under A and A'.

 Generally, yes.  In each of the three mixing levels, though, the
 mixing poly will tend to activate about half the time.


>By chance, the four identical bytes should appear once in 2^24
>pairs A and A', for each of the 8 byte positions.

 Yes.


>Conditions 1 and two (above) should hold about once in 2^10 pairs.

 Yes.


>If they hold under a non-match, then the four-byte coincidence
>still has the same probability, but if they hold
>under a match, then the coincidence is a certainty.

 ?


>So if my four critical bytes do _not_ force a match, I expect to see
>the four-byte pattern one in 2^21 pairs.  If they are a match, I'll see
>it once in 2^7 pairs, where I form the pairs by holding these four
>input bytes constant and varying others (not immediately to their
>circular right).

 !


>So I have to search over a 2^32 space, and run a few hundred pairs
>for each to find such a match.

 This may be in the right ballpark; there may be some effect from
 the intrusion of the mixing polys in each output mixing level.
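
 Just to make the "few hundred pairs" step concrete, here is a tiny
 back-of-the-envelope calculation using the rates quoted above (about
 1 in 2^7 per pair under a match, 1 in 2^21 otherwise); the rates are
 the estimates from the posts, not re-derived here:

     # Expected pattern counts for N plaintext pairs, using the per-pair
     # rates quoted in the thread (not independently verified here).
     P_MATCH, P_NONMATCH = 2.0**-7, 2.0**-21

     for n in (2**8, 2**10, 2**12):
         print(n, "pairs:", n * P_MATCH, "expected hits if match,",
               n * P_NONMATCH, "if not")

 Even a thousand pairs gives several expected hits for a real match
 versus essentially none otherwise, which is what would make a match
 recognizable at all.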


>Once I've found it, I have a lot of
>information about the four permutations which formed my four
>critical bytes.  Repeating the process to find all the matches over
>all 32 permutations, I should be able to break the initial permutations.

 I'd like to see a discussion of how one would continue from there
 to actually find substitution values and start attacking one of
 the internal DES operations.

 I don't see how to recover the substitution element values, but I
 don't stand on this as a redeeming strength.


>Looking at what happens in the output, I can probably infer a lot
>about the final permutations too.
>
>That's as far as I'll take the attack for now.  Perhaps I've
>misunderstood something about the design; please let me know if any
>of the steps I've described is impossible.  I'll elaborate if parts of
>my explanation are unclear.  If as much as I've described can be done,
>I certainly think it's a strong enough toe-hold for the analyst that
>Fenced DES is dead.

 Not at all.  Boy, are you looking for a cheap victory :-).  You have
 described an interactive chosen-plaintext attack.  This is hardly the
 same thing as a known-plaintext or ciphertext-only attack.

 This attack essentially shows the need for additional system-design
 requirements.  Few of us expect to be able to take an arbitrary
 cipher, drop it into an arbitrary application and produce a secure
 system.  Instead, the entire system must be designed to properly
 use the cipher and avoid its weaknesses.  This is normal practice.

 This attack indicates that any system using this Fenced DES design
 must prevent a chosen-plaintext attack of the necessary size.  This
 can be done by not allowing The Opponent to perform chosen-plaintext
 ciphering, or by forcing a key change before the necessary amount of
 data has been ciphered.  Thus, your adaptive chosen-plaintext attack
 is itself easily rendered ineffective.

 Obviously, the attack has been on the Fencing and Mixing structure,
 and has not broken DES with this small effort.  The guarantee that
 Fenced DES could not possibly be weaker than DES remains unbroken.

 The attack also directly indicates a solution, and that is to mix
 repeatedly until each and every mixing output byte is a function
 of each and every mixing input byte.  This would require three more
 (linear!) mixing levels on each side, and would probably *still* be
 faster than Triple-DES.  The mixing itself would be similar to my
 mixing ciphers which use an internal array of keyed substitutions
 instead of ciphers.
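
 A toy demonstration of that point, with log2(n) levels of FFT-style
 pairwise mixing over a 32-byte block; the 2x2 mixer (a+b, a+2b) mod 256
 is chosen here only because it is invertible and easy to read, and is
 not the actual Fenced DES mixer:

     # Toy diffusion check: after log2(n) butterfly levels, every output
     # byte depends on every input byte.  The mixer is an assumption for
     # the sketch, not the real mixing function.
     def mix_pair(a, b):
         return (a + b) & 0xFF, (a + 2 * b) & 0xFF     # det = 1, invertible

     def mix_level(block, stride):
         out = list(block)
         for i in range(len(block)):
             if i % (2 * stride) < stride:             # lower half of each butterfly
                 out[i], out[i + stride] = mix_pair(block[i], block[i + stride])
         return out

     def mix_all(block):
         stride = 1
         while stride < len(block):                    # log2(n) levels in all
             block = mix_level(block, stride)
             stride *= 2
         return block

     base = [0] * 32
     flipped = list(base); flipped[5] ^= 1             # change one input byte
     diff = [x ^ y for x, y in zip(mix_all(base), mix_all(flipped))]
     print(sum(d != 0 for d in diff), "of", len(diff), "output bytes changed")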

 One of the fundamental questions in a Fencing and Mixing cipher is
 whether or not linear mixing can be a major part of a strong cipher.
 This attack has not answered that question.

 To absolutely "kill" any particular Fenced DES design, one would
 have to show that the Fencing and Mixing structure has not
 improved DES strength at all.  To do this, one would need to find
 a (known-plaintext or ciphertext-only) attack which has a total
 complexity of 56 or perhaps 58 bits.

 To show that a particular Fenced DES design is probably not worth
 using in the proposed applications, one would need to find a
 (known-plaintext or ciphertext-only) attack which has a total
 complexity of under 120 bits.


>Back to the thread...
>
>>  Are you convinced about the strength of DES?
>
>Absolutely, positively not.  I'm convinced of its weakness, and
>I don't think it should be used as a component in a larger cipher.

 Very amusing.

 The point is that, for a lot of business people who are responsible
 for ciphering (despite not having a deep background for it), the
 only cipher "known" to be secure (at a certain level) *is* DES.


>> >One of the basic
>> >principles in block-cipher design is that each key bit and
>> >data bit should interact with the others many times in
>> >internal transformations.
>>
>>  No, this is not strictly true.
>>
>>  The principle you are looking for is called "avalanche," and states
>>  a *goal* that every output bit should be "affected" by every input
>>  bit.  (This is no more than a consequence of the observation that
>>  a block cipher is a machine which attempts to emulate a Simple
>>  Substitution table of impractical size.  Every Simple Substitution
>>  will avalanche.)
>>
>>  The testable result is that a change on any input bit should change
>>  (almost) half of the output bits, on average.  Any implication that
>>  one can only get such a result if bits *explicitly* interact "many
>>  times" is not warranted, as far as I know.
>
>No, the criterion I meant was the one I said.  Avalanche is not enough
>(and to be fair neither is my criterion; they're both necessary but not
>sufficient).  All the bits need to be affected in a complex way.  If
>sections of the key are used just once, it often allows the analyst to
>partition the effect of those bits from the effects of others.  That
>is why meet-in-the-middle works, why your NxM DES failed, and how I
>attacked Fenced DES.

 To be fair, the reason I put in the outline of avalanche reasoning
 was our previous discussion, where you did not appear to accept
 that avalanche was inherent in Simple Substitution.  You appear to
 have come some way toward the position which I took, and you
 opposed, just several years ago.

 It may be that your "rule of thumb" is stated too generally to
 apply directly to new ciphering structures.  In these Fencing
 ciphers, I would say that each keyed substitution has a "full"
 amount of key.  That is, each 8-bit substitution table has a keyed
 state of 256 bytes for 256! possibilities; finding two keys which
 produce the same state in some substitution table is pretty unlikely.
 So, changing the cipher key could be expected to change each and
 every keyed substitution, so that data which encounters even one
 substitution indeed does interact with the "full" key.
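
 For concreteness, a keyed byte substitution of that sort can be
 pictured as a key-driven shuffle of the 256-entry identity table.  The
 sketch below seeds Python's stock RNG from a hash of the key; that
 keying method is purely a stand-in, not the construction used in the
 Fencing ciphers:

     # Illustrative keyed, invertible 8-bit substitution (a Simple
     # Substitution).  The SHA-256-seeded RNG is an assumption for the
     # sketch only.
     import hashlib, random

     def keyed_sbox(key, index):
         seed = hashlib.sha256(key + bytes([index])).digest()
         rng = random.Random(int.from_bytes(seed, "big"))
         table = list(range(256))
         rng.shuffle(table)              # one of 256! possible tables
         inverse = [0] * 256
         for i, v in enumerate(table):
             inverse[v] = i
         return table, inverse

     sbox, inv = keyed_sbox(b"example key", 0)
     assert all(inv[sbox[x]] == x for x in range(256))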

 As for why NxM DES failed, one necessary (but insufficient) reason
 is that it was *proposed*.  Failing to propose a cipher may assure
 lack of failure, but also has no chance of producing a successful
 design.  I consider "design failure" an underrated commodity,
 because it often takes many such failures to understand the range
 of design and achieve even one true success.  If failures in some
 of my designs highlight new rules for future designs, I claim
 *progress* (if not success), and certainly not personal failure.

 In this particular case, I don't consider a defined-plaintext
 attack to be any kind of a death-knell for the design.  It does
 present an operational limitation which must be taken into
 account when that particular cipher is designed into a system.


>>  The use of keyed invertible substitutions guarantees that *any*
>>  change in a substitution input value *will* avalanche the output
>>  from the substitution.  (The use of keyed substitutions also
>>  protects against Differential and Linear attacks, since the
>>  transformation is keyed and not known externally.)  Again, Feistel
>>  operations have no guarantees.

 Here we have the guarantees.  Avalanche stands guaranteed as stated.
 The present attack is a chosen-plaintext Differential attack, but
 to the extent that the key is changed often enough, no such attack
 is possible.

 Actually, chosen plaintext attacks can be prevented by the rest
 of the system in a variety of ways, including simply not allowing
 arbitrary ciphering access.

 Still, it is very important to know the limitations of any cipher,
 and I am very glad that you took the time to look at it.


>>  The separate use of balanced, complete mixing layers and a "fencing"
>>  layer of keyed, invertible substitutions guarantees avalanche
>>  across large blocks.  The use of another set of layers protects
>>  against various "meet in the middle" or "fix in the middle"
>>  attacks (and may not always be necessary).  The use of a fencing
>>  layer before linear mixing protects the mixing, provided that
>>  the subsequent ciphering layers have not been exposed.
>>
>I'll let you assess my attack before I argue the problems with that.

 The above was the short outline of the basis for strength logic.
 While I am certainly grateful for your consideration, I think we'd
 need a known-plaintext or ciphertext-only attack to do serious
 damage to the concept of ciphering with such a structure.


>> >Finally, I don't like designs which work extensively on
>> >isolated sections of the key and/or data space.
>>
>>  Then you should *love* my ciphers.
>
>Uh...
>
>[...]
>>  I can provide much *better* guarantees of key interaction than
>>  anything I've seen on the DES design.
>>
>
>Obviously I disagree.

 Again, I have a serious problem with the phrasing:

    "designs which work extensively on isolated sections
     of the key and/or data space,"

 although I do have a better understanding, now, of what you
 probably mean by this.

 As you say, I have proposed many ciphers -- whole new classes
 of ciphers in fact.  Now, how do I apply your rule of thumb to
 these many divergent designs?

 Consider the 4x cipher which consists of 5 levels of "FFT" mixing
 using a keyed pair of orthogonal Latin squares (64KB each).
 Now, does each data byte "use" the key repeatedly in any real
 sense?  Suppose I use a different, separately-keyed, oLs mixing
 pair at each mixing junction:  Does each data byte *then* use the
 key repeatedly?  How does one apply your admonition?
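
 To make the "64KB each" figure concrete: one textbook way to get a
 pair of orthogonal Latin squares of order 256 is L1[a][b] = 2a + b and
 L2[a][b] = 3a + b over GF(2**8), each a 256 x 256 byte table.  This
 unkeyed construction is shown only for scale and shape; it is not the
 keyed pair used in the actual cipher:

     # Textbook orthogonal Latin square pair over GF(2**8); polynomial
     # and coefficients are assumptions for this illustration.
     def gf_mul(x, y, poly=0x11B):
         r = 0
         while y:
             if y & 1:
                 r ^= x
             x <<= 1
             if x & 0x100:
                 x ^= poly
             y >>= 1
         return r

     def latin_square(c):                         # 256*256 = 64 KB table
         return [[gf_mul(c, a) ^ b for b in range(256)] for a in range(256)]

     L1, L2 = latin_square(2), latin_square(3)
     pairs = {(L1[a][b], L2[a][b]) for a in range(256) for b in range(256)}
     assert len(pairs) == 65536                   # orthogonality check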


>[...]
>>  As a matter of fact, this common design concept "let's just make
>>  something pretty much like DES, only a little better" is very
>>  seriously flawed due to a lack of known mathematical basis for
>>  Feistel-type designs.  The fact that many do this does not make it
>>  a good design technique.
>
>Which side are you arguing?  You're the one trying to use DES but add
>some simple layers to make it stronger.

 I am using a Feistel design as a *building-block* in a hopefully
 better cipher.  DES uses S-boxes which are laughably-weak on their
 own, but that does not mean that the resulting cipher is quite
 so laughable.

 DES is what many people consider to be the most proven cipher in
 the world, and now has problems with respect to keyspace and
 block size.  The obvious alternative of Triple-DES is great for
 users (who have lots of CPU cycles to spare), but not so great
 for servers (which have no free cycles).  The situation begs the
 question as to whether it is possible to do better with what we
 have.  (Especially since the academic community has come so late
 to the party of providing a widely-accepted replacement.)

 Note that I have *not* proposed some new Feistel cipher with
 larger blocks, or Feistel structure using 1x DES as an S-box.
 I claim that the design of secure S-boxes for a Feistel design is
 far too tricky, and we know far too little about it.  In fact,
 using a cipher (an invertible function) as a Feistel S-box
 (instead of some non-invertible function) would make me nervous.

 So you tell me, which "side" am I on?


>>  Your so-called "established principles" do not, alas, begin to
>>  provide the mathematical guarantees of operation provided by
>>  Fencing and Mixing cipher designs.  It seems strange you would
>>  bring this up, for it is the conventional wisdom of Feistel
>>  ciphering which is weak here, not these designs.
>>
>> Terry Ritter   ritter@io.com   http://www.io.com/~ritter/
>
>I think I probably have a few numbers wrong in my analysis, but
>not so far wrong as to invalidate the attack on Fenced DES.  If
>the attack doesn't work, I'd like to know why.

 I'd like some time to think about it, but I think it probably
 works, as far as it goes.


>If it does,
>well I have to respect you for showing your failures on your web
>page.

 I really hope that this was not intended to be as sarcastic as
 some might take it.  Documenting success *and* failure is part
 of responsible design.


>--Bryan

 Thank you!  You have my sincere thanks for your analysis, and
 thanks as well for finding a bug.

 While I am naturally disappointed that Fenced DES was not immune to
 interactive defined-plaintext attack, I do not yet consider the
 design a "failure."  The attack does set some additional system
 requirements on the use of Fenced DES, and this information is very
 welcome.  It also directly implies how to make structures which are
 not vulnerable to this attack.  The attack also does not address
 fundamental questions like whether weak linear mixing automatically
 indicates a weak cipher.

 Thanks again.

 ---
 Terry Ritter   ritter@io.com   http://www.io.com/~ritter/


========
Path: news.io.com!arlut.utexas.edu!geraldo.cc.utexas.edu!cs.utexas.edu!howland.erols.net!cam-news-hub1.bbnplanet.com!news.mathworks.com!newsfeed.internetmci.com!newsxfer2.itd.umich.edu!uunet!in3.uu.net!trellis.wwnet.com!news
From: rscott@wwnet.com (Robert Scott)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Wed, 28 Aug 1996 12:45:14 GMT
Organization: WWNET
Lines: 64
Message-ID: <50135n$6i2@trellis.wwnet.com>
References: <321DE177.2193@io.com> <321E0AA1.4517@jhuapl.edu> <4vmcm7$8jd@nntp-1.io.com> <3220FFAD.27BB@jhuapl.edu> <4vvdbj$3de@anarchy.io.com>
NNTP-Posting-Host: annas112.wwnet.com
X-Newsreader: Forte Free Agent 1.0.82

ritter@io.com (Terry Ritter) wrote:

       [..snip..]

> You have
> described an interactive chosen-plaintext attack.  This is hardly the
> same thing as a known-plaintext or ciphertext-only attack.

> This attack essentially shows the need for additional system-design
> requirements.  Few of us expect to be able to take an arbitrary
> cipher, drop it into an arbitrary application and produce a secure
> system.  Instead, the entire system must be designed to properly
> use the cipher and avoid its weaknesses.  This is normal practice.

> This attack indicates that any system using this Fenced DES design
> must prevent a chosen-plaintext attack of the necessary size.  This
> can be done by not allowing The Opponent to perform chosen-plaintext
> ciphering, or by forcing a key change before the necessary amount of
> data has been ciphered.  Thus, your adaptive chosen-plaintext attack
> is itself easily rendered ineffective.

       [..snip..]

> I think we'd
> need a known-plaintext or ciphertext-only attack to do serious
> damage to the concept of ciphering with such a structure.

  [..snip..]

> While I am naturally disappointed that Fenced DES was not immune to
> interactive defined-plaintext attack, I do not yet consider the
> design a "failure."  The attack does set some additional system
> requirements on the use of Fenced DES, and this information is very
> welcome.  It also directly implies how to make structures which are
> not vulnerable to this attack.  The attack also does not address
> fundamental questions like whether weak linear mixing automatically
> indicates a weak cipher.

     It seems that the standard of performance for ciphers has been
going steadily up.  Long ago ciphers were based on entirely secret
designs, which if known, would compromise the cipher.  Then we moved
to open designs with secret keys, but considered just ciphertext-only
attacks.  Then as ciphers became better and better, we began asking
for resistance to known-plaintext attack, then chosen-plaintext
attack.  Perhaps in practice, a small quantity of chosen-plaintext is
all that an adversary can really obtain.  But academically, it became
desirable to ask for ciphers that are resistant to massive
chosen-plaintext attack, perhaps even interactively.  As impractical
as these attacks seem, the fact that they are theoretically possible
is going to make a cipher unacceptable for adoption as a standard.
(Of course, if the weakness is discovered after adoption, then the
investment in the standard may outweigh the purely theoretical
weakness.)  As I see it, your Fenced DES design is still in the
proposed stage.  Even if it is true that the fencing adds effectively
to the keyspace of the total cipher, and even if it is measurably
better than DES in every way, this theoretical weakness will have to
be dealt with.  You will notice that my NEWDES cipher as described in
Bruce's book is dismissed only because of this same theoretical
weakness, even though it has a 120-bit key, is much faster than
DES, and has no other weakness.

   -Bob Scott


========
Path: news.io.com!usenet
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Wed, 28 Aug 1996 18:16:09 GMT
Lines: 45
Message-ID: <5022bk$kqm@nntp-1.io.com>
References: <321DE177.2193@io.com> <321E0AA1.4517@jhuapl.edu> <4vmcm7$8jd@nntp-1.io.com> <3220FFAD.27BB@jhuapl.edu> <4vvdbj$3de@anarchy.io.com> <50135n$6i2@trellis.wwnet.com>
NNTP-Posting-Host: dialup-01-102.austin.io.com
X-Newsreader: Forte Free Agent 1.0.82

In <50135n$6i2@trellis.wwnet.com> rscott@wwnet.com (Robert Scott)
wrote:

> As I see it, your Fenced DES design is still in the
>proposed stage.  Even if it is true that the fencing adds effectively
>to the keyspace of the total cipher, and even if it is measurably
>better than DES in every way, this theoretical weakness will have to
>be dealt with.  You will notice that my NEWDES cipher as described in
>Bruce's book is dismissed only because of this same theoretical
>weakness, even though it has a 120-bit key, is much faster than
>DES, and has no other weakness.

Actually, I agree (except that we cannot know that *any* cipher has
"no other weakness).  But note that even if the current attack is
completed, it simply breaks through to the DES level, when then must
be attacked on its own.  Using a DES layer puts a lower limit on
strength for the cipher as a whole, which is an interesting advantage.

My first response is to add two more mixing layers each side, which
should solve this particular problem.  But I will still have to
implement it, if only to see what kind of performance hit it will
cause.  Very much stronger nonlinear mixings could also be used in the
very same structure, but then I would consider the DES overkill
(except for the minimum strength guarantee).

As it turns out, I "knew" two different, mutually-contradictory things
about Differential attacks, and so did not really understand the
problem.  It is true, of course, that if we changed the key often
enough, no Differential attack could succeed (because the
substitutions are keyed), but we also don't want to do that, which
means that Differential attacks cannot be rejected out-of-hand simply
by using keyed substitutions.

It would be nice if we could find some sort of test which would
certify or even indicate that a design is free of this sort of
Differential attack.  Maybe some sort of statistical correlation would
point it out.

>   -Bob Scott

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/


========
Path: news.io.com!news.thenet.net!newsserver.pixel.kodak.com!bloom-beacon.mit.edu!news-res.gsl.net!news.gsl.net!news.mathworks.com!newsfeed.internetmci.com!in3.uu.net!trellis.wwnet.com!news
From: rscott@wwnet.com (Robert Scott)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Wed, 28 Aug 1996 22:38:04 GMT
Organization: WWNET
Lines: 68
Message-ID: <5025t8$h2p@trellis.wwnet.com>
References: <321DE177.2193@io.com> <321E0AA1.4517@jhuapl.edu> <4vmcm7$8jd@nntp-1.io.com> <3220FFAD.27BB@jhuapl.edu> <4vvdbj$3de@anarchy.io.com> <50135n$6i2@trellis.wwnet.com> <5022bk$kqm@nntp-1.io.com>
NNTP-Posting-Host: annas124.wwnet.com
X-Newsreader: Forte Free Agent 1.0.82

ritter@io.com (Terry Ritter) wrote:

>Actually, I agree (except that we cannot know that *any* cipher has
>"no other weakness).  But note that even if the current attack is
>completed, it simply breaks through to the DES level, when then must
>be attacked on its own.  Using a DES layer puts a lower limit on
>strength for the cipher as a whole, which is an interesting advantage.

True, you have a cipher that is at least as strong as DES.  But what
about the cost effectiveness of your additions?  By merely relaxing
the requirement that the DES key bytes have redundant parity bits, you
have a cipher that costs no more to implement and has 256 times as big
a keyspace.  And perhaps running DES a few more rounds (say 18 or 20
instead of 16) would achieve the same degree of mixing afforded by
your fencing.  I don't know it for a fact.  But perhaps...
Perhaps the real value of combining different methods into one cipher
as you have done is for insurance.  If one component is found to have
a weakness, perhaps the combination will still be strong.

You mentioned before that there may be value to using DES as a
component because it has been so scrutinized and is so widely
accepted.  This may be a psychological plus, but the official status of
DES does not impress me very much.  In order to place faith in the
process of analysis, you have to see it as essentially a continuous
process.  That is, the understanding of the strengths and weakness of
DES will come out little by little as the great brains of our time
focus their energies.  But what I think is far more likely is that
progress comes in discrete jumps, some small, some huge.  Only when
averaged over time and over many other problems does technological
advance appear uniform (or exponential).  But that does not say
anything about progress in analyzing one particular cipher.  I would
place a lot more confidence in progress on the general principles
of cryptography.  From this point of view, DES is just
another Feistel cipher.  Perhaps it is optimum for its size, but it
would be no better than average when compared to Feistel ciphers of
just a slightly larger size (in terms of keyspace or rounds, or block
size).  Now although the S-boxes of DES may have been specially chosen
to be strong for their size, I believe that it is quite likely that if
the structure of DES were redesigned with larger randomly chosen
S-boxes, then they would almost certainly be stronger than the ones we
have.  (e.g. instead of 6 bits of input, use 8 bits of input to each
S-box, still using 4 bits of output).


>My first response is to add two more mixing layers each side, which
>should solve this particular problem.  But I will still have to
>implement it, if only to see what kind of performance hit it will
>cause.  Very much stronger nonlinear mixings could also be used in the
>very same structure, but then I would consider the DES overkill
>(except for the minimum strength guarantee).

It seems to me that the strength of ciphering is based on something
like this.  Suppose you have 100 units of resources to spend on
running a cipher.  These resource could be time or silicon, but just
think of them as general resources.  If you design a cipher composed
of 80 units of one component (say, DES) and 20 units of another
component (say, fencing, or mixings), then the overall "strength" of
the cipher is 80 * 20 = 1600.  (Provided the two components don't have
an unfortunate relationship).  On the other hand, if the system is
designed with four components, each costing 25 units, then the system
strength is 25 * 25 * 25 * 25 = 390625.  If this "product of
resources" analysis holds, it is easy to see that the most
cost-effective cipher would be one that is composed of as many cheap,
non-commuting parts as possible.  This looks like just what Feistel
ciphering is doing.
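
A few lines make the arithmetic of that heuristic explicit; the
"product of resources" rule itself is the rough model proposed above,
not an established result:

    # The "product of resources" heuristic from this post, taken literally.
    def strength(parts):
        product = 1
        for r in parts:
            product *= r
        return product

    print(strength([80, 20]))           # 1600
    print(strength([25, 25, 25, 25]))   # 390625
    print(strength([10] * 10))          # 10**10: many cheap parts win
    # For k equal parts of a budget B the product is (B/k)**k, which keeps
    # growing until k is around B/e -- i.e. very many small components.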

   -Bob Scott


========
Path: news.io.com!news.thenet.net!trellis.wwnet.com!news.inc.net!newspump.sol.net!uwm.edu!math.ohio-state.edu!howland.erols.net!surfnet.nl!tudelft.nl!cyber.tn.tudelft.nl!visser
From: visser@ph.tn.tudelft.nl (Boudewijn W. Ch. Visser)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: 28 Aug 1996 20:33:20 GMT
Organization: Delft University of Technology, Faculty of Applied Physics
Lines: 41
Message-ID: <502aeg$93j@cyber.tn.tudelft.nl>
References: <321DE177.2193@io.com> <321E0AA1.4517@jhuapl.edu> <4vmcm7$8jd@nntp-1.io.com> <3220FFAD.27BB@jhuapl.edu> <4vvdbj$3de@anarchy.io.com> <50135n$6i2@trellis.wwnet.com>
NNTP-Posting-Host: sextantis.ph.tn.tudelft.nl
X-Newsreader: NN version 6.5.0 (NOV)

rscott@wwnet.com (Robert Scott) writes:

[snip some history]

>for resistance to known-plaintext attack, then chosen-plaintext
>attack.  Perhaps in practice, a small quantity of chosen-plaintext is
>all that an adversary can really obtain.  But academically, it became
>desirable to ask for ciphers that are resistant to massive
>chosen-plaintext attack, perhaps even interactively.  As impractical
>as these attacks seem, the fact that they are theoretically possible
>is going to make a cipher unacceptable for adoption as a standard.

Since today's ciphers may very well end up in a high-bandwidth application,
I don't consider such requirements purely "academic".

Many of the historical ciphers were used for very small messages, both
because of transmission issues (or hand-decoding) and because it was
realized that long messages and infrequent key changes posed risks.

A modern cipher should also be suitable for encrypting a few gigabytes of
data with no key change for months or years, and with lots of known or even
chosen plaintext [disk encryption, for example], or for encrypting over a
100 Mbit network, where an adaptive chosen-plaintext attack is very well
possible.

Besides, if a cipher is vulnerable to a chosen-plaintext attack, I would
want a -very- solid proof that it isn't vulnerable to a known-plaintext
attack, if I happened to have a use for it where a chosen-plaintext attack
wouldn't be possible.  When it comes to trusting ciphers, the motto is
"weak unless there is convincing evidence for the opposite", and that goes
especially for ciphers in which some kind of weakness has already been
found.

Snake-oil sellers work differently: "presumed strong until broken".

Boudewijn
-- 
+-------------------------------------------------------------------+
|Boudewijn Visser       |E-mail:visser@ph.tn.tudelft.nl |finger for | 
|Dep. of Applied Physics,Delft University of Technology |PGP-key    | 
+-- my own opinions etc --------------------------------------------+


========
Path: news.io.com!imci4!newsfeed.internetmci.com!cpk-news-hub1.bbnplanet.com!cpk-news-feed1.bbnplanet.com!netnews.jhuapl.edu!usenet
From: Bryan Olson 
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Wed, 28 Aug 1996 11:09:47 -0400
Organization: Johns Hopkins Applied Physics Lab
Lines: 215
Message-ID: <3224613B.3BEE@jhuapl.edu>
References: <321DE177.2193@io.com> <321E0AA1.4517@jhuapl.edu> <4vmcm7$8jd@nntp-1.io.com> <3220FFAD.27BB@jhuapl.edu> <4vvdbj$3de@anarchy.io.com>
Reply-To: Bryan_Olson@jhuapl.edu
NNTP-Posting-Host: snake.jhuapl.edu
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0 (WinNT; I)

I'm thinking of writing up the attack for the research group.
I won't have time for a couple weeks.  There are still a few
open issues.

[Lots of stuff is edited out at various places.]

Terry Ritter wrote:

>
> In <3220FFAD.27BB@jhuapl.edu> Bryan Olson 
> wrote:
> >(To follow the next section, readers will have to understand
> >Ritter's "Fenced DES".  See: http://www.io.com/~ritter/FENCED.HTM
> >)

And to follow this post, readers will have to understand the
attack in the referenced post.

> >
> >In your block mixing functions, it seems that a change of one bit
> >position in either half of the input can change only the output bit
> >in the same position and the one to the immediate left (circularly).
> >Am I wrong on that?
>
>  Just technically.  We have a field; sometimes the field
>  polynomial is activated and we get its bits too.  Usually not.
>

Right.  It looks like the high-order bit of each block triggers
addition of the polynomial.

> >If so, the change between a plaintext A and A'
> >needed to make three of the internal DES applications receive the
> >same input is not all that "substantial".  Even with the
> >substitutions we could certainly search for them.
>
>  Hardly "certainly."  Search implies recognition of difference.
>  It would be easy to cycle through all combinations of four bytes
>  and know that somewhere in there the combination(s) we wanted
>  must exist.  The problem is knowing when that occurred.
>

I thought I answered that.  The output pattern I described is
what allows recognition.

> >I can pick four bytes that will be mixed only with each other in
> >both levels of pre-DES mixing.

> This doesn't take much "picking"; this is the situation in
> general. [...]

Right.  The simple picking criterion is four bytes in the
same relative position in the four eight-byte sections of the
input, and not the most significant bytes of these sections.

> >If A and A' are a match, I
> >can immediately find many other matches by changing bytes which don't
> >interact with the critical four.  That gives me a good space to search
> >for any statistical anomaly which would indicate a match.
>
>  If we *assume* that A and A' match, we can do lots of things.
>  The issue is finding that match.
>

Is this point still open, or do you agree that I can find and
recognize the matches?
>
>  You are proposing a situation where particular bit-position values
>  occur "at random" from an internal DES operation.  The probability
>  of this would seem to be 1 in 2**10.
>

Agreed.  That's the number I was using.

> >The block mixing will preserve
> >8 of them, lined up with a single substitution at the end.  Thus in the
> >output, we'll see four bytes, each three bytes apart, which are
> >identical
> >under A and A'.

> Generally, yes.  Three levels of mixing poly will each tend to
> activate about half the time, though.

Good point.  I can avoid the polynomials in the pre-DES
mixing by special handling of the high-order bytes.  In
the post-DES mixing I can't control these.  I only see two
"levels" of mixing (though three mixes).

In my description "Three bytes apart" should have been
"seven bytes apart".

>
> >If that happens, then all 4 DES outputs will have 10 bits positions
> >which are the same under A as under A'.
>
>  Yes.  If that happens.
>
> >If they hold under a non-match, then the four-byte coincidence
> >still has the same probability, but if they hold
> >under a match, then the coincidence is a certainty.
>
>  ?
>
I was simply pointing out that the 1 in 2**10 shot can
occur whether or not I have a match.  It increases the
chances of the visible pattern in the output only if it
holds under a match.

>
> >So I have to search over a 2^32 space, and run a few hundred pairs
> >for each to find such a match.
>
>  This may be in the right ballpark; there may be some effect from
>  the intrusion of the mixing polys in each output mixing level.
>

Does this drop the chance of the output pattern under
a match by a factor of 4 ?  That does make the search
harder.  It shouldn't change the over-all complexity
since attacking DES still dominates.

I'm not yet sure whether the chance of polynomial
modulation helps or hurts the attack.  I may be able
to find specific  bits of output by detecting when
the modulation occurs.

> >Once I've found it, I have a lot of information about the
>
>  I'd like to see a discussion of how one would continue from there
>  to actually find substitution values and start attacking one of
>  the internal DES operations.
>
>  I don't see how to recover the substitution element values, but I
>  don't stand on this as a redeeming strength.
>

Right.  I still need to work out just what I can infer about
the substitutions.  Your posts don't specify exactly how the
substitutions are produced from a key, so at some point I'll
have to leave open how I can get from what I know about the
substitutions to how to determine the rest.

You also don't specify a polynomial.  I'm not sure it matters.

[...]
> >I certainly think it's a strong enough toe-hold for the analyst that
> >Fenced DES is dead.
>
>  Not at all.  Boy, are you looking for a cheap victory :-).  You have
>  described an interactive chosen-plaintext attack.  This is hardly the
>  same thing as a known-plaintext or ciphertext-only attack.
>

From the FAQ:

  A strong encryption algorithm will be unbreakable not only under known
  plaintext (assuming the enemy knows all the plaintext for a given
  ciphertext) but also under "adaptive chosen plaintext" -- an attack
  making life much easier for the cryptanalyst.  In this attack, the enemy
  gets to choose what plaintext to use and gets to do this over and over,
  choosing the plaintext for round N+1 only after analyzing the result of
  round N.

>
>  Obviously, the attack has been on the Fencing and Mixing structure,
>  and has not broken DES with this small effort.  The guarantee that
>  Fenced DES could not possibly be weaker than DES remains unbroken.
>

Agreed.

>  The attack also directly indicates a solution, and that is to mix
>  repeatedly until each and every mixing output byte is a function
>  of each and every mixing input byte.  This would require three more
>  (linear!) mixing levels on each side, and would probably *still* be
>  faster than Triple-DES.  The mixing itself would be similar to my
>  mixing ciphers which use an internal array of keyed substitutions
>  instead of ciphers.
>

Also from the FAQ:

  If you don't have enough
  experience, then most likely any experts who look at your system will
  be able to find a flaw. If this happens, it's your responsibility to
  consider the flaw and learn from it, rather than just add one more
  layer of complication and come back for another round.


>  To be fair, the reason I put in the outline of avalanche reasoning
>  was our previous discussion, where you did not appear to accept
>  that avalanche was inherent in Simple Substitution.  You appear to
>  have come some way toward the position which I took, and you
>  opposed, just several years ago.
>

Not at all.  I was encouraging you to say something well defined
and true, instead of something ill-defined or false.

> >[...]I have to respect you for showing your failures on your web
> >page.
>
>  I really hope that this was not intended to be as sarcastic as
>  some might take it.  Documenting success *and* failure is part
>  of responsible design.

Not meant sarcastically.  I hope quoting the FAQ didn't come
off as too snotty either; I just wanted to make it clear
that I can justify calling the attack successful cryptanalysis
of 4X Fenced DES (provided I can advance it at a few points).

--Bryan


========
Path: news.io.com!usenet
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Wed, 28 Aug 1996 18:16:13 GMT
Lines: 126
Message-ID: <5022bq$kqm@nntp-1.io.com>
References: <321DE177.2193@io.com> <321E0AA1.4517@jhuapl.edu> <4vmcm7$8jd@nntp-1.io.com> <3220FFAD.27BB@jhuapl.edu> <4vvdbj$3de@anarchy.io.com> <3224613B.3BEE@jhuapl.edu>
NNTP-Posting-Host: dialup-01-102.austin.io.com
X-Newsreader: Forte Free Agent 1.0.82

In <3224613B.3BEE@jhuapl.edu> Bryan Olson <Bryan_Olson@jhuapl.edu>
wrote:

>> >If A and A' are a match, I
>> >can immediately find many other matches by changing bytes which don't
>> >interact with the critical four.  That gives me a good space to search
>> >for any statistical anomaly which would indicate a match.
>>
>>  If we *assume* that A and A' match, we can do lots of things.
>>  The issue is finding that match.
>>

>Is this point still open, or do you agree that I can find and
>recognize the matches?

These were just comments made while reading the text for the first
time.


>>  Obviously, the attack has been on the Fencing and Mixing structure,
>>  and has not broken DES with this small effort.  The guarantee that
>>  Fenced DES could not possibly be weaker than DES remains unbroken.
>>

>Agreed.

>>  The attack also directly indicates a solution, and that is to mix
>>  repeatedly until each and every mixing output byte is a function
>>  of each and every mixing input byte.  This would require three more
>>  (linear!) mixing levels on each side, and would probably *still* be
>>  faster than Triple-DES.  The mixing itself would be similar to my
>>  mixing ciphers which use an internal array of keyed substitutions
>>  instead of ciphers.
>>

>Also from the FAQ:

>  If you don't have enough
>  experience, then most likely any experts who look at your system will
>  be able to find a flaw. If this happens, it's your responsibility to
>  consider the flaw and learn from it, rather than just add one more
>  layer of complication and come back for another round.

First of all, I am *not* adding a layer of complication, I am
proposing to fix the mixing layer so that it will not have the
weakness you indicate.  There still would be the same five-layer macro
structure consisting of Fencing, linear mixing, DES, linear mixing,
and Fencing.  This would still be very recognizable as Fenced DES.

Given that there is no scientific *proof* of a secure cipher, the only
way a design can escape a sequence of modifications is by luck, and we
can't teach luck.  It might be nice to keep those changes "in house,"
which is what we had with DES, but this requires massive resources and
several different groups of people working on the problem, and can
still only obscure the fact that a sequence of modifications is the
way ciphers are built.  And even after extensive S-box analysis
in-house, apparently DES came back from NSA with different S-boxes;
should the designers have thrown it out and started over?

Any really innovative construction is necessarily a *process* rather
than an *event*.  Only *after* a structure has been evolved can we
avoid the possibility of having it fail.  It would be nice if the
various ciphering structures had already been investigated and
analyzed and the results placed in a handbook for use, but we seem to
be at least a generation away from that.

In fact, I have no idea what the FAQ could possibly mean except "start
over with a completely new structure" or "if you ever fail, get out of
the business," both of which are incredibly ignorant and arrogant
responses.  I note that there is no suggestion that an *analyst* who
*failed* to break a cipher should get out of the business (since
obviously *somebody* knows something the analyst does not, so he has
not learned his field).  Presumably, analysts are allowed to fail and
learn, but failed cipher designers might as well quit.  I guess we
know which classification wrote this part of the FAQ!


>>  To be fair, the reason I put in the outline of avalanche reasoning
>>  was our previous discussion, where you did not appear to accept
>>  that avalanche was inherent in Simple Substitution.  You appear to
>>  have come some way toward the position which I took, and you
>>  opposed, just several years ago.
>>

>Not at all.  I was encouraging you to say something well defined
>and true, instead of something ill-defined or false.

>> >[...]I have to respect you for showing your failures on your web
>> >page.
>>
>>  I really hope that this was not intended to be as sarcastic as
>>  some might take it.  Documenting success *and* failure is part
>>  of responsible design.

>Not meant sarcastically.  I hope quoting the FAQ didn't come
>off as too snotty either; I just wanted to make it clear
>that I can justify calling the attack successful cryptanalysis
>of 4X Fenced DES (provided I can advance it at a few points).

I have several points here; the first is that the situation is not
black or white: we can judge one cipher better than another without
saying that the inferior is useless.  Similarly, we can also judge
attacks, and some are better than others.  Currently, ciphers are
being held to one heck of a high standard -- which is OK -- but
attacks can be pretty casual.  Few attacks ever seem to be reduced to
practice and actually tested before proposal, but this is an integral
part of the construction of ciphers.  And if the attack itself
suggests a response, I wonder why the attacker does not have to deal
with that as well?  The whole process seems biased away from that
which would best end up producing a solid improved design, and toward
some sort of ego contest which I reject.

That said, I am very happy for Bryan to have found any sort of
weakness in one of my designs, and am willing to call it "broken" on
that account and move on to prevent that sort of attack.   I am
probably the last person to judge such an attack on its overall
qualities, because if even *part* of the attack can do something I did
not think could be done, it has already triggered a redesign.

>--Bryan

---
Terry Ritter   ritter@io.com   http://www.io.com/~ritter/


========
Path: news.io.com!insync!apollo.isisnet.com!eru.mt.luth.se!news.kth.se!nntp.uio.no!Norway.EU.net!EU.net!nntp04.primenet.com!nntp.primenet.com!howland.erols.net!newsfeed.internetmci.com!newsxfer2.itd.umich.edu!caen!umass.edu!kernighan.cs.umass.edu!usenet
From: Lewis McCarthy <lmccarth@cs.umass.edu>
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Mon, 26 Aug 1996 14:12:41 -0400
Organization: Theoretical Computer Science Group, UMass-Amherst
Lines: 31
Message-ID: <3221E919.2781@cs.umass.edu>
References: <321DE177.2193@io.com> <321E0AA1.4517@jhuapl.edu> <4vmcm7$8jd@nntp-1.io.com>
NNTP-Posting-Host: opine.cs.umass.edu
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0 (X11; I; OSF1 V3.0 alpha)

Terry Ritter writes:
>  No.  Modern mixing guarantees distribution of any input change to
>  every substitution in the next layer in n log n operations.
>  (This is either log n layers of n-byte multi-precision mixing, or
>  log n layers of byte-by-byte mixing.)  Feistel mixing gives no such
>  guarantees, and, indeed, virtually requires experiment to decide
>  how many rounds are "enough."  (Authors of similar Feistel ciphers
>  often let the user decide how many rounds are enough, as though
>  users will have the background to do this.)

End users already need to pick key sizes to achieve a desired
balance between speed and cryptographic security. Ideally a grounding
in the contemporary state-of-the-art of cryptanalysis and computational
feasibility informs the selection. This doesn't prevent inexpert users
from making good choices about their keying material in presently
deployed cryptosystems. The cryptologic community can and does provide
both customized and canned advice on this and related issues to
everyone else.

I'm not claiming that end users of round-based ciphers should
be choosing the number of rounds they use. We need to be careful about
what we mean by "the user of a cipher". I believe it is reasonable to
expect an implementor to decide upon a range or fixed number
of rounds of a cipher (hash fn., etc.) when integrating cryptography
into an application. This is just one of a host of concerns that
should be addressed to ensure that an appropriate cryptosystem is
properly deployed to satisfy a specified security requirement.
--
Lewis	       http://www.cs.umass.edu/~lmccarth/lmkey.asc
    "He said, `You are as constant as the Northern Star,' and I said,
     `constantly in the dark ?  where's that at ?'" -- Joni Mitchell


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!math.ohio-state.edu!uwm.edu!newsfeed.internetmci.com!in2.uu.net!munnari.OZ.AU!harbinger.cc.monash.edu.au!newsroom.utas.edu.au!mg4-71.its.utas.edu.au!user
From: roger_sf@postoffice.utas.edu.au (Roger Fleming)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Followup-To: sci.crypt
Date: 24 Aug 1996 08:20:10 GMT
Organization: University of Tasmania
Lines: 37
Message-ID: <roger_sf-240896173928@mg4-71.its.utas.edu.au>
References: <321DE177.2193@io.com>
NNTP-Posting-Host: mg4-71.its.utas.edu.au

Terry Ritter <ritter@io.com> wrote:

[...]
>  Next, it is clear that codebook attacks can start to be effective
>  when we get two ciphertext blocks which are the same.  If we assume
>  an absolutely flat plaintext distribution (we assume cipher block
>  chain (CBC) mode, or other plaintext randomizer) and 64-bit blocks,
>  we can expect to see this with about 2**32 blocks, and rapidly
[...]
>  In fact, the whole need to use CBC (or other plaintext randomization)
>  is based on the small amount of language entropy in 8 character
[...]
>
>  On the other hand, a 200-byte block of plaintext language should
>  have about 300 bits of entropy, so we could expect to find two the
>  same in about 2**150 blocks.[...]

Codebook attacks are a serious threat on low entropy texts, but chaining
solves the problem completely and efficiently, and certain other problems
as well.

I agree wholeheartedly that even for high entropy texts, 64 bit wide blocks
will one day (real soon now) reach the end of their useful lives. But that
doesn't mean we have to go all the way to 1600 bit blocks. Just as with key
sizes, quite a small increment should be good for a very long time indeed.
We don't require (and probably never will) 1600 bit symmetric keys to
defeat keysearch attacks; and we don't require 1600 bit blocks to defeat
codebook attacks.

Consider a 128 bit block cipher. Quite apart from possibly anticipating 64
bit processors (or new UFN designs), it should take about 2**64 blocks, or
262,144 exabytes of data before codebooks start to be any use at all (if
used in a chaining mode). This is equivalent to about 1620 channels of real
time hi-res 24-bit colour video encrypted continuously for 100 years. The
storage system for this codebook, once completed, will require billions of
tonnes of matter (even assuming storage systems requiring only one atom per
bit).


========
Path: news.io.com!usenet
From: ritter@io.com (Terry Ritter)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Sun, 25 Aug 1996 07:25:19 GMT
Lines: 89
Message-ID: <4vovcb$ap5@nntp-1.io.com>
References: <321DE177.2193@io.com> <roger_sf-240896173928@mg4-71.its.utas.edu.au>
NNTP-Posting-Host: dialup-01-153.austin.io.com
X-Newsreader: Forte Free Agent 1.0.82


In <roger_sf-240896173928@mg4-71.its.utas.edu.au>
roger_sf@postoffice.utas.edu.au (Roger Fleming) writes:

>Terry Ritter  wrote:
>
>[...]
>Codebook attacks are a serious threat on low entropy texts, but chaining
>solves the problem completely and efficiently, and certain other problems
>as well.

 So much for the positive side, but chaining has significant
 costs and actually *creates* certain problems.  If we can avoid
 chaining, we may simplify things in at least some applications.


>I agree wholeheartedly that even for high entropy texts, 64 bit wide
>blocks will one day (real soon now) reach the end of their useful lives.
>But that doesn't mean we have to go all the way to 1600 bit blocks. Just
>as with key sizes, quite a small increment should be good for a very long
>time indeed.  We don't require (and probably never will) 1600 bit
>symmetric keys to defeat keysearch attacks; and we don't require 1600 bit
>blocks to defeat codebook attacks.

 I believe I generally agree with the motivation behind this.
 That is, dramatically larger blocks or keys are not necessary
 for strength.  Real cryptographic strength should be measured as
 the cheapest way to gain the hidden information, and few of us
 could possibly maintain physical security sufficient to prevent
 billion-dollar intrusions (or even thousand dollar intimidations).

 Nevertheless, from the design standpoint, once one has a flexible
 efficient, large-block technology, *why not* use large blocks and
 keys?  Surely it is obvious that an efficient keying mechanism
 should handle short keys very well -- then if one wants to use keys
 with only 96 bits, that's fine.  But the basic technology could
 support real keyspaces in excess of 1000 bits with the exact same
 efficient design.  (The storage of 1000-bit keys might have once
 been a problem, but those days are gone.)  The situation is similar
 with large blocks.  The added costs for larger keys or blocks are
 not linear, especially with new technology.

 There *is* an argument, though, for 200-byte blocks (or so!), which
 is the expected language entropy (~300 bits) and the resulting
 level for codebook attacks (2**150 blocks).  We might want this
 last figure to be 2**120 or more to take this particular issue off
 the table, and that implies 160-byte blocks.  Again, a
 slightly-larger block is not an implementation hardship or cost.
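
 The arithmetic behind those figures is just the birthday bound; the
 roughly-1.5-bits-per-character entropy estimate is the working
 assumption here, not a measured value:

     # Birthday-bound arithmetic for codebook attacks.
     from math import sqrt

     def blocks_until_repeat(entropy_bits):
         return sqrt(2.0 ** entropy_bits)    # ~2**(entropy/2) blocks

     print(blocks_until_repeat(64))          # 64-bit flat blocks: ~2**32
     print(blocks_until_repeat(300))         # 200 bytes of text: ~2**150
     # Putting the collision level at 2**120 needs about 240 bits of
     # entropy, i.e. roughly 160 bytes of plaintext at ~1.5 bits/char.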


>Consider a 128 bit block cipher. Quite apart from possibly anticipating
>64 bit processors (or new UFN designs), it should take about 2**64
>blocks, or 262,144 exabytes of data before codebooks start to be any use
>at all (if used in a chaining mode).

 Fine, but we can *avoid* the use of chaining if we use substantially
 larger blocks.  Avoiding chaining better supports:

    1) use in packet-switching environments (packets are often
      delivered out of sequence),
    2) random disk access, and
    3) parallel processing (as has recently come to my attention :-).


>This is equivalent to about 1620 channels of real time hi-res 24-bit
>colour video encrypted continuously for 100 years. The storage system
>for this codebook, once completed, will require billions of tonnes of
>matter (even assuming storage systems requiring only one atom per bit).

 ---
 Terry Ritter   ritter@io.com   http://www.io.com/~ritter/


========
Path: news.io.com!insync!news.ios.com!news2.cais.net!news.cais.net!tezcat!cam-news-hub1.bbnplanet.com!news.mathworks.com!newsfeed.internetmci.com!in3.uu.net!info.htcomp.net!NewsWatcher!user
From: wtshaw@htcomp.net (W T Shaw)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: 23 Aug 1996 18:36:09 GMT
Organization: Another Netscape News Server User
Lines: 50
Message-ID: <wtshaw-2308961339080001@207.17.188.113>
References: <4vi4pb$fsq@elna.ethz.ch>
NNTP-Posting-Host: 207.17.188.113

In article <4vi4pb$fsq@elna.ethz.ch>, gec@acm.org wrote:

> Hello everybody,
>
> current data encryption techniques usually encrypt a data stream or
> small blocks of data. While this makes perfect sense for communication
> where you need to encrypt and decrypt 'on the fly', it is perhaps not
> the optimal solution for archiving, and generally not the strongest way
> to secure data.  These are just idle thoughts, and I would be quite
> interested to hear what you think about it.

You are absolutely right.  Given the same algorithm, the larger the block,
the more possibilities for keys, the fewer blocks are used, and it is apt to
be harder to attack by brute force.  Keeping things in nice small blocks may
appear to make it easier to process, but that is no real excuse.
>
> Imagine a data file (or semantic entity, or whatever) being encrypted
> as a whole, which means that each of the output bits depends on each of
> the input bits (and naturally the secret key). This way, many kinds of
> attacks may be a bit less feasible e.g. imagine doing a known plaintext
> attack on something which simply can not be known in its entity, but of
> which certain parts are well known (e.g. contents of encapsulated IP
> headers, or headers of archived and encrypted mail) -- or imagine doing
> a linear CA attack when the relevant block size is about a Megabyte.
> 
> Current block cipher chaining methods (CBC, OFB, CFB, ...) do not offer
> this property, the best result is that the last encrypted 'block'
> depends on all previous elements.

The down side is that when parts depend on other parts, then the message
is increasingly subject to random errors completely destroying the ability
to decode the message.  

A good overall compromise is to use longer blocks up to a fairly large
size as a limit.  You would have some messages below that limit, others
using several blocks.  You also have to solve the varying size problem of
blocks.  If you have no limit to size of the possible blocks, you create
another problem, one of defining keyspace.

One class of algorithms I use breaks messages into blocks that might
approach 250 characters in length for some blocks, a good common maximum
for string sizes for most higher level programming languages. Having the
longer strings as a floor for manipulation allows me to get beyond the
popular tinkertoy, bit-oriented algorithms.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
wtshaw@htcomp.net   Mac Crypto Programs
 You should at least know how to use ROT13.
"Fhpprff vf n Wbhearl, Abg n Qrfgvangvba."
          http://www.htcomp.net/wts/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/


========
Path: news.io.com!usenet
From: Terry Ritter 
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Fri, 23 Aug 1996 23:34:25 -0500
Organization: Ritter Software Engineering
Lines: 259
Message-ID: <321E8651.1521@io.com>
Reply-To: ritter@io.com
NNTP-Posting-Host: dialup-01-145.austin.io.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0 (Win95; I)
CC: Jerry Leichter <leichter@smarts.com>

Jerry Leichter <leichter@smarts.com> wrote

> Terry Ritter wrote:
> > [Interesting examples of uses for ciphers with large, or even
> > variable, block sizes.]
>
> The counter-arguments that have held sway are more
> application-oriented
> and historical than theoretical:
>
> 	1.  Hardware implementations of large block sizes are more
> 		difficult.

 Whenever one does something new, it will have new costs, new
 tradeoffs, and new ways to do things.  Certainly it is easier to
 build a chip with 64 data I/O pins than one with 1600 I/O pins;
 therefore, don't do that.  Bus bandwidth limitations are problems
 shared with video displays, and could have similar solutions.

 As an erstwhile chip architect myself, I would say that *both*
 Fenced DES *and* Variable Size Block Cipher technologies have
 very attractive hardware implementations, even if they will never
 be quite as straightforward as DES.  On the other hand, I note
 that the most successful CPU chips are *not* built on the idea
 that "this chip was easy to implement."  Indeed, we often have
 the opposite situation entirely.


> 	2.  The biggest demand is for encryption of communication
> 		streams.

 I'm a little confused here:  This discussion so far has been about
 *block* ciphers.  Certainly stream ciphers are important, and I have
 addressed them in other work, but here we discuss *block* ciphers.

> 		This adds a number of constraints.  For example, stream
> 		mode operation is highly desireable; you can't afford
> 		to buffer large amounts of information just for
> 		encryption.

 I guess the issue here is the phrase "large amounts of information."
 Inevitably, disk sectors *are* written as buffered blocks, and
 generally are 512 bytes or larger.  This amount of data is *already
 buffered*.  Similarly, most modern communications work on "packets"
 or "frames," which are also *already buffered*.  These natural
 buffer sizes are far, far larger than 8-byte DES blocks.

 Note that the main reason we use cipher block chain (CBC) mode
 with DES ("stream mode"?) is because of the disastrously small
 amount of language entropy in an 8-character block.  In contrast,
 it is much less necessary to chain large blocks -- we can use
 electronic code book (ECB) mode and simply cipher the blocks.
 This is a big advantage *for* random disk access or communications
 systems (like IP) which may deliver packets out of order.
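
 To make that concrete, a minimal sketch (mine, not Ritter's code) of
 ciphering 512-byte sectors independently, so that any one sector can be
 read back without touching the rest.  The wide-block cipher is passed in
 as a callable; the toy stand-in exists only to make the sketch run:

SECTOR = 512

def encrypt_disk(encrypt_block, key: bytes, data: bytes) -> list[bytes]:
    """ECB over large blocks: each full sector is ciphered independently."""
    return [encrypt_block(key, data[i:i + SECTOR])
            for i in range(0, len(data), SECTOR)]

def read_sector(decrypt_block, key: bytes, disk: list[bytes], k: int) -> bytes:
    """Random access: only sector k needs to be deciphered."""
    return decrypt_block(key, disk[k])

def toy_block(key: bytes, block: bytes) -> bytes:
    """Self-inverse stand-in, NOT a wide-block cipher; substitute a real one."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(block))

disk = encrypt_disk(toy_block, b"secret", b"A" * 2048)        # four sectors
assert read_sector(toy_block, b"secret", disk, 2) == b"A" * 512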


>(The original question in this thread
> 		concerned encryption modes in which the output depended
> 		on the *entire* input, no matter how large.  Such a
> 		mode would be entirely unusable in a communications
> 		framework.)

 Almost anything, taken to excess, is impractical or dangerous.

 Since most messages are "short" (relative to available store), they
 are often completely buffered anyway.  These can be ciphered as
 single units, and larger messages ciphered as a sequence of blocks
 which are *much* larger than 8-byte DES.  There is no need to go
 overboard on this, since huge benefits can be reaped from
 relatively small buffers.


> 	3.  The larger the block, the larger the average space that
> 		is wasted to fill out short blocks.  In a communications
> 		framework, many messages are quite short.  A really
> 		large block could end up devoting more space to fill
> 		than to useful message!

 I note that, in this case, it would be very nice to have a Variable
 Size Block and not *have* wasted space, in general.

 But even a Fencing or Mixing cipher can be one of a *set* of
 ciphers of various widths.  We use the large one as long as we can
 fill the block, then use smaller and smaller block sizes at most
 once each at the end of the message.  The remaining "wasted space"
 would be that of the smallest block implementation, and comparable
 to that in DES.
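
 As a sketch of that scheduling idea (my own illustration, assuming
 4x/2x/1x widths of 32, 16, and 8 bytes and padding to 8 bytes as plain
 DES would require; the ciphering of each block is omitted):

# A sketch of the block-width scheduling only, not Ritter's Fenced DES code.
# Pad to an 8-byte (1x) multiple, use 4x (32-byte) blocks while they fill,
# then at most one 2x and one 1x block at the end of the message.

def schedule_widths(msg_len: int) -> list[int]:
    """Return the sequence of block widths, in bytes, used to cover a message."""
    padded = (msg_len + 7) // 8 * 8           # pad to 8 bytes, as plain DES would
    n = padded // 8                           # length in 1x (8-byte) units
    widths = [32] * (n // 4)                  # 4x blocks as long as they fill
    n %= 4
    if n >= 2:                                # at most one 2x block
        widths.append(16)
        n -= 2
    if n == 1:                                # at most one 1x block
        widths.append(8)
    return widths

print(schedule_widths(100))   # [32, 32, 32, 8]  (100 bytes pad to 104)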


>  	4.  All other things being equal, a large-block cipher is
> 		probably going to be slower than a small-block cipher
> 		on a per-bit basis:  If each bit of the output depends
> 		on each bit of the input, then the minimum possible
> 		operations per bit doubles when you double the block
> 		size.

 This argument is flat-out false, because modern mixing operations
 (which assure that every output bit depends upon every input bit)
 can be much faster than Feistel ciphering, which depends upon
 many repeated rounds to get a good mixing.  Even my first
 Fenced DES implementation would cipher a 4x block at data rates
 much faster than Triple DES.

 Now, if we use DES as a component, we clearly cannot go faster
 than DES, but we can *approach* DES speeds in a much wider and
 stronger cipher.  Fencing and Mixing technology which does not
 use DES can have better throughput.

>(This kind of argument is necessarily vague and
> 		incomplete, since with a larger block size you may be
> 		able to get away with, say, fewer rounds for equivalent
> 		security.

 If you want better technology, the whole issue of Feistel ciphering
 with its implicit repeated "rounds" of operation has got to go.  By
 combining mixing and ciphering in one operation, Feistel ciphering
 does neither optimally well.  We can improve on both functions by
 performing them separately.

>But I know of no ciphers that actually come
> 		out ahead for large block sizes on this basis.)

 Both the Fencing and Mixing and Variable Size Block Cipher
 technologies provide better mixing performance and so need fewer
 layers.  Unlike Feistel technology, none of these layers are
 repeated in "rounds."

 We now have about 20 years of optimization experience with DES,
 and just a couple of early implementations of VSBC's.  I expect
 a mature VSBC design to offer wider, dynamically-variable size
 blocks, much greater strength, inherent resistance to Differential
 and Linear cryptanalysis, with performance comparable to or better
 than the best DES implementations, on a byte-per-byte basis.


> 	5.  Many of these issues grow less important with each passing
> 		year as chips get faster

 No matter how fast a chip is, it can never be as fast as our dreams.
 Indeed, the more our dreams are realized, the more advanced they
 get.

>(though encryption of traffic
> 		on the highest speed lines remains problematical -
> 		transmission data rates are going up faster than CPU
> 		speeds these days, and the trend lines indicate that
> 		this will continue.  If it does, we'll have to go for
> 		parallelism of some sort - perhaps just pipelining is
> 		enough - and that's much easier to do with smaller
> 		block sizes.)

 Please.  While parallelism probably *would* be difficult with
 multi-megabyte blocks, let's be practical.  Frankly, ciphering
 blocks of a few hundred bytes or a few KB is likely to be *more*
 efficient than ciphering blocks of 8 bytes each.  And if we use
 a large block size, we probably don't need to chain the blocks,
 but can instead cipher them independently, which is a *tremendous*
 advantage *for* parallelism.

 When we have a block-size choice, we can choose what we want for
 best operation; if we have no choice, we can only take what we get.
 Which is likely to produce the better product?

>However, there is an existing infra-
> 		structure now.  Block sizes are often "frozen in" to
> 		existing designs and difficult and expensive to change.
> 		That's one reason people like triple-DES:  It can be
> 		used as a drop-in replacement for DES, with effects
> 		only on the relatively low-frequency activity of key
> 		distribution.

 Throughput (or lack of it) is one reason that some systems managers
 absolutely *hate* Triple-DES.  While individual users have CPU cycles
 to burn, servers get faster and faster yet end up farther behind.
 The growth in numbers of users, and the simultaneous growth in the
 desires of those users, far outpace our ability to scale-up
 current systems with new hardware.  Fundamentally new system
 structures are often required.

 Fenced DES can be highly compatible with DES: we use 4x blocks up
 to the end of a message, then use 2x and 1x ciphering, thus
 producing no expansion over ciphering with DES alone.

 Clearly, even a VSBC can be made to cipher in blocks whose sizes are
 multiples of 8 bytes.


> 	6.  Ritter's arguments are strongest when applied to file (or
> 		disk) encryption, and larger block sizes might be very
> 		appropriate there (up to some limit, since you don't
> 		want the cipher block size to force you to transfer
> 		significantly more data to and from the disk).

 I see no reason why any particular technology must necessarily be
 best at every possible job.  But were one single cipher to be picked,
 an efficient cipher with dynamically variable size blocks (which can
 be huge blocks or DES blocks) would seem very attractive.

>However,
> 		there is also a strong pressure to standardize on one
> 		encryption technology - DES today, who-knows-what
> 		tomorrow.

 This is a serious fundamental error.  Instead of standardizing on
 any particular *cipher*, we should innovate a standard *interface*
 and the ability to plug-in "any" desired cipher.  Then, if any
 particular cipher is found to have problems, any of us using that
 cipher could switch to another.  Note that in this environment,
 there may be relatively few users using any particular cipher.

 (Both ends need the same cipher, of course, but this can be
 automatically negotiated in the background.  Everybody could be
 expected to have some common ciphers as a part of their basic
 package, plus various custom ciphers which could even be
 distributed as a part of Public Key delivery.)
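
 A minimal sketch of what such a plug-in arrangement might look like (an
 assumption of mine, not a proposal from this thread): one common
 interface, a registry of installed ciphers, and a trivial negotiation
 that picks the first cipher both ends list.  The cipher names are just
 illustrative labels:

from abc import ABC, abstractmethod

class BlockCipher(ABC):
    """The common interface every plug-in cipher implements."""
    name: str

    @abstractmethod
    def encrypt(self, key: bytes, block: bytes) -> bytes: ...

    @abstractmethod
    def decrypt(self, key: bytes, block: bytes) -> bytes: ...

REGISTRY: dict[str, BlockCipher] = {}

def register(cipher: BlockCipher) -> None:
    """Install a cipher under the name used during negotiation."""
    REGISTRY[cipher.name] = cipher

def negotiate(ours: list[str], theirs: list[str]) -> str:
    """Pick the first cipher we offer that the other end also supports."""
    for name in ours:
        if name in theirs:
            return name
    raise ValueError("no cipher in common")

# Illustrative names only:
print(negotiate(["vsbc-512", "fenced-des", "des-cbc"],
                ["des-cbc", "fenced-des"]))            # fenced-des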

 Compare the ability to plug in new ciphers to the current situation
 where we know that DES is obsolete and yet are coerced by our
 investment into using it or something very much like it.

 At one time, the idea of selecting from among arbitrary ciphers
 was being seriously considered in the ANSI banking standards
 committee X9F3.

>If one decides that it's a good idea to use
> 		the same cryptosystem for both files and communications
> 		(which has its pluses, like the ability to use fast
> 		encryption chips developed for communications in disk
> 		controllers), then constraints on the communications
> 		side "leak through" to the file side, even if they
> 		aren't really relevant there.

 If one must standardize on one particular cipher for many different
 applications, a Variable Size Block Cipher has got to be attractive.

> 		(Of course, from a security point of view, the more
> 		strong cryptosystems you have, the better.

Indeed.  See my web pages:      http://www.io.com/~ritter/


>All the
> 		available evidence indicates that NSA produces many
> 		different cryptosystems for different purposes.  That's
> 		just prudent design.  The same is actually true for any
> 		component with security implications - a security hole
> 		in, say, Netscape implies a vulnerability on the part
> 		of almost all Web users these days.  Nevertheless, the
> 		pressures for standardization are so great that they are
> 		often impossible to resist.)

 Standards come and go, and, nowadays, we could field a completely
 new ciphering standard in a year or so.  Look at how fast HTML has
 changed.

>
> 							-- Jerry
>
 ---
 Terry Ritter   ritter@io.com   http://www.io.com/~ritter/


========
Path: news.io.com!insync!news.ios.com!news2.cais.net!news.cais.net!tezcat!cam-news-hub1.bbnplanet.com!news.mathworks.com!newsfeed.internetmci.com!news.sgi.com!mr.net!winternet.com!schneier
From: schneier@counterpane.com (Bruce Schneier)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Sat, 24 Aug 1996 16:53:24 -0500
Organization: Counterpane Systems
Lines: 31
Message-ID: <schneier-2408961653240001@news.winternet.com>
References: <4vi4pb$fsq@elna.ethz.ch>
NNTP-Posting-Host: ppp-66-40.dialup.winternet.com
X-Newsreader: Yet Another NewsWatcher 2.2.0b7

There has not been a lot of research on LARGE block sizes, probably
because they are so much harder to analyze.   Block ciphers with small
block sizes are more versatile, and we know all about using feedback
mechanisms to encrypt large blocks of plaintext with small block ciphers.
Still, it makes more than a little sense to encrypt hard drive sectors
with block algorithms that operate on the entire sector as a single block.

There are some serious problems with making a single block cipher with a
block size in the multi-hundred byte range (as opposed to cascading a
block cipher, stream cipher, and block cipher chaining in the other
direction, which will do much the same thing).

1.  It is very hard to get secure diffusion of every plaintext bit to
every ciphertext bit.  There are a lot more possibilities for good
differentials to attack.  On the other hand, you'll probably never get the
volume of plaintext required for a differential attack.

2.  It is a royal pain to analyze large block ciphers.  It is hard enough
to analyze constructions out of smaller primitives.

I kind of like the idea, but I worry about its security.

Bruce

**************************************************************************
* Bruce Schneier              APPLIED CRYPTOGRAPHY, 2nd EDITION is
* Counterpane Systems         available.  For info on a 15%
* schneier@counterpane.com    discount offer, send me e-mail.
*
* For Blowfish C code, see ftp.ox.ac.uk:/pub/crypto/misc/blowfish.c.gz
**************************************************************************


========
Newsgroups: sci.crypt
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!howland.erols.net!cam-news-hub1.bbnplanet.com!news.mathworks.com!newsfeed.internetmci.com!in3.uu.net!news.dsndata.com!backbone!backbone!wayne
From: wayne@backbone.uucp (Wayne Schlitt)
Subject: Re: Ciphers with *Huge* Block Size ?
In-Reply-To: schneier@counterpane.com's message of Sat, 24 Aug 1996 16: 53:24 -0500
Message-ID: <WAYNE.96Aug24215404@backbone.uucp>
Sender: wayne@backbone (Wayne Schlitt)
Reply-To: wayne@cse.unl.edu
Organization: The Backbone Cabal
References: <4vi4pb$fsq@elna.ethz.ch> <schneier-2408961653240001@news.winternet.com>
Date: Sun, 25 Aug 1996 03:54:04 GMT
Lines: 40

In article <schneier-2408961653240001@news.winternet.com> schneier@counterpane.com (Bruce Schneier) writes:
> There are some serious problems with making a single block cipher with a
> block size in the multi-hundred byte range
>
> 1.  It is very hard to get secure diffusion of every plaintext bit to
> every ciphertext bit.  [ ... ]

Is it really that important that _every_ plaintext bit gets diffused
to _every_ ciphertext bit?  Let's look at what happens when you try to
_decrypt_ a message using a smaller block size with the standard
encryption modes:

ECB     each bit only affects every bit in the small block size
CBC     each bit only affects every bit in the small block size plus
        the immediately following block.
CFB     same as CBC
OFB     same as ECB

Yes, when you change a single bit in the plaintext and use CBC or CFB,
every following block changes, but the following blocks also have
"known plaintext" in the form of the previous block's ciphertext.
Also, neither CBC nor CFB changes the preceding blocks of ciphertext,
so you know which block contains the changed bit.  This also means
that, on average, a single bit change in the plaintext will affect
only a little more than half of the ciphertext.
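
A small demonstration of that pattern (my own toy code; the 8-byte cipher
is a throwaway SHA-256-based Feistel and is not secure): flip one
plaintext bit under CBC and only its own block and the blocks after it
change, while every earlier block is untouched:

import hashlib

def _f(key: bytes, rnd: int, half: bytes) -> bytes:
    # Toy round function: 4 bytes of SHA-256(key || round || half).
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:4]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt(key: bytes, block: bytes) -> bytes:
    # Throwaway 4-round Feistel on an 8-byte block -- illustration only.
    left, right = block[:4], block[4:]
    for rnd in range(4):
        left, right = right, _xor(left, _f(key, rnd, right))
    return left + right

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> list[bytes]:
    prev, out = iv, []
    for i in range(0, len(plaintext), 8):
        block = toy_encrypt(key, _xor(plaintext[i:i + 8], prev))
        out.append(block)
        prev = block
    return out

key, iv = b"k" * 16, b"\x00" * 8
msg = bytearray(b"A" * 64)                  # eight 8-byte blocks
c1 = cbc_encrypt(key, iv, bytes(msg))
msg[27] ^= 0x01                             # flip one bit in block 3
c2 = cbc_encrypt(key, iv, bytes(msg))
print([int(a != b) for a, b in zip(c1, c2)])   # [0, 0, 0, 1, 1, 1, 1, 1]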


I could _easily_ be wrong, but it seems reasonable to me that each bit
of plaintext would only have to diffuse to more than twice the number
of bits in the small block size for the large block cipher to be as
secure.  It might even be less than the number of bits in the small
block size since the diffusion could be spread to all parts of the
larger block.


-wayne

--
Wayne Schlitt can not assert the truth of all statements in this
article and still be consistent.


========
Path: news.io.com!imci4!newsfeed.internetmci.com!newsfeeder.gi.net!news.mid.net!mr.net!uunet!in2.uu.net!info.htcomp.net!NewsWatcher!user
From: wtshaw@htcomp.net (W T Shaw)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: 26 Aug 1996 19:12:07 GMT
Organization: Another Netscape News Server User
Lines: 28
Message-ID: <wtshaw-2608961415530001@207.17.188.102>
References: <4vi4pb$fsq@elna.ethz.ch> <schneier-2408961653240001@news.winternet.com> <WAYNE.96Aug24215404@backbone.uucp>
NNTP-Posting-Host: 207.17.188.102

In article <WAYNE.96Aug24215404@backbone.uucp>, wayne@cse.unl.edu wrote:
>
> I could _easily_ be wrong, but it seams reasonable to me that each bit
> of plaintext would only have to diffuse to more than twice the number
> of bits in the small block size for the large block cipher to be as
> secure.  It might even be less than the number of bits in the small
> block size since the diffusion could be spread to all parts of the
> larger block.
>
Small blocks are compatible with small keys, unless some sort of chaining
takes place.  Then the chaining of the later blocks might affect fewer and
fewer blocks, not a consistent effect at all.  With larger blocks, a vast
amount of interaction between bits is readily available as a basic
characteristic of the algorithm. It is not necessary that all parts of the
block be affected by changing a single bit in the data.

Now looking at my pet algorithm, it really flies in the face of this
mixing wisdom, for while considerable diffusion of the key takes place,
the output does not at first glance seem to reflect any. The process does
have the desired effect, of course, of making the output exceedingly
difficult to correlate with the input while maintaining robustness of the
text, diminishing the effects of noise as a corrupting influence.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
wtshaw@htcomp.net   Mac Crypto Programs
 You should at least know how to use ROT13.
"Fhpprff vf n Wbhearl, Abg n Qrfgvangvba."
          http://www.htcomp.net/wts/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!howland.erols.net!cam-news-hub1.bbnplanet.com!news.mathworks.com!fu-berlin.de!zrz.TU-Berlin.DE!news.dfn.de!news.gwdg.de!namu20.gwdg.de!lucks
From: Stefan Lucks <lucks@namu01.gwdg.de>
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Sun, 25 Aug 1996 14:02:23 +0200
Organization: GWDG, Goettingen
Lines: 27
Message-ID: <Pine.OSF.3.91.960825135232.7490A-100000@namu20.gwdg.de>
References: <4vi4pb$fsq@elna.ethz.ch> <schneier-2408961653240001@news.winternet.com>
NNTP-Posting-Host: namu20.num.math.uni-goettingen.de
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
X-Sender: lucks@namu20.gwdg.de
In-Reply-To: <schneier-2408961653240001@news.winternet.com>

On Sat, 24 Aug 1996, Bruce Schneier wrote:

> 2.  It is a royal pain to analyze large block ciphers.  It is hard enough
> to analyze constructions out of smaller primitives.

It is pretty easy to build block ciphers with huge blocks from
conventional cryptographic primitives, such as stream ciphers and hash
functions, and to analyze the block cipher's security.

You can even prove the block cipher's security (provided the building
blocks are secure).

Anderson and Biham did this with BEAR and LION.  I'd propose BEAST, which
uses the same building blocks as BEAR and LION, but is faster.  See the
"papers" section at my homepage for a paper describing BEAST; my
homepage's URL is "http://www.num.math.uni-goettingen.de/lucks/".

A link to Anderson's and Biham's paper has been given by someone else in
this thread.
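
For the flavor of such a construction, here is a BEAR-like three-round
sketch (my own code; HMAC-SHA256 and a SHA-256 counter keystream are
stand-ins for the pluggable keyed hash and stream cipher, so see the
bear-lion paper mentioned above for the real definitions and the security
argument):

import hashlib, hmac

H_LEN = 32   # bytes; the "left" piece is one hash output wide

def keyed_hash(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def keystream(seed: bytes, length: int) -> bytes:
    # Stand-in stream cipher: SHA-256 of (seed || counter), concatenated.
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(k1: bytes, k2: bytes, block: bytes) -> bytes:
    # Three-round unbalanced Feistel over one huge block: hash, stream, hash.
    left, right = block[:H_LEN], block[H_LEN:]
    left = _xor(left, keyed_hash(k1, right))
    right = _xor(right, keystream(left, len(right)))
    left = _xor(left, keyed_hash(k2, right))
    return left + right

def decrypt(k1: bytes, k2: bytes, block: bytes) -> bytes:
    left, right = block[:H_LEN], block[H_LEN:]
    left = _xor(left, keyed_hash(k2, right))
    right = _xor(right, keystream(left, len(right)))
    left = _xor(left, keyed_hash(k1, right))
    return left + right

msg = b"x" * 4096                            # one 4 KB "huge" block
ct = encrypt(b"key-one", b"key-two", msg)
assert decrypt(b"key-one", b"key-two", ct) == msg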


Stefan Lucks  Inst. f. NAM, Lotzestrasse 16-18, 37083 Goettingen, Germany
              e-mail: lucks@math.uni-goettingen.de
              home:   http://www.num.math.uni-goettingen.de/lucks/
----- Wer einem Computer Unsinn erzaehlt, muss immer damit rechnen. -----



========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!chi-news.cic.net!newspump.sol.net!news.inc.net!uwm.edu!math.ohio-state.edu!howland.erols.net!newsfeed.internetmci.com!in3.uu.net!info.htcomp.net!NewsWatcher!user
From: wtshaw@htcomp.net (W T Shaw)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: 26 Aug 1996 18:48:47 GMT
Organization: Another Netscape News Server User
Lines: 40
Message-ID: <wtshaw-2608961352320001@207.17.188.102>
References: <4vi4pb$fsq@elna.ethz.ch> <schneier-2408961653240001@news.winternet.com>
NNTP-Posting-Host: 207.17.188.102

In article <schneier-2408961653240001@news.winternet.com>,
schneier@counterpane.com (Bruce Schneier) wrote:

> There has not been a lot of research on LARGE block sizes, probably
> because they are so much harder to analyze.
>
> There are some serious problems with making a single block cipher with a
> block size in the multi-hundred byte range (as opposed to cascading a
> block cipher, stream cipher, and block cipher chaining in the other
> direction, which will do much the same thing).
>
> 1.  It is very hard to get secure diffusion of every plaintext bit to
> every ciphertext bit.  There are a lot more possibilities for good
> differentials to attack.  On the other hand, you'll probably never get the
> volume of plaintext required for a differential attack.

Given a large block, the amount of diffusion becomes an option all its
own.  Indeed, the keylength of the cipher can partly be scaled this way.
Contemporary key sizes would likely be at the bottom of the range.
>
> 2.  It is a royal pain to analyze large block ciphers.  It is hard enough
> to analyze constructions out of smaller primitives.
>
This sounds like a plus rather than a negative, but it depends on whether
you want it to be readily breakable or not; I prefer not.

> I kind of like the idea, but I worry about its security.

Like anything else new and/or different, large block ciphers need study to
see if there are basic flaws.  Being best acquainted with my own, I find
only a few people who are qualified, and fewer still after they find the
cryptanalysis pretty tough sledding.
>
Bill
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
wtshaw@htcomp.net   Mac Crypto Programs
 You should at least know how to use ROT13.
"Fhpprff vf n Wbhearl, Abg n Qrfgvangvba."
          http://www.htcomp.net/wts/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/


========
Newsgroups: sci.crypt
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!math.ohio-state.edu!uwm.edu!spool.mu.edu!news.sgi.com!wetware!news.dsndata.com!backbone!backbone!wayne
From: wayne@backbone.uucp (Wayne Schlitt)
Subject: Re: Ciphers with *Huge* Block Size ?
In-Reply-To: wtshaw@htcomp.net's message of 26 Aug 1996 18: 48:47 GMT
Message-ID: <WAYNE.96Aug26190836@backbone.uucp>
Sender: wayne@backbone (Wayne Schlitt)
Reply-To: wayne@cse.unl.edu
Organization: The Backbone Cabal
References: <4vi4pb$fsq@elna.ethz.ch> <schneier-2408961653240001@news.winternet.com> <wtshaw-2608961352320001@207.17.188.102>
Date: Tue, 27 Aug 1996 01:08:36 GMT
Lines: 19

In article <wtshaw-2608961352320001@207.17.188.102> wtshaw@htcomp.net (W T Shaw) writes:
> In article <schneier-2408961653240001@news.winternet.com>,
> schneier@counterpane.com (Bruce Schneier) wrote:
> > 2.  It is a royal pain to analyze large block ciphers.  It is hard enough
> > to analyze constructions out of smaller primitives.
> >
> This sounds like a plus rather than a negative, but it depends on whether
> you want it to be readily breakable or not, I prefer not.


Sounds like a _minus_ to me.  You want to be able to _analyze_ a
cipher in order to _prove_ that it is strong.


-wayne

--
Wayne Schlitt can not assert the truth of all statements in this
article and still be consistent.


========
Path: news.io.com!arlut.utexas.edu!cs.utexas.edu!howland.erols.net!cam-news-hub1.bbnplanet.com!news.mathworks.com!uunet!in3.uu.net!info.htcomp.net!NewsWatcher!user
From: wtshaw@htcomp.net (W T Shaw)
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: 27 Aug 1996 06:22:37 GMT
Organization: Another Netscape News Server User
Lines: 23
Message-ID: <wtshaw-2708960126130001@207.17.188.101>
References: <4vi4pb$fsq@elna.ethz.ch> <schneier-2408961653240001@news.winternet.com> <wtshaw-2608961352320001@207.17.188.102> <WAYNE.96Aug26190836@backbone.uucp>
NNTP-Posting-Host: 207.17.188.101

In article <WAYNE.96Aug26190836@backbone.uucp>, wayne@cse.unl.edu wrote:

> In article <wtshaw-2608961352320001@207.17.188.102> wtshaw@htcomp.net (W T Shaw) writes:
> > In article ,
> > schneier@counterpane.com (Bruce Schneier) wrote:
> > > 2.  It is a royal pain to analyze large block ciphers.  It is hard enough
> > > to analyze constructions out of smaller primitives.
> > >
> > This sounds like a plus rather than a negative, but it depends on whether
> > you want it to be readily breakable or not, I prefer not.

> Sounds like a _minus_ to me.  You want to be able to _analyze_ a
> cipher in order to _prove_ that it is strong.
>
That is the idea, to make it hard to correlate the data.  A strong cipher
should have this characteristic.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
wtshaw@htcomp.net   Mac Crypto Programs
 You should at least know how to use ROT13.
"Fhpprff vf n Wbhearl, Abg n Qrfgvangvba."
          http://www.htcomp.net/wts/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/


========
Path: news.io.com!news2.cais.net!news.cais.net!hunter.premier.net!news-res.gsl.net!news.gsl.net!swrinde!howland.erols.net!newsfeed.internetmci.com!in2.uu.net!ott.istar!istar.net!van.istar!west.istar!n1van.istar!van-bc!news.mindlink.net!uniserve!oronet!news.acsu.buffalo.edu!news.drenet.dnd.ca!crc-news.doc.ca!nott!cunews!usenet
From: Pat Morin <morin@scs.carleton.ca>
Newsgroups: sci.crypt
Subject: Re: Ciphers with *Huge* Block Size ?
Date: Sat, 24 Aug 1996 18:43:01 -0400
Organization: Carleton University
Lines: 37
Message-ID: <321F8575.4972@scs.carleton.ca>
References: <4vi4pb$fsq@elna.ethz.ch>
NNTP-Posting-Host: veda.scs.carleton.ca
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 2.01 (X11; I; SunOS 5.5 sun4u)

Germano Caronni wrote:
>
> Hello everybody,
>
> current data encryption techniques usually encrypt a data stream or
> small blocks of data. While this makes perfect sense for communication
> where you need to encrypt and decrypt 'on the fly', it is perhaps not
> the optimal solution for archiving, and generally not the strongest way
> to secure data.  These are just idle thoughts, and I would be quite
> interested to hear what you think about it.
>
> Imagine a data file (or semantic entity, or whatever) being encrypted
> as a whole, which means that each of the output bits depends on each of
> the input bits (and naturally the secret key). This way, many kinds of
> attacks may be a bit less feasible e.g. imagine doing a known plaintext
> attack on something which simply can not be known in its entity, but of
> which certain parts are well known (e.g. contents of encapsulated IP
> headers, or headers of archived and encrypted mail) -- or imagine doing
> a linear CA attack when the relevant block size is about a Megabyte.
>
> Current block cipher chaining methods (CBC, OFB, CFB, ...) do not offer
> this property, the best result is that the last encrypted 'block'
> depends on all previous elements.
>
> Comments to this one?

Look at the BEAR and LION ciphers developed by Ross Anderson and
Eli Biham.  They're fast, easy to implement, and have some nice
security properties.

ftp://ftp.cl.cam.ac.uk/users/rja14/bear-lion.ps.Z

If you're really interested, you can also look at AARDVARK.

http://www.scs.carleton.ca/~morin/sac96/AARDVARK/AARDVARK.ps

Pat


Terry Ritter, his current address, and his top page.

Last updated: 1998-01-20