Path: illuminati.io.com!uunet!in1.uu.net!EU.net!uknet!lyra.csx.cam.ac.uk!news
From: beaver@castor.cse.psu.edu (Don Beaver)
Newsgroups: sci.crypt.research

Subject: Re: A cryptographic pseudorandom number generator
Date: 15 Mar 1995 00:32:54 GMT
Organization: Penn State Computer Science
Lines: 134
Approved: crypt@cs.aukuni.ac.nz
Message-ID: <3k5cjm$2dl@lyra.csx.cam.ac.uk>
NNTP-Posting-Host: nene.cl.cam.ac.uk

suk@usceast.cs.scarolina.edu (Peter Kwangjun Suk) writes:
>beaver@castor.cse.psu.edu (Don Beaver) writes:
>>suk@usceast.cs.scarolina.edu (Peter Kwangjun Suk) writes:
>>
>>>How about using the secure Blum-Blum-Shub quadratic residue generator
>>>with a 512 bit modulus?  We could run a fast 64-bit block cipher in
>>>OFB mode, and occasionally flush the shift register and insert 63 bits
>>>from seven iterations of the BBS generator.  (We have lg(512) = 9 bits
>>>available from an iteration.)
>
>>Not to quibble, but why are you assuming lg(512) bits are secure?
>>
>>It would be misleading to use theoretical results to assume this.
>
>If the _Applied Cryptography_ blurb is not in error, then two things:
 
(You stated BBS correctly and read Schneier correctly.  I'll suggest
a polynomial-time attack on 512-bit BBS using 9-bit windows below.)
 
 
        ============ The Literature ==============
 
I just took a look at Schneier.  On p. 366:
 
  As it turns out, you can use more than the least significant bit
  of each x_i as a pseudo-random bit.  According to [863,864,865,19,20],
  if  l  is the length of x_i, the least significant log_2 l bits
  of x_i can be used.
 
This is a slight misinterpretation of the theoretical results.
 
(On the other hand, a more precise explanation would probably have
expanded Bruce's book into a twelve-volume work.  Still, there is
an immense danger in applying the theoretical results, one that
serendipity may avoid.  And serendipity is not a trusted friend.)
 
 
Let's take [20], for example: Alexi/Chor/Goldreich/Schnorr,
SIAM J. Comput., 17:2, 1988, 194--209.   On p. 208:
 
  In section 5 (6), we have shown simultaneous security results of
  O(log n) bits in RSA (Rabin) encryption function.  ...
 
The BBS generator uses the same technique as the Rabin encryption function
(squaring modulo n).  As far as I know (and apparently Bruce as well),
O(log n) is the best rating.
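For concreteness, the construction under discussion can be sketched in
a few lines.  This is only a toy of my own: the tiny primes, names,
and seed below are not from Schneier or the cited papers, and a real
modulus would use two large Blum primes.

```python
# Toy sketch of the BBS generator: iterate x -> x^2 mod n and keep
# the least significant lg(l) bits of each state, where l is the bit
# length of the modulus (Schneier's "length of x_i").
p, q = 499, 547          # both primes are 3 mod 4 (Blum primes); toy sizes
n = p * q                # 272953, a 19-bit modulus
l = n.bit_length()       # l = 19
j = l.bit_length() - 1   # floor(log2 l) = 4 bits per iteration

def bbs_windows(seed, iterations):
    x = (seed * seed) % n         # square once to land among the residues
    out = []
    for _ in range(iterations):
        x = (x * x) % n
        out.append(x % (1 << j))  # j least-significant bits of x_i
    return out

print(bbs_windows(123456, 5))
```

For a 512-bit modulus the same arithmetic gives floor(log2 512) = 9,
which is where the 9-bit windows in this thread come from.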
 
 
        ============ Getting Rid of Big Oh ==============
 
>>A proof that O(lg n) bits are secure doesn't help.  It merely says
>>that if you design a method to predict bits, it won't work in the
>>limit if it's given O(lg n) bits.  This says nothing about what happens
>>in initial cases.
>>
>>For example, I can request that you use  1000 lg n  bits
>>(which is O(lg n)) and I'll then predict the sequence trivially
>>for n = 512.  Surely, my prediction method will fail when n gets larger,
>>but not yet.
>
>        2) Where the heck did "Big-Oh" get into this?  I think there's
>           some confusion here between N = pq and n = ceil(lg(N)).
 
 
Big-Oh got into it from the published theoretical literature,
which I guessed was the ultimate source of the original "log_2 n."
 
It makes no great difference.  Let's forget about O(lg n) and
just stick to  lg n.
 
 
        ============ Polytime Attacks on BBS ==============
 
>Do you know of any actual polynomial-time attacks on BBS?  As I
>understand it, these would imply polynomial-time factoring.  Please
>post some references, I would be very interested in these.
 
Here is a polynomial-time attack on BBS [using  lg n  windows]
that works fine for n up to 1000000 bits.
 
0. If n>1000000, then flip a random bit and return it as the prediction
   for the next bit.  Else do 1-6:
1. Factor n.
2. List all cycles  x,x^2,x^4,...  modulo n.
3. Request a sequence of n^1000000 BBS bits.
4. Divide that sequence into  lg n  sized chunks.
5. Match those chunks up to one of the cycles we found in step 2.
6. Predict the (n^1000000 + 1)-st bit by using the next number in the cycle.
 
 
(Okay, I overstepped human knowledge in one place, but it's not where
you might think.  This is *indeed* a polynomial-time algorithm.
 
The only problem is that there may be more than one cycle from step 2
that matches the sequence.  (You'd see the danger immediately if we
were using only a 1-bit window.)  This is an easy algorithm to state,
but for all I know, it could be as hard as Fermat's Last Theorem to
prove or disprove the existence of multiple subsequences matching a
given pattern.
 
None of the cited references are talking about this, however.  I've tried
to pose a simple candidate solution for the sake of illustration.)
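Steps 2, 4, and 5 above can be acted out on a toy modulus.  The code
below is my own illustration (the modulus 7*11 and the 3-bit window
are chosen only to keep the cycle table small); it collects *all*
matching cycles, which is exactly where the multiple-match question
arises.

```python
from math import gcd

n = 7 * 11   # toy Blum modulus: 7 and 11 are both 3 mod 4
k = 3        # toy window size; the attack in the text would use lg n bits

def cycle_from(x):
    """Iterate x -> x^2 mod n and return just the periodic part."""
    seen, seq = {}, []
    while x not in seen:
        seen[x] = len(seq)
        seq.append(x)
        x = (x * x) % n
    return seq[seen[x]:]

# Step 2: list every squaring cycle mod n, deduplicated by rotation.
cycles = set()
for x in range(2, n):
    if gcd(x, n) == 1:
        c = cycle_from(x)
        i = c.index(min(c))
        cycles.add(tuple(c[i:] + c[:i]))

def window(s):
    return s % (1 << k)           # k least-significant bits of a state

# Steps 4-5: the attacker sees a few windows from an unknown trajectory
# and matches them against every rotation of every cycle.
secret = cycle_from(4)            # true trajectory; hidden from attacker
observed = [window(s) for s in secret[:3]]

matches = []
for cyc in cycles:
    L = len(cyc)
    for r in range(L):
        rot = list(cyc[r:] + cyc[:r])
        if [window(s) for s in rot[:3]] == observed:
            matches.append(window(rot[3 % L]))  # step 6's prediction

print("observed windows:", observed)
print("candidate next windows:", sorted(set(matches)))
```

If the candidate set has one element, the prediction is forced; if it
has more than one, that is the multiple-subsequence ambiguity just
described.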
 
 
Context of Alexi/Chor/Goldreich/Schnorr:
 
  No problem if this attack works for n up to 1000000 bits.
  It will eventually fail on much larger n.  (And this is
  *all* that you can say, in this context!)
 
Context of Schneier:
 
  Okay, so this attack works.  It's even polynomial time.  Big deal.
  It's a polynomial with a very big constant!  The constant is so big
  that nobody would use this method.  "Polynomial" does not mean
  "efficient."
 
 
The point is that there may be very simple and very "efficient"
*polynomial-time* attacks that predict BBS for n<1000000.
 
Alexi/Chor/Goldreich/Schnorr would not be contradicted in the least.
Such attacks would merely _eventually_ fail.  *How big*  n  has to be
is not a part of the theoretical rating.  There is no theoretical
support to deny the existence of "efficient," polynomial-time attacks
on BBS for n<1000000.
 
 
In fact, BBS may well be fine for n=512 and  lg n = 9-bit windows.
For that matter, it may even be fine for n=512 and 256-bit windows!
The theoretical results won't tell you anything.  Whether BBS is fine
with 9-bit windows or 256-bit windows is a matter for "hackers" and
modern technology to decide.  Not the cited papers!
 
Don