Discussion: Prime generation
Ben Laurie
2014-05-26 18:24:54 UTC
I was looking at the internal functions in bn_prime.c:
probable_prime(), probable_prime_dh() and probable_prime_dh_safe().

Possibly I'm missing something, but... don't all of these functions
actually generate (probable) safe primes? This is particularly
bemusing for the DH ones.

Also, probable_prime() has some cunning optimisations which it seems
that the other two could also use. Anyone got any idea why not?

Finally, all of them have a bias: they're much more likely to pick a
prime with a long run of non-primes before it than one without (in
the case of the DH ones, the condition is slightly more subtle,
depending on parameters, but it's there nevertheless). Is this wise?
Viktor Dukhovni
2014-05-26 18:52:35 UTC
On Mon, May 26, 2014 at 07:24:54PM +0100, Ben Laurie wrote:

> Finally, all of them have a bias: they're much more likely to pick a
> prime with a long run of non-primes before it than one that hasn't (in
> the case of the DH ones, the condition is slightly more subtle,
> depending on parameters, but its there nevertheless). Is this wise?

Where do you see the bias?

--
Viktor.
Ben Laurie
2014-05-26 19:23:07 UTC
On 26 May 2014 19:52, Viktor Dukhovni <openssl-***@dukhovni.org> wrote:
> On Mon, May 26, 2014 at 07:24:54PM +0100, Ben Laurie wrote:
>
>> Finally, all of them have a bias: they're much more likely to pick a
>> prime with a long run of non-primes before it than one that hasn't (in
>> the case of the DH ones, the condition is slightly more subtle,
>> depending on parameters, but its there nevertheless). Is this wise?
>
> Where do you see the bias?

They all work by picking a random number and then stepping upwards
from that number until a probable prime is found. Clearly, that is
more likely to find primes with a long run of non-primes before them
than primes with a short run.
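
In toy C, the two selection strategies look roughly like this (a sketch
only, not OpenSSL's BIGNUM code; trial division stands in for the
small-prime sieve plus Miller-Rabin):

    /*
     * Toy sketch, not OpenSSL's code: small uint32_t candidates and
     * trial division stand in for BIGNUMs, the small-prime sieve and
     * Miller-Rabin.  The point is the difference in control flow.
     */
    #include <stdint.h>
    #include <stdlib.h>

    static int is_prime(uint32_t n)
    {
        if (n < 2)
            return 0;
        for (uint32_t d = 2; d * d <= n; d++)
            if (n % d == 0)
                return 0;
        return 1;
    }

    /* What the bn_prime.c functions do: random odd start, then step up. */
    static uint32_t incremental_prime(uint32_t lo, uint32_t span)
    {
        uint32_t p = (lo + (uint32_t)rand() % span) | 1u;

        while (!is_prime(p))
            p += 2;   /* a prime is hit in proportion to the gap before it */
        return p;
    }

    /* The unbiased alternative: discard the candidate and redraw. */
    static uint32_t uniform_prime(uint32_t lo, uint32_t span)
    {
        uint32_t p;

        do
            p = (lo + (uint32_t)rand() % span) | 1u;
        while (!is_prime(p));
        return p;     /* every odd prime in the range is equally likely */
    }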
Viktor Dukhovni
2014-05-26 20:17:06 UTC
On Mon, May 26, 2014 at 08:23:07PM +0100, Ben Laurie wrote:

> > Where do you see the bias?
>
> They all work by picking a random number and then stepping upwards
> from that number until a probable prime is found. Clearly, that is
> more likely to find primes with a long run of non-primes before than
> primes with a short run.

I don't think this is an issue.  By the prime number theorem, among
$k$-bit numbers roughly $1/k$ are prime (taking log base 2 as a crude
approximation of the natural log).  So for 2048-bit numbers, the
average gap between primes is about 11 bits.

Consider a 32-bit range in the least significant bits; it contains
~2^21 (~2 million) primes.  Our odds of choosing a given prime are
proportional to the gap from the previous prime, so the selection is
not uniform, leaving less than 21 bits of entropy.  Worst case we
deterministically choose a single prime in each such interval, and
lose 20 bits of entropy out of 2048, leaving 2028.

In practice there is a large number of primes with weights near
maximal, and half as many primes with half the weight, ... and the
entropy loss is much smaller.

To choose uniformly, one would have to generate a new random
candidate after each failure, which requires substantially more
random data.
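
For what it's worth, this is easy to check empirically at small sizes.
A toy experiment (plain C, build with -lm; a sieve stands in for real
primality testing): weight each prime in [2^19, 2^20) by the length of
the composite run before it, and compare the entropy of that
distribution with a uniform choice.  The gap-weighted figure should
come out well under one bit below the uniform one.

    /*
     * Toy experiment: compare uniform selection of primes in [2^19, 2^20)
     * with selection weighted by the preceding composite run, which models
     * the incremental search.  Build with: cc entropy.c -lm
     */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const unsigned lo = 1u << 19, hi = 1u << 20;
        char *composite = calloc(hi, 1);      /* sieve of Eratosthenes */
        double *weight = malloc(sizeof(double) * (hi - lo));
        unsigned prev = lo, count = 0;
        double total = 0.0, h = 0.0;

        for (unsigned n = 2; n * n < hi; n++)
            if (!composite[n])
                for (unsigned m = n * n; m < hi; m += n)
                    composite[m] = 1;

        for (unsigned n = lo + 1; n < hi; n++) {
            if (composite[n])
                continue;
            weight[count++] = (double)(n - prev); /* composite run before n */
            total += n - prev;
            prev = n;                             /* edge effect at lo ignored */
        }

        for (unsigned i = 0; i < count; i++) {
            double p = weight[i] / total;
            h -= p * log2(p);
        }

        printf("primes: %u  uniform: %.2f bits  gap-weighted: %.2f bits\n",
               count, log2((double)count), h);
        free(weight);
        free(composite);
        return 0;
    }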

--
Viktor.
mancha
2014-05-26 20:20:43 UTC
On Mon, May 26, 2014 at 08:23:07PM +0100, Ben Laurie wrote:
> On 26 May 2014 19:52, Viktor Dukhovni <openssl-***@dukhovni.org>
> wrote:
> > On Mon, May 26, 2014 at 07:24:54PM +0100, Ben Laurie wrote:
> >
> >> Finally, all of them have a bias: they're much more likely to pick
> >> a prime with a long run of non-primes before it than one that
> >> hasn't (in the case of the DH ones, the condition is slightly more
> >> subtle, depending on parameters, but its there nevertheless). Is
> >> this wise?
> >
> > Where do you see the bias?
>
> They all work by picking a random number and then stepping upwards
> from that number until a probable prime is found. Clearly, that is
> more likely to find primes with a long run of non-primes before than
> primes with a short run.

To put this another way, take OpenSSL's probable_prime(). It finds
"probable primes" by searching arithmetic progressions (common
difference of 2) with random start points.

The starting point is more likely to be chosen from within long runs
of composites thus biasing prime selection to ones at tail ends of
longer sequences of composites.

For our purposes, the operative question is whether the distribution
bias created can be leveraged in any way to attack factoring (RSA) or
dlog (DH).

--mancha
Viktor Dukhovni
2014-05-26 20:49:03 UTC
On Mon, May 26, 2014 at 08:20:43PM +0000, mancha wrote:

> For our purposes, the operative question is whether the distribution
> bias created can be leveraged in any way to attack factoring (RSA) or
> dlog (DH).

The maximum gap between primes of size $n$ is conjectured to be
around $log(n)^2$.  If $n$ is $2^k$, the gap is at most $k^2$, with
an average value of $k$.  Thus the most probable primes are at most
$k$ times more probable than is typical, and we lose at most $log(k)$
bits of entropy.  This is not a problem.
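
Putting rough numbers on that (same crude base-2 logs as above): for
$k = 2048$ the average gap is about $k = 2^{11}$ and the conjectured
maximum gap is about $k^2 = 2^{22}$, so no single prime is favoured by
more than a factor of about $2^{11}$, and the loss is at most about
$log(k) = 11$ bits out of the roughly $2048 - 11 = 2037$ bits a uniform
choice of 2048-bit prime would give.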

--
Viktor.

mancha
2014-05-26 21:01:53 UTC
On Mon, May 26, 2014 at 08:49:03PM +0000, Viktor Dukhovni wrote:
> On Mon, May 26, 2014 at 08:20:43PM +0000, mancha wrote:
>
> > For our purposes, the operative question is whether the distribution
> > bias created can be leveraged in any way to attack factoring (RSA)
> > or dlog (DH).
>
> The maximum gap between primes of size $n$ is conjectured to be around
> $log(n)^2$. If $n$ is $2^k$, the gap is at most $k^2$, with an
> average value of $k$. Thus the most probable primes are most $k$
> times more probable than is typical, and we lose at most $log(k)$ bits
> of entropy. This is not a problem.

One consequence of the k-tuple conjecture (generally believed to be
true) is that the size of gaps between primes is Poisson distributed.

You're right when you say the entropy loss in going from a uniform
distribution to OpenSSL's biased one is small. In that sense there is
not much to be gained entropy-wise from using a process that gives
uniformly distributed primes over what OpenSSL does.

However, if a way exists to exploit the OpenSSL distribution bias, it
can be modified to be used against uniformly distributed primes with
only minimal algorithmic complexity increases. In other words, the gold
standard here isn't a uniform distribution.

--mancha
mancha
2014-05-27 05:23:45 UTC
On Mon, May 26, 2014 at 09:01:53PM +0000, mancha wrote:
> On Mon, May 26, 2014 at 08:49:03PM +0000, Viktor Dukhovni wrote:
> > On Mon, May 26, 2014 at 08:20:43PM +0000, mancha wrote:
> >
> > > For our purposes, the operative question is whether the
> > > distribution bias created can be leveraged in any way to attack
> > > factoring (RSA) or dlog (DH).
> >
> > The maximum gap between primes of size $n$ is conjectured to be
> > around $log(n)^2$. If $n$ is $2^k$, the gap is at most $k^2$, with
> > an average value of $k$. Thus the most probable primes are most $k$
> > times more probable than is typical, and we lose at most $log(k)$
> > bits of entropy. This is not a problem.
>
> One consequence of the k-tuple conjecture (generally believed to be
> true) is that the size of gaps between primes is distributed poisson.
>
> You're right when you say the entropy loss between a uniform
> distribution to OpenSSL's biased one is small. In that sense there is
> not much to be gained entropy-wise from using a process that gives
> uniformly distributed primes over what OpenSSL does.
>
> However, if a way exists to exploit the OpenSSL distribution bias, it
> can be modified to be used against uniformly distributed primes with
> only minimal algorithmic complexity increases. In other words, the
> gold standard here isn't a uniform distribution.
>
> --mancha

This is probably more wonkish than Ben intended with his question but
for those interested, the Poisson result I alluded to is due to
Gallagher [1].

[1] Gallagher, On the distribution of primes in short intervals,
Mathematika, 1976
Otto Moerbeek
2014-05-27 06:23:29 UTC
On Tue, May 27, 2014 at 05:23:45AM +0000, mancha wrote:

> On Mon, May 26, 2014 at 09:01:53PM +0000, mancha wrote:
> > On Mon, May 26, 2014 at 08:49:03PM +0000, Viktor Dukhovni wrote:
> > > On Mon, May 26, 2014 at 08:20:43PM +0000, mancha wrote:
> > >
> > > > For our purposes, the operative question is whether the
> > > > distribution bias created can be leveraged in any way to attack
> > > > factoring (RSA) or dlog (DH).
> > >
> > > The maximum gap between primes of size $n$ is conjectured to be
> > > around $log(n)^2$. If $n$ is $2^k$, the gap is at most $k^2$, with
> > > an average value of $k$. Thus the most probable primes are most $k$
> > > times more probable than is typical, and we lose at most $log(k)$
> > > bits of entropy. This is not a problem.
> >
> > One consequence of the k-tuple conjecture (generally believed to be
> > true) is that the size of gaps between primes is distributed poisson.
> >
> > You're right when you say the entropy loss between a uniform
> > distribution to OpenSSL's biased one is small. In that sense there is
> > not much to be gained entropy-wise from using a process that gives
> > uniformly distributed primes over what OpenSSL does.
> >
> > However, if a way exists to exploit the OpenSSL distribution bias, it
> > can be modified to be used against uniformly distributed primes with
> > only minimal algorithmic complexity increases. In other words, the
> > gold standard here isn't a uniform distribution.
> >
> > --mancha
>
> This is probably more wonkish than Ben intended with his question but
> for those interested, the Poisson result I alluded to is due to
> Gallagher [1].
>
> [1] Gallagher, On the distribution of primes in short intervals,
> Mathematika, 1976

Would this work: if you are worried the algorithm will never pick the
higher of a prime pair, just make it search backward half of the time?

But I understand it has no real security implications.

-Otto

Otto Moerbeek
2014-05-27 06:26:25 UTC
On Tue, May 27, 2014 at 08:23:29AM +0200, Otto Moerbeek wrote:

> On Tue, May 27, 2014 at 05:23:45AM +0000, mancha wrote:
>
> > On Mon, May 26, 2014 at 09:01:53PM +0000, mancha wrote:
> > > On Mon, May 26, 2014 at 08:49:03PM +0000, Viktor Dukhovni wrote:
> > > > On Mon, May 26, 2014 at 08:20:43PM +0000, mancha wrote:
> > > >
> > > > > For our purposes, the operative question is whether the
> > > > > distribution bias created can be leveraged in any way to attack
> > > > > factoring (RSA) or dlog (DH).
> > > >
> > > > The maximum gap between primes of size $n$ is conjectured to be
> > > > around $log(n)^2$. If $n$ is $2^k$, the gap is at most $k^2$, with
> > > > an average value of $k$. Thus the most probable primes are most $k$
> > > > times more probable than is typical, and we lose at most $log(k)$
> > > > bits of entropy. This is not a problem.
> > >
> > > One consequence of the k-tuple conjecture (generally believed to be
> > > true) is that the size of gaps between primes is distributed poisson.
> > >
> > > You're right when you say the entropy loss between a uniform
> > > distribution to OpenSSL's biased one is small. In that sense there is
> > > not much to be gained entropy-wise from using a process that gives
> > > uniformly distributed primes over what OpenSSL does.
> > >
> > > However, if a way exists to exploit the OpenSSL distribution bias, it
> > > can be modified to be used against uniformly distributed primes with
> > > only minimal algorithmic complexity increases. In other words, the
> > > gold standard here isn't a uniform distribution.
> > >
> > > --mancha
> >
> > This is probably more wonkish than Ben intended with his question but
> > for those interested, the Poisson result I alluded to is due to
> > Gallagher [1].
> >
> > [1] Gallagher, On the distribution of primes in short intervals,
> > Mathematika, 1976
>
> Would this work: if you are worried the algorithm will never pick the
> highest of a prime pair, just make it search backward half of the time?

(unless it hits it right on of course)

>
> But I understand it has no real security implications.
>
> -Otto
>
mancha
2014-05-28 00:47:07 UTC
On Tue, May 27, 2014 at 08:23:29AM +0200, Otto Moerbeek wrote:
> On Tue, May 27, 2014 at 05:23:45AM +0000, mancha wrote:
>
> > On Mon, May 26, 2014 at 09:01:53PM +0000, mancha wrote:
> > > On Mon, May 26, 2014 at 08:49:03PM +0000, Viktor Dukhovni wrote:
> > > > On Mon, May 26, 2014 at 08:20:43PM +0000, mancha wrote:
> > > >
> > > > > For our purposes, the operative question is whether the
> > > > > distribution bias created can be leveraged in any way to
> > > > > attack factoring (RSA) or dlog (DH).
> > > >
> > > > The maximum gap between primes of size $n$ is conjectured to be
> > > > around $log(n)^2$. If $n$ is $2^k$, the gap is at most $k^2$,
> > > > with an average value of $k$. Thus the most probable primes are
> > > > most $k$ times more probable than is typical, and we lose at
> > > > most $log(k)$ bits of entropy. This is not a problem.
> > >
> > > One consequence of the k-tuple conjecture (generally believed to
> > > be true) is that the size of gaps between primes is distributed
> > > poisson.
> > >
> > > You're right when you say the entropy loss between a uniform
> > > distribution to OpenSSL's biased one is small. In that sense there
> > > is not much to be gained entropy-wise from using a process that
> > > gives uniformly distributed primes over what OpenSSL does.
> > >
> > > However, if a way exists to exploit the OpenSSL distribution bias,
> > > it can be modified to be used against uniformly distributed primes
> > > with only minimal algorithmic complexity increases. In other
> > > words, the gold standard here isn't a uniform distribution.
> > >
> > > --mancha
> >
> > This is probably more wonkish than Ben intended with his question
> > but for those interested, the Poisson result I alluded to is due to
> > Gallagher [1].
> >
> > [1] Gallagher, On the distribution of primes in short intervals,
> > Mathematika, 1976
>
> Would this work: if you are worried the algorithm will never pick the
> highest of a prime pair, just make it search backward half of the
> time?
>
> But I understand it has no real security implications.
>
> -Otto

The issue is not limited to twin primes, though that extreme drives the
point home. In the twin case {p,p+2}, OpenSSL only finds p+2 if p+1 or
p+2 happens to be the randomly selected start point. So, the proportion
of primes OpenSSL finds that are twins will be significantly lower than
theory predicts. With OpenSSL's incremental search, the probability a
particular probable prime p is selected is proportional to the length of
the gap of composites which immediately precedes it.

Brandt and Damgard [1], from what I can tell the fathers of the
incremental search OpenSSL uses, share Viktor Dukhovni's view and use an
entropy argument to conclude little or no additional risk exists with
incremental searches relative to uniformly distributed prime generation.

Mihailescu (of Catalan Conjecture fame) establishes a complexity
equivalence class to argue that improved attacks against an incremental
search can be converted to attacks against uniformly distributed prime
generation with comparable runtimes [2]. For Mihailescu, incremental
search bias is "tolerable", especially in light of the lower entropy
costs and efficiency gains relative to naive discard & repeat. The
improvements he models are, in essence, improvements in the
state of the art, not the result of leveraging bias.

Fouque and Tibouchi [3] offer the differing view that it's preferable to
minimize bias and generate primes that are almost uniform "even if it is
not immediately clear how such biases can help an adversary". They
suggest a few algorithms that improve on naive discard & repeat by
discarding only the top N bits of a candidate at each iteration, among
other innovations.

---
[1] Brandt and Damgard, On Generation of Probable Primes by Incremental
Search, 1988
[2] Mihailescu, Measuring the Cryptographic Relevance of Biased Public
Key Distributions, 1998
[3] Fouque and Tibouchi, Close to Uniform Prime Number Generation With
Fewer Random Bits, 2011
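
A very loose sketch of the bit-saving idea just described (not the
algorithm from the Fouque-Tibouchi paper; random_bits() and
is_probable_prime() are assumed helpers, not OpenSSL APIs): keep the
low bits of the candidate fixed and redraw only the top bits after
each failed test, so a retry costs far fewer fresh random bits than
full discard & repeat.

    /*
     * Loose sketch only.  random_bits() and is_probable_prime() are
     * assumed helpers, not OpenSSL APIs, and the real Fouque-Tibouchi
     * constructions are more careful about which bits are redrawn.
     */
    #include <stdint.h>

    #define BITS    64
    #define TOPBITS 16

    extern uint64_t random_bits(int nbits);     /* assumed entropy source */
    extern int is_probable_prime(uint64_t n);   /* assumed primality test */

    uint64_t prime_with_cheap_retries(void)
    {
        uint64_t low = random_bits(BITS - TOPBITS) | 1u; /* fixed odd low part */

        for (;;) {
            uint64_t top = random_bits(TOPBITS);
            uint64_t p = (top << (BITS - TOPBITS)) | low; /* redraw top bits only */

            if (is_probable_prime(p))
                return p;
        }
    }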
Geoffrey Thorpe
2014-05-28 03:14:41 UTC
I haven't read through the references but am grateful for them. Indeed, I
haven't actually followed this mail-thread in detail but I was struck by a
strange sense of déjà-vu. There was a similar discussion over 10 years ago;

http://marc.info/?t=107058742900001&r=1&w=2

:-) Talk about feeling old...

Cheers,
Geoff



On Tue, May 27, 2014 at 8:47 PM, mancha <***@zoho.com> wrote:

> On Tue, May 27, 2014 at 08:23:29AM +0200, Otto Moerbeek wrote:
> > On Tue, May 27, 2014 at 05:23:45AM +0000, mancha wrote:
> >
> > > On Mon, May 26, 2014 at 09:01:53PM +0000, mancha wrote:
> > > > On Mon, May 26, 2014 at 08:49:03PM +0000, Viktor Dukhovni wrote:
> > > > > On Mon, May 26, 2014 at 08:20:43PM +0000, mancha wrote:
> > > > >
> > > > > > For our purposes, the operative question is whether the
> > > > > > distribution bias created can be leveraged in any way to
> > > > > > attack factoring (RSA) or dlog (DH).
> > > > >
> > > > > The maximum gap between primes of size $n$ is conjectured to be
> > > > > around $log(n)^2$. If $n$ is $2^k$, the gap is at most $k^2$,
> > > > > with an average value of $k$. Thus the most probable primes are
> > > > > most $k$ times more probable than is typical, and we lose at
> > > > > most $log(k)$ bits of entropy. This is not a problem.
> > > >
> > > > One consequence of the k-tuple conjecture (generally believed to
> > > > be true) is that the size of gaps between primes is distributed
> > > > poisson.
> > > >
> > > > You're right when you say the entropy loss between a uniform
> > > > distribution to OpenSSL's biased one is small. In that sense there
> > > > is not much to be gained entropy-wise from using a process that
> > > > gives uniformly distributed primes over what OpenSSL does.
> > > >
> > > > However, if a way exists to exploit the OpenSSL distribution bias,
> > > > it can be modified to be used against uniformly distributed primes
> > > > with only minimal algorithmic complexity increases. In other
> > > > words, the gold standard here isn't a uniform distribution.
> > > >
> > > > --mancha
> > >
> > > This is probably more wonkish than Ben intended with his question
> > > but for those interested, the Poisson result I alluded to is due to
> > > Gallagher [1].
> > >
> > > [1] Gallagher, On the distribution of primes in short intervals,
> > > Mathematika, 1976
> >
> > Would this work: if you are worried the algorithm will never pick the
> > highest of a prime pair, just make it search backward half of the
> > time?
> >
> > But I understand it has no real security implications.
> >
> > -Otto
>
> The issue is not limited to twin primes though that extreme drives the
> point home. In the twin case {p,p+2}, OpenSSL only finds p+2 if p+1 or
> p+2 happens to be the randomly selected start point. So, the proportion
> of primes OpenSSL finds that are twins will be significantly lower than
> theory predicts. With OpenSSL's incremental search, the probability a
> particular probable prime p is selected is proportional to the length of
> the gap of composites which immediately precedes it.
>
> Brandt and Damgard [1], from what I can tell the fathers of the
> incremental search OpenSSL uses, share Viktor Dukhovni's view and use an
> entropy argument to conclude little or no additional risk exists with
> incremental searches relative to uniformly distributed prime generation.
>
> Mihailescu (of Catalan Conjecture fame) establishes a complexity
> equivalence class to argue improved attacks against an incremental
> search can be converted to attacks against uniformly distributed prime
> generation with comparable runtimes [2]. For Mihailescu, incremental
> search bias is "tolerable", especially in light of the lower entropy
> costs and efficiency gains relative to naive discard & repeat. The
> improvements he models are, in essence, improvements in the
> state-of-the-art not the result of leveraging bias.
>
> Fouque and Tibouchi [3] offer the differing view that it's preferable to
> minimize bias and generate primes that are almost uniform "even if it is
> not immediately clear how such biases can help an adversary". They
> suggest a few algorithms that improve on naive discard & repeat by
> discarding only the top N bits of a candidate at each iteration, among
> other innovations.
>
> ---
> [1] Brandt and Damgard, On Generation of Probable Primes by Incremental
> Search, 1988
> ]2] Mihailescu, Measuring the Cryptographic Relevance of Biased Public
> Key Distributions, 1998
> [3] Fouque and Tibouchi, Close to Uniform Prime Number Generation With
> Fewer Random Bits, 2011
>
>
Ben Laurie
2014-05-28 09:14:01 UTC
On 28 May 2014 01:47, mancha <***@zoho.com> wrote:
> Fouque and Tibouchi [3] offer the differing view that it's preferable to
> minimize bias and generate primes that are almost uniform "even if it is
> not immediately clear how such biases can help an adversary". They
> suggest a few algorithms that improve on naive discard & repeat by
> discarding only the top N bits of a candidate at each iteration, among
> other innovations.

This paper assumes two things that don't appear to be true:

a) That prime generation attempts consume entropy - why? Seems fine to
me to just stir the pool and try again.

b) That repeated random number generation is much more expensive than,
say, addition. Our experiments show that generating a new random
number each time is only half the speed of incrementing.

I'm guessing these incorrect assumptions are common in the literature?
David Jacobson
2014-05-27 05:17:26 UTC
On 5/26/14 2:01 PM, mancha wrote:
> On Mon, May 26, 2014 at 08:49:03PM +0000, Viktor Dukhovni wrote:
>> On Mon, May 26, 2014 at 08:20:43PM +0000, mancha wrote:
>>
>>> For our purposes, the operative question is whether the distribution
>>> bias created can be leveraged in any way to attack factoring (RSA)
>>> or dlog (DH).
>> The maximum gap between primes of size $n$ is conjectured to be around
>> $log(n)^2$. If $n$ is $2^k$, the gap is at most $k^2$, with an
>> average value of $k$. Thus the most probable primes are most $k$
>> times more probable than is typical, and we lose at most $log(k)$ bits
>> of entropy. This is not a problem.
> One consequence of the k-tuple conjecture (generally believed to be
> true) is that the size of gaps between primes is distributed poisson.
>
> You're right when you say the entropy loss between a uniform
> distribution to OpenSSL's biased one is small. In that sense there is
> not much to be gained entropy-wise from using a process that gives
> uniformly distributed primes over what OpenSSL does.
>
> However, if a way exists to exploit the OpenSSL distribution bias, it
> can be modified to be used against uniformly distributed primes with
> only minimal algorithmic complexity increases. In other words, the gold
> standard here isn't a uniform distribution.
>
> --mancha

I doubt the claim (in one of the messages of this thread, but not above)
that generating a fresh random value for each prime check adds
considerable expense. If we include a CTR-DRBG generator (from NIST SP
800-90A) in the implementation, then the cost for 2048-bit RSA or DH is
18 AES block encryption operations (and it could be lowered to very
close to 16). Years ago, AES was said to take 18 cycles per byte on a
Pentium Pro (according to the Wikipedia article). That comes to roughly
5,200 cycles. That's got to be peanuts relative to prime testing. On
modern processors the case is even stronger. According to the white paper at
https://software.intel.com/sites/default/files/m/d/4/1/d/8/10TB24_Breakthrough_AES_Performance_with_Intel_AES_New_Instructions.final.secure.pdf,
an Intel Core i7 Processor Extreme Edition, i7-980X can achieve 1.3
cycles per byte on AES, which would be 375 cycles. (There are a lot of
assumptions here. For one thing the paper was reporting CBC mode
decryption. If the hardware is specific to CBC mode, so it can't
get parallelism with encryption, then it would be quite a bit slower.)
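
(For the arithmetic: 18 blocks x 16 bytes/block = 288 bytes; at 18
cycles/byte that is 288 x 18 = 5,184 cycles, and at 1.3 cycles/byte it
is 288 x 1.3 ~ 375 cycles.)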

If it doesn't cost much to generate a new random value for each trial,
why not just do it?

--David
Peter Waltenberg
2014-05-27 07:45:48 UTC
Not quite correct, the prime rands shouldn't come from a DRBG, they
should come from an NRBG (NIST terminology). There's a considerable
difference between the performance of an entropy source and a DRBG.

The important point is that the output of a DRBG is not
non-deterministic.

/dev/random vs /dev/urandom performance: 1-2 bytes/second vs
100k+/second.

I did change the RNG sources for some of the OpenSSL code in our hacked
version to help with the performance problems using the wrong source
causes; for example, RSA blinding data can safely come from a DRBG
(pseudo_rand_bytes()).

Peter

Stephan Mueller
2014-05-27 07:56:18 UTC
Am Dienstag, 27. Mai 2014, 17:45:48 schrieb Peter Waltenberg:

Hi Peter,

>Not quite correct, the prime rands shouldn't come from a DRBG, they
>should come from an NRBG (NIST terminology). There's a considerable
>difference between the performance of an entropy source and a DRBG.

Not sure where you see that, but looking into, say, FIPS 186-4 appendix
C.3, it always talks about an "approved RBG". In FIPS 140-2 speak, this
implies a DRBG.

Can you please give a reference?

Ciao
Stephan
David Jacobson
2014-05-27 13:45:14 UTC
On 5/27/14 12:56 AM, Stephan Mueller wrote:
> Am Dienstag, 27. Mai 2014, 17:45:48 schrieb Peter Waltenberg:
>
> Hi Peter,
>
>> Not quite correct, the prime rands shouldn't come from a DRBG, they
>> should come from an NRBG (NIST terminology). There's a considerable
>> difference between the performance of an entropy source and a DRBG.
> Not sure where you see that, but looking into, say, FIPS 186-4 appendix
> C.3, it always talks about an "approved RBG". In FIPS 140-2 speak, this
> implies a DRBG.
>
> Can you please give a reference?
>
> Ciao
> Stephan

Let us consider a model in which the adversary gets to see all the
generated random numbers from a CTR-DRBG generator except the 2048 bits
that are used for a private ECDSA key or the two 1024 bit chunks that
become p and q in an RSA key generation operation. If the adversary
sees all that data, he might be able to do some sort of cryptoanalysis
and at least get some constraints on the key. But in the case we are
considering, all the values that do not result in primes or safe primes
(or whatever) are thrown away. The attacker does not see them. Thus it
suffices for the seed of the CTR-DRBG generator to be "entropic" data.
We do not need every trial value to be from entropic data.

--David
Joseph Birr-Pixton
2014-05-27 08:16:21 UTC
On 27 May 2014 08:45, Peter Waltenberg <***@au1.ibm.com> wrote:
> ...
> I did change the RNG sources for some of the OpenSSL code in our hacked
> version to help with the performance problems using the wrong source causes,
> for example RSA blinding data can safely come from a DRBG
> (pseudo_rand_bytes()).

I assume you mean RAND_pseudo_bytes. In which case you should know
that RAND_pseudo_bytes has a broken interface and cannot ever be used
safely in a way which makes it different from RAND_bytes.

To restate:

Callers of RAND_pseudo_bytes are either unreliable, or equivalent to
RAND_bytes. Do not use it.

Cheers,
Joe
Ben Laurie
2014-05-27 10:11:11 UTC
On 27 May 2014 09:16, Joseph Birr-Pixton <***@gmail.com> wrote:
> On 27 May 2014 08:45, Peter Waltenberg <***@au1.ibm.com> wrote:
>> ...
>> I did change the RNG sources for some of the OpenSSL code in our hacked
>> version to help with the performance problems using the wrong source causes,
>> for example RSA blinding data can safely come from a DRBG
>> (pseudo_rand_bytes()).
>
> I assume you mean RAND_pseudo_bytes. In which case you should know
> that RAND_pseudo_bytes has a broken interface and cannot ever be used
> safely in a way which makes it different from RAND_bytes.
>
> To restate:
>
> Callers of RAND_pseudo_bytes are either unreliable, or equivalent to
> RAND_bytes. Do not use it.

Have I missed something? What are you referring to here?
Joseph Birr-Pixton
2014-05-27 10:36:06 UTC
On 27 May 2014 11:11, Ben Laurie <***@links.org> wrote:
> On 27 May 2014 09:16, Joseph Birr-Pixton <***@gmail.com> wrote:
>> To restate:
>>
>> Callers of RAND_pseudo_bytes are either unreliable, or equivalent to
>> RAND_bytes. Do not use it.
>
> Have I missed something? What are you referring to here?

RAND_pseudo_bytes returns:

1 when everything worked (equivalent to RAND_bytes behaviour)
0 when everything worked, but the entropy estimator attached to the
PRNG wants it to be reseeded
0 when nothing worked (memory allocation failure, IO error, etc.)
-1 if the function isn't supported by the current RAND_METHOD.

The documentation doesn't mention the third case, leading to things like:

https://github.com/openssl/openssl/blob/master/crypto/dsa/dsa_gen.c#L208

(Stack disclosure in low memory/low fd conditions.)
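
For callers that cannot switch to RAND_bytes immediately, a defensive
pattern consistent with the behaviour described above is to treat any
return value other than 1 as failure (a sketch, not a recommendation to
keep using the function):

    #include <openssl/rand.h>

    /*
     * Sketch only: with the return values listed above, only 1 guarantees
     * that buf was filled.  Both 0 ("weak" output OR nothing written) and
     * -1 (unsupported) must be treated as failure, or uninitialised stack
     * contents may end up used as "random" data.
     */
    int get_nonce(unsigned char *buf, int len)
    {
        if (RAND_pseudo_bytes(buf, len) != 1)
            return 0;   /* do not trust the contents of buf */
        return 1;
    }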

A full writeup (along with behaviours of all the RAND_METHODs in
current trunk) is at:

http://jbp.io/2014/01/16/openssl-rand-api/

Cheers,
Joe
Peter Waltenberg
2014-05-27 09:43:46 UTC
It may have been unreliable, our version isn't. We hook the RNG
callbacks and direct them into our own code. That makes some sense of
why OpenSSL hasn't fixed those problems, but that probably should be
done now you have decent DRBGs.

As for the prime generation, I'll try to dig up a reference, but I'd
put relying on a NIST DRBG solely for RSA key generation on the list of
things to avoid.

i.e. a few months back, and assuming everything was working, you'd have
picked the strongest DRBG (Dual-EC) and generated server keys from
that.

As sequence generators those DRBGs are very good, and I don't think
anything but Dual-EC has real problems, but seriously, you have to ask
why you want real entropy for generating long-lived keys rather than a
sequence generator, particularly a NIST-specified one.

FIPS 186-4 (page 23) states an approved (pseudo) random generator shall
be used; the wording implies either. Running a DRBG in prediction
resistance mode (continual reseed) satisfies the criteria (mainly the
approved bit), AND calms my paranoia, but doesn't help you with the
entropy rate issues.

Peter

Annie
2014-05-27 21:10:57 UTC
On 27.05.2014 12:04, Ben Laurie wrote:
> On 26 May 2014 21:15, Annie <***@informatik.hu-berlin.de> wrote:
>> On 26.05.2014 21:23, Ben Laurie wrote:
>>> On 26 May 2014 19:52, Viktor Dukhovni <openssl-***@dukhovni.org> wrote:
>>>> On Mon, May 26, 2014 at 07:24:54PM +0100, Ben Laurie wrote:
>>>>
>>>>> Finally, all of them have a bias: they're much more likely to pick a
>>>>> prime with a long run of non-primes before it than one that hasn't (in
>>>>> the case of the DH ones, the condition is slightly more subtle,
>>>>> depending on parameters, but its there nevertheless). Is this wise?
>>>>
>>>> Where do you see the bias?
>>>
>>> They all work by picking a random number and then stepping upwards
>>> from that number until a probable prime is found. Clearly, that is
>>> more likely to find primes with a long run of non-primes before than
>>> primes with a short run.
>>
>> Ben, this is not the whole story. It is stepping upwards, right, but it
>> jumps also over primes p with small factors of p-1.
>
> That's what I meant by "probable safe primes".
>
>> So you should say that it is more likely to find primes with a long run
>> consisting of non-primes and primes p with small factors of (p-1)/2
>> before than primes with a short run of these numbers.
>> Look for the prime gap of length 21 between 1129 and 1151. If a search
>> arrives at 1129 it jumps over, because 564 is divisible by 4, and the
>> search even doesn't stop at 1151, due to the factor 5 of 575.
>>
>> Regards,
>> Annie.
>>
>> See lines 390-401 in crypto/bn/bn_prime.c
>>
>> loop: for (i=1; i<NUMPRIMES; i++)
>> {
>> /* check that rnd is not a prime and also
>> * that gcd(rnd-1,primes) == 1 (except for 2) */
>> ==> if (((mods[i]+delta)%primes[i]) <= 1)
>
> Exactly. I am not sure why it does this (obviously sometimes it is a
> useful property, but surely not always?).

This is in the sources from the very beginning of OpenSSL; I guess that
it comes from Phil Zimmermann's search routine in PGP. The main reason
for doing this is probably the fact that p-1 must be coprime with the
exponent e. Choosing primes p without small factors in p-1 ensures that
in most cases p-1 and e will have no common divisor.

As the prime search is costly, it is easier to drop candidates with
small factors in p-1 than to repeat the whole search.
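
As a toy illustration of the constraint (plain C on small integers;
real RSA code works on BIGNUMs): the public exponent e must satisfy
gcd(e, p-1) == 1, and dropping candidates whose p-1 has small odd
factors makes that hold automatically for the usual small exponents
such as 3 or 17.

    #include <stdint.h>

    static uint64_t gcd(uint64_t a, uint64_t b)
    {
        while (b != 0) {
            uint64_t t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    /* A prime candidate p is usable with public exponent e only if e is
     * invertible modulo p-1, i.e. gcd(e, p - 1) == 1. */
    static int usable_with_exponent(uint64_t p, uint64_t e)
    {
        return gcd(e, p - 1) == 1;
    }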

Regards,
Ann.
