By Gilad Parann-Nissany
March 15, 2012 03:45 AM EDT
In the last week or two, the security community has been abuzz with two different papers on the security of RSA keys. It turns out there are tens of thousands of RSA keys out there that are weak: their modulus shares a prime factor with the modulus of another public key, allowing both keys to be factored (i.e. broken) in a matter of minutes. The dust seems to have settled by now, and the root cause appears to be poor generation of these keys, in other words, low-quality random number generators. How does this issue relate to cloud security, Porticor’s forte? Read on…
Generation of cryptographic-quality random numbers is a difficult science, well beyond the scope of this blog. Unfortunately, the old saying applies: you get what you pay for. In the case of crypto randomness, the more initial randomness (a.k.a. entropy) you stir into the pot, the better the quality of the random numbers you will get out of it, and the stronger your cryptographic system will become.
The current research is the latest in a long history of cryptanalysis that exploits faulty random number generators (RNGs). Starting with the early days of SSL, there have been many such attacks on cryptosystems. Perhaps the best known, and certainly the most embarrassing, is the Debian/OpenSSL bug, where for almost two years, RSA keys generated on Debian and Ubuntu systems were drawn from a space of only about 2^15 (32,768) possible keys, and were thus trivial to guess. Luckily for us, this was fixed in mid-2008.
Back to the new research: it turns out most of the weak keys are related to that essential ingredient of the stew, initial entropy. Before it can spit out good encryption keys, the RNG needs to be fed by real-life events, such as key presses, network packets, and disk activity. Now, many systems start out by creating RSA keys (often in the form of certificates) very early on, as early as a few seconds after the system has been turned on. In the case of a PC, a useful amount of entropy has usually accumulated by the time any new software is installed. So where do we expect the lack of entropy to be a problem?
- In embedded appliances, which boot up from a “burned” factory image and immediately create some keys.
- In virtual systems (cloud instances), which boot up from stock software images and immediately go off to create some crypto keys.
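To see why this matters in practice, here is a minimal sketch, assuming a Linux guest, that reads the kernel’s own entropy estimate; on a freshly booted cloud instance or embedded appliance this figure is often alarmingly low, which is exactly when naive first-boot key generation goes wrong.

```python
# Minimal sketch, assuming a Linux guest: read the kernel's current
# entropy estimate. On a freshly booted cloud instance or embedded
# appliance this figure is often very low, exactly the situation in
# which first-boot key generation produces weak keys.

def available_entropy_bits() -> int:
    # The kernel exposes its entropy estimate (in bits) at this path.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read().strip())

if __name__ == "__main__":
    bits = available_entropy_bits()
    print(f"kernel entropy pool estimate: {bits} bits")
    if bits < 256:
        print("warning: shallow pool; avoid generating long-lived keys now")
```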
All is not lost. When designing a complex virtual system, you can apply some industry best practices to obtain a solid randomness pipeline. This is essential if cryptography is a central part of your application’s security. And with the prevalent use of SSL, this is true for most modern systems.
- Use the Linux /dev/random and /dev/urandom generator. This generator has undergone serious scrutiny; even though some minor weaknesses were found, it is generally believed to be sufficiently strong for cryptographic use (a short sketch of reading from it follows this list).
- Whenever an appliance boots, including its very first boot, it should receive an injection of randomness from a central randomness source, which may be your management subsystem. This allows the appliance to generate strong keys as soon as it starts out (see the seed-injection sketch after this list).
- The management subsystem itself needs to receive a significant amount of real entropy from user and network interaction.
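For the first recommendation, here is a minimal sketch, in Python as one of many reasonable choices, of drawing key material from the kernel generator; os.urandom() reads from the same pool that backs /dev/urandom.

```python
import os

def generate_key_bytes(n: int = 32) -> bytes:
    # os.urandom() draws from the kernel CSPRNG (the /dev/urandom pool),
    # so application code never rolls its own generator.
    return os.urandom(n)

if __name__ == "__main__":
    key = generate_key_bytes()  # 32 bytes = 256 bits of key material
    print("fresh key material:", key.hex())
```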
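And for the boot-time injection, a minimal sketch, assuming a hypothetical management endpoint (management.example.internal below) that hands each appliance a fresh seed at boot. Note that writing into /dev/urandom mixes the seed into the kernel pool but does not raise the kernel’s entropy estimate; crediting entropy would require the privileged RNDADDENTROPY ioctl, yet mixing alone already hardens an entropy-starved first boot.

```python
import urllib.request

# Hypothetical endpoint on the management subsystem that returns random bytes.
SEED_URL = "https://management.example.internal/seed"

def inject_boot_seed() -> None:
    seed = urllib.request.urlopen(SEED_URL).read()  # e.g. 64 fresh random bytes
    # Writing to /dev/urandom mixes the seed into the kernel pool. It does
    # not increase the entropy estimate (that needs the privileged
    # RNDADDENTROPY ioctl), but the pool contents are strengthened.
    with open("/dev/urandom", "wb") as pool:
        pool.write(seed)

if __name__ == "__main__":
    inject_boot_seed()
```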
It may not be a surprise that all these best practices are implemented in Porticor’s VPD appliance and our Virtual Key Management service. We put significant effort into ensuring that our cryptographic subsystems are fed with crypto-grade randomness. This is yet another aspect of our relentless cloud security drive.
To summarize, the RSA algorithm is as strong as ever, but you definitely need a crypto-grade random number generator to use it securely. This is far from trivial in the cloud, and is yet another reason to get cloud security from the experts.