Researchers have asserted that current cryptographic systems are not as secure as we have believed.
That’s a daunting statement.
When you hear that an MIT professor is publishing a paper that attacks the fundamental premises of your career, it’s only natural to get the mental equivalent of biting into a sour lemon.
Luckily, the headlines are mostly sensationalistic, and the research, while interesting, is no threat to the security industry, let alone the way of life in the developed world. Let's dig into this and figure out what exactly Professor Muriel Médard and her team are trying to say.
The concept in question is the uniformity of compressed source files. Standard cryptographic analysis assumes those files reach the highest possible level of entropy and uniformity, even when the compression algorithm does not quite get there. Médard argues that a reliance on Shannon entropy is what creates the issue. Shannon's 1948 paper was focused on communication, and advanced the idea that data traffic as a whole would average out any imperfections in the uniformity of individual pieces of data. That is a fair assumption for communication, but not the ideal approach for cryptography.
Average uniformity is not the goal of encryption; the weakest link is what matters, and that is where the conceptual error lies. When encrypted data comes under fire from a codebreaker, we do not worry about the 99.99% of the data that is properly encrypted. It is the weakest link, the portion that never reached full uniformity and entropy, that is vulnerable and puts the entire data cache at risk.
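To see why an average can hide a weak spot, here is a toy sketch of our own (the numbers are made up and are not from the paper) comparing Shannon entropy, the average measure, with min-entropy, the worst-case measure driven by the single most guessable symbol, for a slightly skewed four-symbol source:

```python
import math

# Hypothetical 4-symbol source whose encoder output is *almost* uniform,
# but one symbol is slightly favored. Probabilities are purely illustrative.
probs = [0.28, 0.24, 0.24, 0.24]

# Shannon entropy: the *average* number of bits per symbol.
shannon = -sum(p * math.log2(p) for p in probs)

# Min-entropy: determined entirely by the single most likely symbol,
# i.e. the "weakest link" that a guesser would try first.
min_entropy = -math.log2(max(probs))

print(f"Maximum possible entropy:  {math.log2(len(probs)):.3f} bits")  # 2.000
print(f"Shannon entropy (average): {shannon:.3f} bits")                # ~1.997
print(f"Min-entropy (worst case):  {min_entropy:.3f} bits")            # ~1.837
```

The average looks almost perfect, but the worst-case figure, the one a guessing attacker actually exploits, is noticeably lower.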
“We thought we’d establish that the basic premise that everyone was using was fair and reasonable, and it turns out that it’s not,” says Ken Duffy, a researcher at the National University of Ireland (NUI) Maynooth who worked alongside Médard.
Essentially, these slight deviations in the uniformity of the data open the door for a brute force attacker to test a series of assumptions. For example, assuming that a password is in English, or is even based on an actual word, could accelerate the codebreaking process. “It’s still exponentially hard, but it’s exponentially easier than we thought,” Duffy says.
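As a rough sketch of what that kind of head start looks like (the password, dictionary, and search space below are entirely hypothetical, not taken from the research), compare an attacker who guesses blindly with one who tries likely English words first:

```python
import itertools
import string

def brute_force(target, candidates):
    """Count how many guesses it takes when trying candidates in order."""
    for attempts, guess in enumerate(candidates, start=1):
        if guess == target:
            return attempts
    return None

target = "melon"  # made-up password based on an English word

# Naive attacker: every 5-letter lowercase string, in lexicographic order.
all_strings = ("".join(c) for c in
               itertools.product(string.ascii_lowercase, repeat=5))

# Informed attacker: tries a (tiny, hypothetical) dictionary of common words first.
dictionary = ["apple", "house", "melon", "tiger", "zebra"]

print("Dictionary-first attacker:", brute_force(target, dictionary), "guesses")
print("Exhaustive attacker:      ", brute_force(target, all_strings), "guesses")
```

The informed attacker still has to guess, but a good assumption about the source collapses the effective search space dramatically.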
The good news? (Yes, there is still good news.)
We are still very much talking about theoretical gains, and the security guarantees remain very much intact. Brute force attacks have always had a projected window for success, but it was so astronomically long that it was considered effectively moot. This paper simply says that it is slightly less astronomical, and likely still effectively moot.
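For a sense of scale (the exponents and guessing rate here are our own illustration, not figures from the paper), the back-of-the-envelope arithmetic looks like this:

```python
# Illustrative only: even a "slightly easier" exhaustive search stays astronomically slow.
GUESSES_PER_SECOND = 1e12          # an optimistic, hypothetical attacker
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (128, 100):
    years = 2**bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit search space: ~{years:.2e} years")

# 128-bit: ~1.1e+19 years; 100-bit: ~4.0e+10 years. Both dwarf the age of the
# universe, which is the sense in which "less astronomical" still rounds to
# "effectively moot".
```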
As Matthieu Bloch of Georgia Tech states, “My guess is that [the paper] will show that some [algorithms] are slightly less secure than we had hoped, but usually in the process, we’ll also figure out a way of patching them.”
That’s a great attitude, Bloch! Now let’s clear up the main misconception this news has created: slightly less secure in theory does not mean broken in practice.
So stay tuned for more news from MIT; we’ll keep you updated in this space. If you’re using low-level, unvalidated encryption, please do so only with the understanding that it is no real impediment to a motivated hacker. And if you need encryption at the highest level, you’re already in the right place. Don’t hesitate to reach out.