Recorded, controlled, secure
If we open any Doctor Web news item about malicious programs, for example, this one:
October 20, 2016
Most Trojan backdoors are a threat to Windows, but some may work on devices running Linux. This rare type of Trojan was investigated by Doctor Web’s specialists in October 2016.
At the very end of the article, we see the link More about this Trojan. Let's follow it:
Added to the Dr.Web virus database: 2016-10-14
Virus description was added: 2016-10-20
A backdoor for Linux....
On more than one occasion, we’ve reminded you that a single entry in the Dr.Web virus database allows dozens or even hundreds of malware programs to be detected—and that really is true. However, the specific malware samples received by Doctor Web’s virus laboratory must somehow be identified. This is necessary for many reasons—for example, to avoid confusion when receiving samples for analysis from our clients and partners: we don’t want to have to send malware back and forth just to confirm that the sample we received is indeed the one that was sent to us.
Any file can be identified using a checksum: a special algorithm processes the file’s data and produces a number that is, in practice, unique to that data set. A fairly stringent requirement is placed on such algorithms: no two different inputs should ever produce the same checksum. Why is this necessary?
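As a minimal sketch of how such a checksum is computed in practice, the snippet below uses Python’s standard `hashlib` library; the function name and chunk size are our own illustrative choices, not anything prescribed by the article:

```python
import hashlib

def file_checksum(path, algorithm="sha256", chunk_size=65536):
    """Compute a hex checksum of a file, reading it in chunks
    so that even very large files fit in memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The same function works for any algorithm `hashlib` supports ("md5", "sha1", "sha256", and so on), which is exactly why the choice of algorithm, not the surrounding code, determines how trustworthy the checksum is.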
The fact is that checksums are used in places where an anti-virus can’t be. To keep malware at bay in such locations, one can create a list of programs whose launch is permitted. (The security this provides is not absolute, and the list can be bypassed, so it should be used only as a last resort.) And, of course, in these circumstances, we can’t allow a situation where an attacker creates a file with a checksum identical to one recorded in the control system.
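A launch allowlist of this kind can be sketched in a few lines. Everything here is hypothetical—the set name, the placeholder entry, and the check function are ours for illustration only—but it shows why the checksum algorithm must be collision-resistant: if an attacker can craft a program matching an allowed checksum, the check is defeated.

```python
import hashlib

# Hypothetical allowlist: SHA-256 checksums of programs permitted to run.
# In a real system these values would be recorded for each approved binary.
ALLOWED_SHA256 = {
    hashlib.sha256(b"approved program bytes").hexdigest(),
}

def is_launch_permitted(program_bytes: bytes) -> bool:
    """Permit launch only if the program's checksum is on the allowlist."""
    return hashlib.sha256(program_bytes).hexdigest() in ALLOWED_SHA256
```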
The MD5 algorithm is frequently used to create a checksum.
This algorithm was developed in 1991 by Professor Ronald Rivest of the Massachusetts Institute of Technology to replace its less reliable predecessor, MD4. The algorithm was first published in April 1992 in RFC 1321. And as early as 1993, researchers were already discussing the fact that MD5 could be cracked.
The first public demonstration of how the vulnerability could be exploited took place on March 1, 2005.
RFC 6151, released in 2011, recognizes the MD5 hash algorithm as insecure and recommends that its use be discontinued. But few took that advice: MD5, like other vulnerable algorithms, is very much alive, and attackers can create a file whose checksum coincides with the checksum of your application—if, of course, you’re using a vulnerable algorithm.
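The practical takeaway is that moving off MD5 is cheap: with `hashlib`, switching to a stronger digest is a one-line change. This is a sketch with sample data of our own choosing, not code from the article:

```python
import hashlib

data = b"sample file contents"

# Legacy identifier (collision-prone; avoid it for security decisions):
legacy_id = hashlib.md5(data).hexdigest()

# Stronger replacement: the same call pattern, a different algorithm.
modern_id = hashlib.sha256(data).hexdigest()

print(legacy_id)   # 32 hex characters (128-bit digest)
print(modern_id)   # 64 hex characters (256-bit digest)
```

Note that a longer digest alone is not what makes SHA-256 safer; what matters is that, unlike MD5, no practical way of manufacturing collisions for it is known.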
SHA-1, which is used to identify malware samples, is a more reliable algorithm. But it’s not perfect either.
A serious vulnerability in the SHA-1 algorithm, one that can compromise an application relying on it, was demonstrated at Eurocrypt 2009, an international conference held every spring. Evidently, word of the vulnerability had already made the rounds of cryptanalyst circles: shortly before the Eurocrypt report was published, the National Institute of Standards and Technology (NIST) ordered government institutions to stop using SHA-1 by 2010.
NIST also announced a public competition among cryptographers (submissions were accepted until October 31, 2008). The purpose of the competition was to develop a new algorithm, eventually standardized as SHA-3, to serve as an alternative to SHA-1 and SHA-2.
Therefore, more advanced algorithms must be used for security-critical tasks; SHA-1 remains acceptable only where there is no risk of forgery. It is difficult to imagine attackers forging samples of malicious software: after all, those are recognized by virus database entries, not by checksums.

#security #vulnerability #Dr.Web_technologies #terminology
- Use modern algorithms if you want to ensure the integrity of your data.
- All systems are vulnerable; it just depends on how much someone is willing to spend to break in. Therefore, there are only two conditions under which you can hope that your integrity-control methods protect your system: first, you must know what algorithm is used; and second, the implementation must have undergone an independent expert analysis confirming that no known vulnerabilities exist within it.