Don't be quick to point an accusing finger
Our users probably missed this piece of news:
Also eliminated was a defect that caused operating system and application performance problems when network issues were present in a system.
It's all spelled out in black and white (or white on black, if the dark theme is in use): Dr.Web slowed things down, and we fixed it. But would you like to know why Dr.Web caused systems to slow down?
A modern anti-virus can operate without Internet access. However, if a connection is available, the anti-virus will do its best to take advantage of it and, for example, download up-to-date cloud-based information about bogus websites and file reputation data. In the modern world, relying solely on virus databases can be dangerous: malicious programs appear faster than database updates do. And, of course, the anti-virus has to check whether updates are available before it can download them. Here is an example of a user complaint:
It takes a lot of time (30 seconds or longer) to open a document (especially PDFs) or start a browser (up to a minute). Opening a new tab in a running browser also takes a while. Other applications are also slow to start (e.g., the Calculator or Microsoft Word).
We spent a lot of time trying to understand what the problem was and then this message arrived:
We read the article detailing how Windows determines whether Internet access is available, added a few entries for the DNS server, and the issue miraculously resolved itself!
So, it appears that network misconfiguration was the issue. The anti-virus was merely a program that tried to access the Internet more often than other applications did. That's why it appeared to be causing the problems.
By the way, Windows uses a rather interesting method to determine whether a system is connected to the Internet. Do you think (as ordinary users do) that it simply pings a specific node and that's it? Windows doesn't do things the easy way!
An Internet test is performed in two steps:
- When connecting to a network, the system sends an HTTP request to www.msftncsi.com/ncsi.txt. This is a plain-text file containing a single string: Microsoft NCSI. If the request reaches its destination, the remote server should respond with a 200 OK status, and the response body will contain that very string (Microsoft NCSI).
- The second step is to check whether a DNS server is available. To accomplish this, NCSI (Network Connection Status Indicator) attempts to resolve the name dns.msftncsi.com to an IP address. The expected value is 131.107.255.255.
If both steps are completed successfully, Windows assumes that the computer is connected to the Internet. If the file ncsi.txt is unavailable and dns.msftncsi.com either fails to resolve or resolves to a different address, the system reports that no Internet connection is available. If ncsi.txt is inaccessible, but dns.msftncsi.com does resolve to the right IP address, the system notifies the user that they need to log in via the browser window.
To use certain Windows services, we needed to emulate DNS server responses for dns.msftncsi.com lookups, and that's what we did. However, we didn't emulate the ncsi.txt reply. That's why Windows on our PCs indicated that no Internet connection was available. The entry is no longer necessary, so we have removed it.
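As an illustration of what "emulating the DNS response" can look like: we don't know what tooling was actually used here, but with a local resolver such as dnsmasq a single name can be answered with a fixed address in one configuration line.

```
# dnsmasq.conf — a hypothetical example, not necessarily the tooling used here.
# Answer lookups for dns.msftncsi.com locally with the address NCSI expects,
# without forwarding the query to an upstream DNS server.
address=/dns.msftncsi.com/131.107.255.255
```

With only this entry in place, the DNS half of the NCSI check succeeds while the ncsi.txt HTTP probe still fails, which is exactly the combination Windows reports as "log in via the browser" or "no Internet connection".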
So "Windows on our PCs indicated that no Internet connection was available" and the anti-virus, which couldn't figure out what was going on, became the scapegoat.
And here is another incident in which an issue resolved itself.
However, we contacted the service provider's customer care service… because we experienced multiple website availability issues when we used HTTPS. And it looked like they did something (unfortunately, I don't know exactly what that was).
And another example of customers having network issues (Doctor Web engineers are examining the collected data):
After the TCP handshake, a packet is sent. Then six unsuccessful attempts to retransmit the packet follow at increasing intervals.
That is, the connection is established, but no packets can reach their destination.
Certificates pose another problem. Just like this:
In theory, the AddTrust CA root expiry should only cause compatibility issues in legacy systems (Android 2.3, Windows XP, Mac OS X 10.11, iOS 9, etc.) because the cross-signed certificate is still valid and modern browsers can chain back to that certificate. In practice, it turned out that non-browser TLS clients using OpenSSL 1.0.x and GnuTLS can't verify chains of trust involving a cross-signed certificate. If the server used a Sectigo certificate that formed a chain of trust to the expired AddTrust root, the clients wouldn't be able to establish a secure connection and an expired certificate error would result.
…which disrupted the operation of many infrastructures that relied on encrypted communication for inter-node interaction.
And then they will blame the anti-virus again.
We are not saying that Dr.Web always operates flawlessly. Yes, problems can arise at our end too. But, as you can see from the above examples, sometimes the anti-virus runs on systems where Internet access is seemingly available when, in fact, there is none. Situations of this kind are very common. That's why we'd like to remind you that you can always give us your feedback and help us make Dr.Web even better. Thank you for your feedback, suggestions and comments!