
Personal data protection in the era of neural networks


Tuesday, October 31, 2023

Neural networks are a ground-breaking technological innovation. Even now, they can be entrusted with responsible tasks that a person used to perform manually. For example, a neural network can work as an airline technician and warn of engine wear. Using neural networks, doctors have learned to identify early signs of epilepsy in children.

At first glance, it is a genuinely useful tool for improving all aspects of human life! But we remember the movies about robots that stopped obeying people... Today the Internet is full of frightening videos and articles about neural-network fraud and leaks of confidential information, and sometimes even predictions about the enslavement of humankind. Users do not fully understand whether neural networks pose a direct threat, and, if such a threat exists, how to protect themselves from it so that they do not lose money and their personal lives stay private.

In this article, we hope to dispel our readers’ fears and offer tips on how to protect yourself when using neural networks.

Neural networks do not make decisions

Today, we use a neural network as a tool for collecting and analysing data in order to obtain an algorithm for solving a problem. Let's say that employees of company X want to create personalised advertising to stimulate New Year's sales. To do this, they "feed" the neural network the current statistics on user queries and purchases, and the system generates customer-oriented advertising.
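To make this "feeding" a little less abstract, here is a minimal sketch in Python using scikit-learn's small neural network, MLPClassifier. Every number and feature name here is made up purely for illustration; a real system would train on millions of records.

    # pip install scikit-learn
    from sklearn.neural_network import MLPClassifier

    # Hypothetical statistics per customer:
    # [site visits per month, purchases over the past year]
    features = [[2, 0], [15, 4], [7, 1], [30, 12], [1, 0], [22, 9]]
    # Whether each customer bought something during last year's sale
    bought = [0, 1, 0, 1, 0, 1]

    # "Feed" the network the collected statistics
    network = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    network.fit(features, bought)

    # Estimate which current customers are worth showing the New Year's ads to
    for customer in ([12, 3], [2, 1]):
        probability = network.predict_proba([customer])[0][1]
        print(customer, "-> probability of buying:", round(probability, 2))

The model itself contains no creativity: it only extracts a pattern from the statistics it was given.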

Or consider Elon Musk's Tesla cars: for a car to develop a driving style, it needs to be taught. How is this done? By giving the system a huge array of patterns. In this way, the car learns to collect and analyse all the information along its route, ensuring a safe trip. Without patterns, there would be no examples to learn from.

A neural network cannot think as creatively as a human. It gives correct and even unique answers because it draws on huge databases collected over decades. Based on this data, the neural network builds algorithms that lead to a suitable answer. This is mathematics; an innovative and creative approach remains the prerogative of human beings. Therefore:

Do not be afraid of neural networks; they are just incredibly huge encyclopedias of knowledge. But you should always check the information you receive from one. Responsibility cannot be shifted to algorithms.

Where do neural networks get their data from?

We hand over this data ourselves when we communicate on forums, type queries into search engines or use gadgets with trackers.

Do you remember the bracelet pedometers that existed before today's smart watches with their phone and app-browsing features? Those bracelets were popular, but their manufacturer left the market forever: it turned out that the devices collected data about the wearer's sleep and behaviour during the course of a day, and users knew nothing about it.

Today, many more people use smart watches, and even more use the Internet. Most likely, you have already shared useful information with neural networks, but this is no reason to panic. You do not need to delete all your social networking accounts and send messages by carrier pigeon. Just be vigilant, and do not share personal data on the Internet that scammers could use.

Hidden danger

We've established that a neural network cannot turn into a machine that will destroy the Earth in one fell swoop. Today, it gathers information and lets you explore the world, while specialists turn that information into useful features. But there is a nuance: if the development and use of these technologies is not controlled, then alongside useful inventions we will get new loopholes for scammers.

How can the process of training neural networks get out of hand? Quite easily, when there are no regulations governing this work. Right now, everyone with an interest in the area works with neural networks, and that includes scammers. As a result, users face new kinds of cyber threats and extortion.

Fishing rod, bait, hook

Today, programmers are creating information-processing algorithms for neural networks. The information comes from various sensors that resemble sense organs: visual, olfactory and others. The data is delivered to servers and rapidly analysed by the system. In this way, an algorithm can be produced that almost flawlessly mimics human traits and behaviour. And this is the point at which deceiving a person becomes easier: people are not yet used to the idea that a character generated by artificial intelligence can call them from a familiar number, yet this is already a reality!

Modern forms of communication differ from those that existed even 30 years ago. We are sure that if we hear a friend's voice on the phone, it is that person. Especially on a video call: a robot cannot be calling me; I can see and hear my best friend. But now scammers can look and sound just like your close relatives, creating a clone of anyone with the help of neural-network technologies.

This sounds scary. But recall the news stories about, for example, a woman who gets a call from a "lawyer" saying that her drunken son has run someone over with his car. In a panic, the woman hands all her money to the caller, just to mitigate her son's punishment. The result? The "lawyer" turned out to be a scammer, and the woman lost her money. To succeed with the extortion, the scammer did not even need a neural network to generate the son's voice.

The same is true of cheating on marketplaces. You put a new camera up for sale, and within a minute there is a queue of "buyers" waiting for you. The outcome: you have effectively bought the camera from yourself with all the money in your account, because you relaxed and sent strangers a photo of your bank card with all its details.

These examples suggest that artificial intelligence is not strictly necessary for deceiving people; well-chosen bait is enough. A neural network simply gives someone the chance to steal money without any acting abilities. With that, here is a tip from Doctor Web's specialists:

Do not blindly trust anyone who calls or writes to you. Check the information you have heard or read, and confirm it through safe channels known only to you.

New hacking and deception schemes

New combined schemes for using neural networks have appeared, and through them even more people can be deceived. This is no longer an ordinary call from the "bank": faces and voices are substituted with the help of neural networks. After all, it is even possible to create a new opinion leader and draw people into a pyramid scheme.

Deepfakes. One example of data falsification using neural networks is deepfakes. These are fake videos or photos that are easy to distribute online. For example, deepfakes involving influential politicians offer opportunities for manipulating public opinion. Theoretically, a photo of any Internet user can be used in the same way.

Information leaks. When we send images, texts and other information to artificial intelligence systems to generate something, that data can be compromised. Some people consult a neural network as they would a psychologist. The danger is that everything you have told it becomes its working material: the information ceases to be personal and can be used for other purposes, including criminal ones.

Fake ChatGPT. In addition to the authentic ChatGPT, there are fakes. Fraudsters are very interested in this topic, since they do not even need to ask users to share information: users do so voluntarily. A typical situation: a development team wants its code reviewed and uploads it to a fake ChatGPT. Commercial information is leaked instantly.

Generation of malicious code and phishing emails. GPT bots can be used to generate malicious code. For this, attackers use, for example, WormGPT, which lets them easily create malware or write phishing emails for distributing trojans effectively. In other words, code can be written faster and without programming skills, and phishing emails can be made personalised, convincing, and composed in any foreign language. This is very dangerous for gullible users.

Cheating facial recognition algorithms. There are systems that recognise faces and movements based on data from video cameras. These systems learn from data sets, and their algorithms are built by processing this data. Even so, hacking is possible: the cameras' algorithms can be bypassed. Fraudsters know how to work with such data sets; they know certain colour combinations that cameras do not recognise, or how to mask their faces so that a camera cannot read them. They then use this for their own purposes.

Let's discuss how to ensure the maximum protection for your data from online attacks.

The Anti-virus Times recommends

  • Install an anti-virus. Even if you personally are of little interest to anyone on the Internet, crimes can be committed on your behalf and using information about you. Many intrusions into other people's systems are carried out using trojans: for example, intruders can attack a bank from your computer. Installing an anti-virus solution protects you from this.
  • Update your software to get the latest vulnerability fixes. People often ignore this advice, although it is important: every update delivers bug fixes. You may not know that there is a "hole" in your computer, but when your software is current, it is much harder for attackers to hack your machine through known vulnerabilities.
  • Set strong passwords, and use a different password for each service. If you keep your passwords in a paper notebook and hide it even from your family, that is the safest way to store them. It is far more common, though, for users to store passwords in encrypted form in their browser so that they are entered automatically. This is acceptable but potentially dangerous, because an attacker can extract a password from the browser's password store. Doctor Web's specialists recommend that you create strong passwords and never disclose them to anyone; that makes them the hardest to obtain and crack. (A small sketch showing how to generate such passwords appears at the end of these recommendations.)
  • Be careful on social networks. Look at your pages on social networking sites from the outside, and do not post anything that could potentially put you in an uncomfortable situation, just as a tattoo with an ex-boyfriend's name can create difficulties when arranging one's personal life.
  • Be vigilant. Do not trust all photos and videos on the Internet, especially if they do not seem too realistic or look dubious. Never follow tempting links from emails sent by unknown senders. Verify information through different sources. And the sources themselves, too.
  • Check suspicious sites. Using our site https://www.drweb.com/ as an example, let's verify its security. The site address must use the HTTPS protocol: this means the site encrypts information and is confirmed by a digital certificate. To find the certificate, click the padlock icon at the beginning of the address bar and go to "secure connection" and then "valid certificate".

    The certificate for Doctor Web's site was issued by GlobalSign, and it is almost impossible to forge. At the same time, some certificates, such as those from Let's Encrypt, can be obtained with minimal verification: you cannot really be certain that such a certificate is linked to the person or company it claims to represent. Certificates of this kind are not a means of reliable verification.

    [Screenshot: the certificate data page at https://www.drweb.com/]
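    The same check can be scripted. Below is a minimal sketch in Python that uses only the standard library to connect to the site from the example above and print its certificate details; if the certificate chain fails verification, the connection raises an error instead of returning data.

    import socket
    import ssl

    HOSTNAME = "www.drweb.com"  # the site used as the example above

    # create_default_context() verifies the certificate chain and hostname;
    # an invalid certificate raises ssl.SSLCertVerificationError
    context = ssl.create_default_context()

    with socket.create_connection((HOSTNAME, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
            cert = tls.getpeercert()

    print("Issued by: ", dict(item[0] for item in cert["issuer"]))
    print("Issued to: ", dict(item[0] for item in cert["subject"]))
    print("Valid until:", cert["notAfter"])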

Remember: the main protection for users is their own critical thinking. Be vigilant and careful: after all, neural network technologies can be directed not only toward good purposes but also toward criminal ones.
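To round off the advice on passwords above, here is a minimal sketch of how a strong, unique password can be generated for every service. It relies only on Python's standard library; the service names are hypothetical and used for illustration only.

    import secrets
    import string

    # Letters, digits and punctuation give the password a large search space
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def strong_password(length: int = 16) -> str:
        # secrets draws from a cryptographically secure random source,
        # unlike the ordinary random module
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # A different password for each service (hypothetical service names)
    for service in ("mail", "bank", "forum"):
        print(service + ":", strong_password())

Such a password still needs to be stored safely, for example in that notebook hidden from your family.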

#virus_writer #data_loss_prevention #personal_data #privacy #psychology #phishing
