Over the years, artificial intelligence has become a mainstream technology that can be seen everywhere around us. There's no denying that it has made life simpler, but AI has also brought several security concerns to the fore. Cyberattacks have risen not just in frequency but also in scale, which is why it is critical to pay attention to this aspect and find solutions before it's too late.

Many experts and policymakers are researching this area to develop potential solutions. One of the most prominent among them is Dr. Roman V. Yampolskiy, a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is also the founding and current director of the Cyber Security Lab and has authored over 100 publications, including Artificial Superintelligence: A Futuristic Approach. Dr. Yampolskiy is also a senior member of IEEE and AGI.

His work largely revolves around AI Safety, Behavioural Biometrics, Cybersecurity, Games, and Pattern Recognition, among others. 

In an exclusive conversation with INDIAai's Content Lead Jibu Elias, Dr. Yampolskiy dives deep into the various threats posed by AI, the regulatory framework, and much more.

Here are some excerpts from the conversation: 

Jibu Elias: When we speak of cybersecurity, the first thought that comes to mind is that, while growing up, the major threats used to be trojans and viruses. But with the evolution of technology, including AI, we are witnessing different kinds of issues. Can you explain how the nature of cybersecurity threats has evolved as a result of advances in AI?

Dr. Yampolskiy: When we talk about AI safety, we should not just concentrate on attacks from outside, such as a hacker getting access to your project and modifying code or data. The system itself can become very capable and, either intentionally or as a side effect, cause damage to cyberinfrastructure and to people. Those are the two aspects of AI safety and security: protection from external forces, and control over the system itself. More and more, standard cybersecurity relies on AI to defend cyberinfrastructure, automatically detect novel types of exploits, and monitor traffic, while hackers utilise AI to find new ways to attack and to automatically engage in phishing attacks. When it comes to deepfakes, it's a kind of war-like situation where both sides are starting to use AI. At this point, we are still far from human-level performance, so people are involved, but eventually you can see how it becomes AI versus AI.
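
To make the defensive use of AI he mentions a little more concrete, here is a minimal, hypothetical sketch of unsupervised anomaly detection over network-traffic features, using scikit-learn's IsolationForest. The feature names, numbers, and contamination setting are illustrative assumptions, not anything described in the interview.

```python
# Minimal sketch: flagging anomalous network traffic with an unsupervised model.
# The features (bytes sent, request rate, distinct ports) are illustrative
# assumptions; a real deployment would use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic: [bytes_sent, requests_per_min, distinct_ports]
normal = rng.normal(loc=[500, 30, 3], scale=[100, 5, 1], size=(1000, 3))

# A few synthetic outliers standing in for novel exploit traffic
attacks = rng.normal(loc=[5000, 300, 40], scale=[500, 30, 5], size=(10, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# predict() returns +1 for inliers and -1 for anomalies
flags = model.predict(np.vstack([normal[:5], attacks[:5]]))
print(flags)  # expect mostly +1 for the normal rows, -1 for the attack rows
```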

Jibu Elias: Your work has a deep focus on the larger threats AI as a technology can pose. You must be familiar with people asking you when ASI is coming. What are your thoughts on the existential risk AI generally brings? And how far away is AGI; can we expect it in our lifetime?

Dr. Yampolskiy: It is really developing very quickly; 2045 was the consensus date of many predictions, and it seems the progress we're making lately is exponential, bringing it maybe even closer. But the main point is that the exact time doesn't matter: it could be seven years or 20 years, but the problem is the same, and the difficulty of the problem means we need all the time we can get. I think it's a much bigger problem than cybersecurity. If you suffer a successful hacker attack, you lose a credit card, you get embarrassed, and that's it; you move on and get a new credit card. But a system smarter than anyone, controlling nuclear weapons or a power grid, can cause a lot of damage, so it's a much more serious problem.

Jibu Elias: We have spoken generally about the situation with GameStop, and how the Reddit community created artificial value for that particular stock. How easily can attacks on financial systems be carried out with algorithms? Isn't that a threat we should be addressing before we talk about the government looking into it?

Dr. Yampolskiy: Most stock trading is already done by computers; around 85 per cent or more of all stock trades are executed by machines, whether high-frequency trading or strategy-based, machine-learning-driven trading. We have already seen machines crash the market, as in flash crashes, and that is very likely to continue and become even more prominent if you have a smarter system capable of outsmarting all the human agents in the market. With ASI in the market, the possibilities are endless in terms of how much profit you can generate. Crashing the market and bringing it back up is a very quick way to make money, if you can predict the crash because you caused it yourself. That's a great opportunity, so we definitely need much better regulation to protect against it.

Jibu Elias: Speaking of regulation, do you think we will come anywhere close to reaching a general global consensus? We are yet to have a global framework regulating this kind of use, especially when it comes to automated weapons and drones. How do we reach a point where we can create such a framework globally?

Dr. Yampolskiy: I think there are two steps in that process. One is agreeing on regulation, and I think that's doable; with some negotiation, we have successfully created similar documents in the UN about human rights, anti-slavery, and health rights. The second step is how it actually impacts reality, and if you look at the success of those initiatives, it isn't enough that everyone agrees. Agreeing that we need human rights doesn't mean every person gets human rights. Just because we all agree AI should be safe doesn't do anything for AI safety, so you have to be careful to separate the bureaucratic attempt at bringing together some documents from the situation on the ground, where this actually impacts safety. There are multiple steps we need to succeed at.

Jibu Elias: In your view, what will be the major threats from artificial general intelligence or artificial superintelligence that we have no control over?

Dr. Yampolskiy: I have a paper arguing that you cannot accurately predict what a smarter system will do; if you could, you would be that smart yourself. So I cannot tell you exactly what it will do, but the general pattern is that whatever we design a system to do, it fails to do exactly what we want. We want self-driving cars to be safe; they have accidents. We want spell checkers to correct words; they introduce wrong words and misspellings and make things worse. I have a paper surveying all these AI failures throughout history, and that is the pattern we see. We have more such systems today, as everyone is using them, and we see more and more problems; they are not very resilient to modifications in their environment. If something changes, you get unexpected side effects.

Jibu Elias: In some of your papers, I have read about concepts like AI confinement, or AI boxing. Can you explain whether these are solutions we can look forward to for addressing these challenges?

Dr. Yampolskiy: It is a tool for AI researchers to investigate different aspects of a system before it is deployed; it's a temporary measure. When we study computer viruses, we isolate them from the internet; we observe the input and output, we see what server they communicate with, and we try to understand what's going on. It is the same with an AI system: if we box it, we can control the learning data and guard against social engineering attacks, but we know that in the long term it will escape. If you observe a system, if you actually benefit from it, information leaks out; and if you don't observe it, it's useless to you.
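
As a toy illustration of the boxing pattern described above, the sketch below mediates every exchange with an untrusted system through a gatekeeper that logs and filters its output. The UntrustedModel class and the URL-blocking rule are invented purely for illustration; they are not Dr. Yampolskiy's confinement protocol.

```python
# Toy sketch of AI "boxing": every channel in and out of an untrusted
# system is mediated by a gatekeeper. Classes and rules are hypothetical.
import re

class UntrustedModel:
    """Stand-in for a boxed AI system; in practice it would run isolated."""
    def respond(self, prompt: str) -> str:
        return f"model output for: {prompt}"

class Gatekeeper:
    def __init__(self, model: UntrustedModel):
        self.model = model
        self.log: list[tuple[str, str]] = []
        # Illustrative filter: block outputs containing URLs, a crude proxy
        # for "attempts to communicate with the outside world".
        self.blocked = re.compile(r"https?://")

    def query(self, prompt: str) -> str:
        out = self.model.respond(prompt)
        self.log.append((prompt, out))  # observe every exchange
        if self.blocked.search(out):
            return "[output withheld by gatekeeper]"
        return out

gate = Gatekeeper(UntrustedModel())
print(gate.query("summarise this dataset"))
```

Note that, as he points out, the very act of observing and benefiting from the boxed system leaks information out, which is why boxing can only ever be a temporary measure.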

Jibu Elias: How did you get into this particular area? We have so many computer scientists, professors, and academics, but they seldom get into something like this, and they are seldom vocal about these big long-term problems.

Dr. Yampolskiy: My PhD was in biometrics; most people know about fingerprints and face recognition. I was pursuing behavioural biometrics: how to recognise someone by how they interact with a computer, their voice, their gait. Part of my work was on game strategy and how to recognise someone based on how they play games. For example, in online poker, if somebody hacks your account and starts doing something different, we can easily spot it. I realised that more and more players were actually bots, not humans, and I quickly realised they were going to get better; years later, the best players in the world are computers. The point is that I was developing safety systems against bots, and the question naturally became how to scale that to much more capable systems.
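
Here is a minimal sketch of the behavioural-biometrics idea he describes: summarise a legitimate user's typing rhythm during enrolment, then flag sessions whose inter-key timings deviate from that profile. The z-score test, the threshold, and all the timing numbers are illustrative assumptions, not his published method.

```python
# Sketch of behavioural biometrics: compare a new session's keystroke
# timings against a stored profile of the legitimate user. The z-score
# test and threshold are illustrative, not a production method.
import numpy as np

def build_profile(inter_key_delays: np.ndarray) -> tuple[float, float]:
    """Summarise enrolment sessions as mean/std of inter-key delays (ms)."""
    return float(inter_key_delays.mean()), float(inter_key_delays.std())

def is_same_user(profile: tuple[float, float],
                 session: np.ndarray, z_cutoff: float = 3.0) -> bool:
    mean, std = profile
    # z-score of the session's average delay against the stored profile
    z = abs(session.mean() - mean) / (std / np.sqrt(len(session)))
    return z < z_cutoff

rng = np.random.default_rng(1)
profile = build_profile(rng.normal(180, 25, size=2000))  # enrolment data

owner_session = rng.normal(180, 25, size=200)  # same typing rhythm
bot_session = rng.normal(60, 5, size=200)      # machine-fast, too regular

print(is_same_user(profile, owner_session))  # expect True: matches owner
print(is_same_user(profile, bot_session))    # expect False: likely a bot
```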

Jibu Elias: What is your advice to those thinking about the job market in the next 10-20 years, largely today's school kids?

Dr. Yampolskiy: Surprisingly, it turns out that a lot of very simple jobs are not so simple. If you are a plumber, for example, everyone has different pipes, so it's very hard to automate, and plumbers get paid really well; to be honest, a plumber certainly makes more per hour than a professor. So consider jobs where automation is difficult, and stay away from jobs built on repetitive tasks: if you're doing exactly the same thing each time, we can teach a machine to do it, and there is no long-term future in it. Try to be unique, and try to always do something creative. That's your best chance; it won't protect you forever, but you may be the last one to lose your job.


