
October is Cybersecurity Awareness Month.
Chris Mattmann, the chief data and artificial intelligence officer at UCLA, sat down with science and health editor Shaun Thomas to discuss AI in cybersecurity, cyberthreats and deepfakes.
This interview has been edited for length and clarity.
Daily Bruin: What role should ethics and responsible AI use play in cybersecurity efforts on a campus or in an enterprise?
Chris Mattmann: You have to go (to) the doctor, and they have to know things about you. … There’s no world that we can get away from having that information recorded. There’s a certain amount of information that people need to collect about us, just to deliver service. However, how is that information stored? Is it put in an encrypted environment with the right security controls? Is it encrypted at rest? Is the data encrypted in transit? … That is true in the medical domain, (and) that is true in the context of cybersecurity and AI.
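As a rough sketch of the distinction Mattmann draws, here is a minimal Python example of encrypting a record "at rest" with the open-source cryptography package; the record, the key handling and the service are hypothetical and greatly simplified, not a description of UCLA's actual controls.

```python
# Minimal sketch: symmetric encryption of a record before it is written to disk ("at rest").
# The record and key handling are hypothetical; real deployments use managed key stores
# and rely on TLS for encryption in transit.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, the key lives in a key management service
fernet = Fernet(key)

record = b"patient_id=123; visit=2024-10-01"   # hypothetical data collected to deliver a service
stored = fernet.encrypt(record)                # this ciphertext is what lands on disk
recovered = fernet.decrypt(stored)             # only holders of the key can read it back

assert recovered == record
```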
Ultimately, a lot of these AI technologies are from commercial companies. So we need to be aware of the information that is flowing to those tools and make sure it has the appropriate classification level. I’m not talking about national security classification. The UC system has a safe handling mechanism and data protection levels.
We have already begun a campuswide AI inventory, looking at AI tools and how AI is being used, not just in the context of cybersecurity, but across all domains. … Then we’ve got to talk about ethics (and) about the use. Where should we be using AI? How much should we trust its predictions? … We don’t always go for super high precision. A lot of times, we’re just playing defense. The level of trust is really layered. It’s a layered approach to understanding where in the business process to integrate AI and how much to trust what it tells us.
DB: How do you think a culture of cybersecurity awareness should be fostered among students, staff and faculty?
CM: I know, being young, I always wanted to be included in everything. But then sometimes you realize, as you get older, that you don’t need to be included in everything. That fosters a culture of cybersecurity awareness.
The other thing is how much information is available about you. … In high school and college, there’s this sort of movement that they need to be online. That’s how they interact. I would just suggest not to be totally offline. You can’t, because there’s still this principle of digital adjacency, which is how many of your friends are hyper-online. … Even for people that are hiding in a mountain off-grid, there’s a digital adjacency perspective to them. … There are style metrics and other information that can be discerned about them. It’s kind of like another lesson in life: the principle of moderation. So moderating your digital identity is always good, ensuring that those who need access, and only those who need access, have it. … It doesn’t mean that you need to take a different route to the cafeteria every day, but just maybe every couple weeks, do something different.
DB: Deepfakes are rising in prominence. What are some red flags that a video or a voice message could be manipulated?
CM: Deepfakes are the issue of the hour now. … There’s an interesting approach called semantic forensics associated with that. Let me illustrate it for you, related to video or audio. For videos, I’ll use a Mars rover. We could use deepfakes to generate images of the Mars rover, and they would generate all these great images where you wouldn’t be able to tell the difference, except that someone who actually understood the rover would notice a few of these images might have big bones or robotic arms. The layperson would not understand that the rover was never built with that. That’s a semantic inconsistency. That’s something that only a higher-order semantic analysis would pick up, not an analysis of a watermark, or of whether this has image grain artifacts, or other low-level syntactic issues.
You could dream up an image of your friend with an earring, but the semantic inconsistency alarm in your mind goes off. It’s like, ‘Oh, Joe, he would never do that.’ A lot of the focus in deepfake detection right now is on identifying those semantic inconsistencies: believable videos, images and sound, but you just know that there’s some piece of evidence that refutes them. Watermarking can handle the syntactic elements in the future, but we need semantic forensics to handle the higher-order things.
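A minimal sketch of the layered idea described above, assuming a hypothetical set of objects detected in an image and a hypothetical knowledge base of real rover components; it only contrasts low-level syntactic cues with higher-order semantic consistency and is not an actual forensics pipeline.

```python
# Illustrative only: syntactic checks look at pixels and provenance marks,
# semantic checks compare what the image shows against what we know to be true.
# ROVER_COMPONENTS and the detected-object sets are hypothetical stand-ins.

ROVER_COMPONENTS = {"wheels", "mast", "robotic arm", "antenna"}  # what the real rover was built with

def syntactic_flags(has_watermark: bool, has_grain_artifacts: bool) -> list[str]:
    """Low-level cues that a careful forger can often scrub away."""
    flags = []
    if not has_watermark:
        flags.append("missing provenance watermark")
    if has_grain_artifacts:
        flags.append("suspicious image-grain artifacts")
    return flags

def semantic_flags(detected_objects: set[str]) -> list[str]:
    """Higher-order cues: parts the rover was never built with."""
    return [f"rover never had: {obj}" for obj in sorted(detected_objects - ROVER_COMPONENTS)]

print(syntactic_flags(has_watermark=True, has_grain_artifacts=False))  # [] -- looks clean at the pixel level
print(semantic_flags({"wheels", "mast", "second robotic arm"}))        # ['rover never had: second robotic arm']
```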
DB: How do you personally stay current on tracking the rapidly evolving threat landscape?
CM: I spend a lot of time in code land on GitHub. I spend a lot of time reading blogs. I read X a lot. X is where a lot of the AI research happens. … A lot of people go on there and they publish cool software. In the cybersecurity domain, awareness is just being involved in the intelligence community and continuing to contribute to our nation in that way.
DB: What advice would you have for the next generation of cybersecurity leaders?
CM: First, moderation. … It’s not a problem to post online, but if that’s what you’re spending all your time doing, no. Another thing is security by least privilege, which is, if you don’t need access to it, don’t worry about it. If someone else is doing it and they have the authority, great. Contribute however you can, but you don’t need direct authority over everything. … The other thing is just recognizing patterns and changing your patterns up. Just be that needle in a haystack, because all of this stuff is based on pattern recognition.
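A minimal sketch of the least-privilege idea as a default-deny permission check; the roles and permissions here are hypothetical examples, not any campus system.

```python
# Minimal sketch of least privilege: each role is granted only what it needs,
# and anything not explicitly granted is denied. Roles and permissions are hypothetical.

ROLE_PERMISSIONS = {
    "student":      {"read_own_grades"},
    "instructor":   {"read_own_grades", "edit_course_grades"},
    "security_ops": {"view_audit_logs"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Default-deny: if the role was never granted the permission, the answer is no."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("student", "edit_course_grades"))     # False: students don't need it
print(is_allowed("instructor", "edit_course_grades"))  # True: the job requires it
```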