Transcript Summary

Host (00:00:15): Hello everyone, thank you for coming to —. My name is — and I'll be your main host for the presentation. Just some general comments: please use the Q&A function for any questions you may have, and our guest speaker will do his best to answer them at the end. Today's presentation features Mr. Erich Kron. Mr. Kron is a security awareness advocate at KnowBe4 and a veteran information security professional with over 20 years of experience in the medical, aerospace, manufacturing, and defense fields. Today, he will be talking about how deepfake technology, or really video manipulation technology as a whole, is becoming more advanced every day and getting harder to distinguish from reality. He will delve into the risks and consequences of how this has affected all of us and how it will affect us in the near future. With that being said, I would like to hand it over to Mr. Kron, who will begin his presentation shortly.

Erich Kron (00:01:01): Alright, thank you so much, and I appreciate everybody who's come to this. I gotta tell you, this is a fascinating subject to me. I've been in IT and security for a while, and I'll talk about myself here in a second, but what really got me into computers, believe it or not, was imaging, particularly digital imaging back in the day. So, I did photography. I did that kind of stuff when I was younger. I've been in darkrooms, I'm showing my age now, but I've done all that and was really into it. I was in the U.S. Navy, I was on my ship, and one of the guys in my shop had an Amiga, I think it was a 500, and he showed me how you could scan something into the computer and then digitally change it, and that fascinated me. That got me started into computers, so I've actually been into photography and imaging longer than I've been into computers. I'll tell you, I've earned all the gray hair that I have going on here. So, I've been at it for a little while. But I think it's fascinating where we've come in recent years with respect to deepfakes, and I also think it's very concerning what we can look forward to, unfortunately, with respect to the way that deepfakes are going to be used. Now, deepfakes are great for fun, they're great for all kinds of stuff, but they're also really good for attackers, especially on the social engineering side of things. So, moving forward here real quick, a little bit about me: my name is Erich Kron and, again, I've been in IT and security since the mid 1990s. If you have a CISSP and you've had it for more than about six or seven years, you may have gotten an email from me; I was the director of member relations and services there. I've also worked with the US Department of Defense for about 10 years, where I ended up as the security manager at the 2nd Regional Cyber Center, Western Hemisphere, which is a mouthful to say every time you answered the phone.
But I've worked in healthcare, I've worked in manufacturing, and, like I said, DoD. I've worked in lots of different areas throughout the years, and what I get to do now in my job, which is really awesome, is share my experience with folks and hopefully open our eyes to potential threats, things that are going on, and basically help educate people on what's going on. So, I'm not in sales, I'm not in any of that stuff, I'm not even in marketing. I'm just here to share information. Now having said that, I do have to give at least one quick shout-out to my company: I work for KnowBe4, we do security awareness training and simulated phishing. We have a platform that we provide, and the whole idea is tackling the human element. Now, deepfakes play right into that, and I get excited about this because I get to see the things that are happening on the social engineering side. One of our partners at KnowBe4 is a gentleman named Kevin —. You may have heard of him; he's well-known for getting in trouble years ago. He's reformed and does quite a lot of white-hat work and penetration testing now, but he was an excellent social engineer. This is the kind of stuff that really plays into social engineering, again, the way that we can manipulate things and people. Here's our agenda for today. This is the layout I want to use: how this all got started, like how I got started on this thing, and the progression of digital fakes. So, we're basically going to cover how things got started and where we've moved to from there, and then we're going to talk about the ways they can be used and the potential impact. Then we're going to talk about detecting them and defending against all of this insanity that has been coming up. Quite frankly, it's kind of spooky what we're able to do with just desktop computers or laptops and the power we have at our hands right now. Let's get started with how it all got started.
This all started with a friend of mine. I have a colleague named Javvad Malik; he's a UK guy, an IT security guy as well. We do a podcast together every week, etc., etc. We've known each other for years and we love to mess with each other. I don't know if you can tell from my personality, but I love to have fun. I can't take life too horribly seriously. We like to mess with people, and what happened here is that we started taking conversations that we had through WhatsApp and things, and started photoshopping them, changing how they looked and what they said, moving words, and dropping a quote or a screenshot like "I can't believe you said that to me!" We started messing with each other, and then it started upping the ante a little bit. Once we finally hit the point of mutually assured destruction with respect to HR, we kind of stepped back and said, "Wow, this could really go south." If somebody were to take this technology, even as simple as what we were doing, and put it out there as reality, how would people tell the difference? This could be really, really ugly with the way that it's going. So we started looking at the different ways that this all worked together, and then we started playing with deepfake technology. Now I love the deepfake stuff, I've been playing with it for a while. It's not perfect by any means, but we'll talk about some of the tools here and how they work and what they do. But it's fascinating to me the kind of stuff that I was able to pull off fairly quickly with just regular old hardware, like I said. That's how all of this got started, and that's how this talk started. What spurred me into this was, again, the ability to take things very, very simply, even using tools like MS Paint (or, he's a Mac guy, so whatever he uses), and redo those conversations to the point that these things could be a problem. That was the part that really triggered this.
Now, we started looking at deepfakes versus shallow fakes versus all of that, and Javvad, again my good friend, is an analyst by trade, so he lives and dies by charts. But I really liked this chart he put together for this. The way it's laid out is: shallow on the top, deep on the bottom, and then static on the left and dynamic on the right. These are all types of different fakes, okay? So, in the top left, you have the text manipulation, like we were just talking about, copying pieces out, putting them in there, and making it look like somebody said something. The plain old photo editing, adding filters and things like that, Photoshop, and this is on static images, right? Now as you move towards the right, you get into the green screen stuff. If you all remember the JibJab stuff, where you could take someone's photo, put it on a face, and it dances around like a little elf or whatever around the holidays? That was very, very popular. All the way over on the right, on the dynamic side, is where we get into the CGI stuff, okay? Now this is still fairly shallow stuff because it's not really using a neural network. When we do CGI in movies, it's pretty incredible, if you've ever seen the people all suited up with the little crosses here and there to pin the pieces on digitally, and then they basically lay the computer-generated layer over them. The thing is, that's not really all done through a neural network; that's just overlays and stuff like that. Although that's very good, and very dynamic, it's where you've crossed the line into the AI and neural network part that you get into the deepfake stuff. One of the things that, unfortunately, really drove deepfakes in the beginning and started getting a lot of popularity, especially out on Reddit, was the declothing stuff.
There was a lot of celebrity porn stuff being generated with this, and this was a big deal for folks; it really drove a lot of this until one of the subreddits actually got shut down for it. But this unfortunately drove a lot of this stuff, and they were doing it through both video and static stuff as well. It's very interesting if you've ever seen it: down in the bottom left, "this person does not exist." There are a couple of websites out there that are actually generating people through AI, and if you've ever seen it, it's fascinating, because with some of them it's very obvious they were AI generated, but some of these people look absolutely real. You really cannot tell them apart from a real person, but they don't even exist; they were completely generated by an AI. That kind of stuff is pretty scary, it really is. As you move over towards the more dynamic stuff, you get the face swaps, we've seen those in phone apps and things like that, and then of course the full-on puppetry, where we're taking somebody and using neural networks to make them do other things. So, this is how we lay this out, and I want you to understand that when we talk about deepfakes, we're really talking about things that fall under that AI and neural network part; that's the differentiator from the standard stuff that we could take and modify on our own. So, image manipulation: some pretty interesting stuff is happening with image manipulation, and it's fairly simple. This is digital image stuff, and in case you don't know, most of the photos and videos and things that you see are raster images. These are, in other words, a bunch of dots, a bunch of pixels if you will, individual pixels designed to make a photo. Now there's also what's called vector, which is a different kind of graphics that uses math and things like that, but when it comes to these images, it's a series of pixels.
Manipulation tends to come down to taking those pixels and adding, removing, or changing individual pixels or groups of pixels to make the image look like what it is we want to see. So, some cool things that have happened: we've gotten to the point where we can remove people from busy locations. I'm going to talk a little bit about tilt-shift photos, just to give you an idea of how that works; that's kind of the opposite. And there's forced perspective. It can also be used to modify screenshots as well as photos. Like I said, that's where things get pretty ugly, and you can essentially start some trouble by making it look like somebody said or did something. So, let's start here with some of the most recent image manipulation. I wanted to bring this up just because it is so timely and so perfect. How many of you have seen this? I mean, this is simple, this is Bernie, right? But they have turned these into some pretty crazy memes, I mean, Bob Ross here painting a cold little Bernie over there. What's important is that everybody is able to do this to one extent or another. It doesn't take huge crazy tools; this has just exploded on the Internet with all the different memes around poor Bernie, right? But you can see here how easy it is to take that one little shot and start pinning it into different scenarios. I've seen some really, really funny ones with Bernie on the back of the minibike from "Dumb and Dumber" and lots of other stuff, but these are not all super professionals doing this; this is just people doing it with what they have on their own computers. Now, here's a technique that I think is fascinating. When I first got into some of the digital imaging stuff back in the day, there were ways to do this; it's called image stacking. Basically, what you do is you take a photo like the one on the left, with a lot of people in it, and I've seen this done on the Washington, DC monuments and things like that.
What you do is you set up a tripod and you just start taking pictures, a whole bunch of pictures, and naturally people move around, they move in and out of areas and all that. After a while, you get enough photos of that area that every single part is eventually empty, because somebody has moved over here and you took a picture of that. Then what they do is they stack them. In other words, they take maybe hundreds and hundreds of photos and put them on top of each other, and basically use that to fill in where the people are. So, they take the photos where there isn't somebody and use those to fill in where the people are, and the next thing you know, you have something like the image on the right that's perfect; it looks like it's empty, looks like nobody is there. You'll notice, even on the cans or lights or whatever that is along the right-hand side, the shadows are there; it does a really good job. Now, in the old days we used to have to do this manually, which was a real pain, but these days it's a script that just comes with Photoshop. This is a big switch for me because, like I said, we had to do that by hand, and what it means is that we're embracing this in the mainstream when it comes as a packaged filter or a plugin for Photoshop. Now let's think, because we're security folks here, at least some of us are, let's think about what we could do with this for nefarious purposes, right? Let's say you want to remove somebody from something. Now you have a photo that says, no, I was not in this place, look, here's a photo of all these people. You could pick one of those people out and stack it so that person disappears. So, you can actually remove people from the scene and make it look like it's legit.
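The stacking trick described above can be sketched in a few lines. This is a minimal illustration, not the actual Photoshop script: it assumes the frames are already aligned (tripod shots) and simply takes a per-pixel median, so anything that occupies a given pixel in fewer than half of the frames gets voted out.

```python
import numpy as np

def remove_transients(frames):
    """Median-stack a list of aligned frames (H x W x 3 uint8 arrays).

    Any object that sits on a given pixel in fewer than half of the
    frames is voted out by the per-pixel median, leaving the static
    background -- the same idea as Photoshop's image-stacking script.
    """
    stack = np.stack(frames, axis=0).astype(np.float32)
    background = np.median(stack, axis=0)
    return background.astype(np.uint8)

if __name__ == "__main__":
    # Synthetic demo: a flat gray background with a white "person"
    # blob that moves between shots. The median recovers the empty scene.
    frames = []
    for i in range(5):
        f = np.full((8, 8, 3), 100, dtype=np.uint8)
        f[i:i + 2, i:i + 2] = 255  # the moving person
        frames.append(f)
    print((remove_transients(frames) == 100).all())  # person is gone
```

With a tripod and enough shots, real pedestrians rarely occupy the same pixel long enough to survive the median, which is why the technique works so well on busy monuments.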

So, we have to start wondering now, how do we trust our eyes with what we see? If somebody comes back and says, "look, here's a photo from that time, you'll see I wasn't even there," obviously that can be manipulated. Another thing that's interesting to me is called tilt-shift images. Now, this goes to the opposite side of the spectrum. These are typically shot by drones: you go up, you shoot at about a 45-degree angle or something, and then what you do is you really pump up the color saturation and you apply a blur to the top and bottom. That's a tilt-shift image. What it does is it takes something that's actually real and makes it look unreal. So, in this photo right here, this is actually a real little town, a village; it looks like something fake, but it isn't, it's absolutely real. Now again, when you make people question what they're seeing, that's a powerful thing in social engineering. I'm just demonstrating different ways that these things can be done, and again these are very, very simple; this is a matter of putting a drone up, taking a couple of pictures at a 45-degree angle down into town, and adding some blur and changing the colors, but you can see how unreal it makes the thing look. You'll notice, too, I don't see any people on the street, necessarily; you could actually combine techniques, like doing one of those stackings to remove all the people. Now, these next ones we're actually used to seeing; we've all seen some of these. There were the ones back during the Gulf War, where the people have the spiders, right, the camel spiders, and the trick is they hold them out in front of them so that the item becomes much, much larger than it really is. This is forced perspective, okay? There are some challenges here with the photography, around getting things in focus in both areas.
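The tilt-shift recipe above (boost the saturation, blur everything but a middle band) is simple enough to sketch with Pillow. This is a rough illustration, not any particular app's algorithm, and the `band_frac`, `blur_radius`, and `saturation` defaults are made-up illustrative values:

```python
from PIL import Image, ImageEnhance, ImageFilter

def tilt_shift(img, band_frac=0.3, blur_radius=6, saturation=1.8):
    """Fake the tilt-shift 'miniature' look: pump up the color
    saturation, then blur everything except a horizontal band across
    the middle of the frame, which is where the 'miniature' sits."""
    img = ImageEnhance.Color(img.convert("RGB")).enhance(saturation)
    blurred = img.filter(ImageFilter.GaussianBlur(blur_radius))
    # Mask: 255 (keep sharp) inside the middle band, 0 (use blur) outside.
    w, h = img.size
    band = int(h * band_frac)
    top = (h - band) // 2
    mask = Image.new("L", (w, h), 0)
    mask.paste(255, (0, top, w, top + band))
    return Image.composite(img, blurred, mask)
```

Real tilt-shift filters usually ramp the blur gradually away from the focus band rather than switching it on at a hard edge, but the hard mask keeps the idea visible.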
You can see on the right, we have this probably 7-foot dog running at us. We know it's not real, but it plays with your mind a little bit. Now, where I have seen this actually being used, I guess it was a demonstration, but it was a real eye-opener: last year, with COVID starting to hit pretty hard, there was a series of photos that a person did. If you remember, the big thing, obviously, was masks and social distancing, masks and social distancing. Well, by changing the perspective from which he was taking the photos, this individual was able to make it look in a photo like people were just stacked on top of each other, very, very close to each other in a line. It made it look like they were totally ignoring social distancing rules, but then by turning about 30-45 degrees to the side, you could see that the people were 8-10 feet apart; it was just a matter of the perspective that the photographer wanted to give the viewer. Now, without putting on a tinfoil hat and sounding all crazy, we have to understand that when a photographer shows a picture, or when a news or media source shows a picture, they're trying to tell a story through that picture. Now, that story could be supported by the photo, but depending on how they do this and the angle that they use, it could look like something is happening that isn't. So in this case, like I said, it was "Oh my gosh, I can't believe all these people, nobody is social distancing," and so on and so forth, but then when he just swings to the side a little bit, it's "Oh no, that's not really what happened." That's forced perspective, and that's the kind of stuff that, again, social engineers could use, people could use, to make you believe something that isn't true. Here's another cool trick: if you ever go fishing and you catch a fish, try this. Put your arm way out in front and hold the fish while somebody takes a picture; it looks like you got this huge fish. Very important thing to do if you're a fisherman, because then you don't just go home and say the fish was "like this big," now you have a picture to prove it, right? So, after that we move into some of the more dynamic type things, face swap apps and filters. If you've done anything with some of the major social media platforms, you're going to see some of these filters.
My family, my wife especially, she loves these filters sometimes, and she gets to laugh so hard at some of the filters that they have, sending videos where they're changing audio, changing stuff like that; you can add cat ears. If you all remember that one that would age you by 20 or 30 years, it turned out it was sending some information to Russia. That was one of the things that was going on too. We've got face-swapping apps and apps for making animals talk, but let's talk about cat ears, okay? So here you have a cell phone, and on your little cell phone we're able to do augmented reality, which is mind-blowing, that we have this much horsepower just in a cell phone, right? If you've done the Pokémon Go thing, you've seen some of those augmented reality things, but you can also do very, very important things like adding cat ears in real time. Now, that can go kind of bad, too. I don't know if any of you saw this, but this was actually one of the ministers, prime ministers, I believe, for Pakistan. I believe they were doing a live broadcast, and whoever was actually filming was streaming it from a cell phone that had the cat-ears filter on. Now, the good part is that it actually caught the other people too and put the ears on them; it wasn't just this poor individual. But if you think about it, it's pretty wild that we have this much horsepower in our hands, and you see where things could go silly. Now, cat ears are pretty easy to spot; he probably doesn't have those cat ears or the whiskers. But what if you wanted to plant something else in that picture? What if you wanted to take a filter and make it look like something else was going on in that picture? How could we tell, right? And this is just through the power of our cell phones; we're able to do this in real time, and we're changing audio, we're changing lots of stuff like that, okay?
So, it's really, really powerful to have this in our hands, and as we get more and more powerful hardware, we're going to have more and more capabilities, which, in my opinion, makes some things less and less trustworthy. This is again my skeptical security person hat that I'm wearing here, but it's something we need to think about before we jump to conclusions when we see things moving ahead. Now, here's an interesting app; it's called "My Talking Pet." It has been around for a while; again, it's a phone app, and essentially what you do is take a picture of an animal, like your dog or your cat, and then it will actually speak words that you tell it to, right? What's interesting about this one, and there's a sample video at that YouTube spot if you want to go check it out, is that it's more than just the mouth moving. They have random movements of the eyes, random movements of the face, the little twitches and quirks that we have, and that makes it even more believable, just from a photo, just by tweaking things a little bit here and there; it adds that realism. When we speak, we're not stiff with only our mouth moving, right? Our eyes move, we occasionally blink, we have these little things, and it adds these to the pet, which is pretty fascinating, quite frankly. So, again, this is the kind of power we have in our phones, which is not going anywhere soon; it's only gonna get crazier. So now we move into the good stuff: deepfakes. There are several apps used for deepfakes out there, a good chunk of them available on GitHub. The ones I've used have all been free, downloadable from GitHub, and they generally use a GPU to make things fast; graphics processing units are way, way faster at this kind of work than CPUs are.
So, they use the power of the GPU to do what they're doing. Now a video, if we think about it, in its basic form is nothing but a series of photos, one after another, right? We know that. Here in the U.S., it's 29.97 frames per second, I think, the NTSC standard. So it takes basically 29.97 photos per second and puts them in a row, right? Each of these photos, remember, is made up of pixels, so you can modify each photo, and each pixel, to change the whole thing. That's the premise of what happens with these tools. The first step is to break the video apart into individual frames, and then they apply AI to look at these frames and identify and extract faces. Facial recognition has come a long way and it's getting much, much better. So, it'll pick those faces out, and it usually cuts them into, I believe, 128 by 128 pixel squares, and then you go through and you have to sort these and figure out which ones you want to replace and all that. Then it builds an AI model from the faces: the AI basically looks at it and says, "Okay, here's the old face, here's the new face," and it does the blending and the pixel changing in each individual frame across the whole thing. So, the AI builds the model, and this is where you get the different quirks, like the eye blinks; it'll trigger off of that and make it happen through the AI model, or handle different angles that may not be present in the source video you want to put into the other video. The AI will replace the faces, then it will re-stack those images all together at the frame rate it needs and recreate the movie.
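The break-apart-and-extract step can be illustrated with a small sketch. This is not any real tool's code; actual pipelines use a GPU-accelerated neural face detector and a proper interpolating resize, but the shape of the step, frames in, 128x128 face crops out, looks like this. The `detect` callable is a hypothetical stand-in, injected so the sketch stays dependency-free:

```python
import numpy as np

def nearest_resize(img, size=128):
    """Nearest-neighbour resize to size x size (a crude stand-in for
    the interpolating resize a real pipeline would use)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def extract_faces(frames, detect):
    """First stage of a face-swap pipeline: walk the individual frames
    of a video, run a face detector over each one, and crop every hit
    to a 128x128 square for later sorting and training.

    `detect` is any callable mapping a frame to a list of
    (top, left, bottom, right) bounding boxes."""
    faces = []
    for idx, frame in enumerate(frames):
        for (t, l, b, r) in detect(frame):
            faces.append((idx, nearest_resize(frame[t:b, l:r])))
    return faces
```

Keeping the frame index alongside each crop matters, because after the model swaps the faces, the modified crops have to be blended back into the right frames and re-stacked at the original frame rate.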
Now, the thing about deepfake apps, when it comes to video, is that generally speaking they only do the video part. So, if you want to do something really convincing, where somebody is saying something other than what's on the audio track already, or you want to change the voice to sound like the person you just did the replacement with, you need a good voiceover actor. There are some things we're going to talk about with audio deepfakes here in a minute, and there are ways that you could probably do both of these things combined. Now, deepfake video apps take a long time. This is a very, very complicated procedure and it takes a lot of power. Again, your GPU has to be pretty good. On my desktop I run an AMD GPU, which isn't favored the way the NVIDIA ones are, and I use an NVIDIA one in my gaming laptop; both of those were fine for doing this sort of work. Laptops that have the built-in video processor or graphics from Intel or AMD are going to really struggle, and trying to do it with your CPU, although there are ways to make it work, is just not practical time-wise compared to using GPUs. So, if you want to play with this, understand that you're going to need some sort of reasonably powerful GPU to do it. Now, this is the tool that I've used the most; it's called DeepFaceLab, it's on GitHub, there's a link to it right there, and there are Windows binaries already compiled and executable. Again, it basically tears the video apart, extracts the faces, and puts it all back together again. What's cool about these is that they're under constant revision, with updates and things like that. There's a really active community around these tools that is constantly improving them. So, it's come a long way just in the time that I've used it.
Now, here's what's going to happen; I want to show you the output of this, what I'm talking about here. This is output from DeepFaceLab. You see I took a video here of some friends of mine, and you'll see that it broke the video out and identified the faces. Now what you have to do is sort through these. I'm talking a lot of faces here; I think there were about 10,000 or so images off of this, and it was a very short clip, because there were multiple people in there. So, what I would do is remove all of the faces of the people that I did not want in here, and then I'd remove all of the really blurry photos of the person that I do want, or ones that just don't look right. There was one scene in here where there was actually a calendar behind the person, and there was a face on the calendar, and it was actually pulling that out and putting it in there. This is the part that's probably the most time-consuming. You end up doing this both on the video that you want to modify and on the video that contains the person that you want to swap faces with, okay? So, you have two different videos, a source and a destination, and it's going to do the same thing on both of them; you basically strip out the other one like that. Then the model goes together: it starts building that AI model, and it starts comparing them as it goes through. It builds all of these different facial features, and you can actually watch as it's going through; you can refresh it and see what the AI model is starting to look like as a mix of these two things together, alright? It sounds complicated, and it really is; there is a lot of work that goes into this.
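The "remove the blurry ones" pass lends itself to automation. A common trick, not necessarily what DeepFaceLab does internally, this is a generic sketch, is to score sharpness with the variance of a Laplacian (low variance means few edges, i.e. a blurry crop) and drop crops below a hand-tuned threshold:

```python
import numpy as np

def blur_score(gray):
    """Variance of a 4-neighbour Laplacian over a grayscale crop.
    Sharp images have strong edges and a high variance; blurry ones
    score low. Plain-numpy version of the usual OpenCV
    Laplacian-variance trick."""
    g = gray.astype(np.float32)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def drop_blurry(faces, threshold):
    """Keep only the face crops whose sharpness score beats the
    threshold. The threshold is something you'd tune by eye for a
    given dataset, not a universal constant."""
    return [f for f in faces if blur_score(f) >= threshold]
```

A pass like this won't catch the calendar-face problem (that crop can be perfectly sharp), so some manual sorting always remains, but it takes a big bite out of the 10,000-image pile.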
Now, I've done some pretty decent, pretty interesting videos with this; some of them turned out much better than others. What I found is that the key is to have very similar face shapes to begin with. Some people have wider faces, some people have narrower faces; you want to find people that have a similar face shape. You're also going to run into some problems with facial hair, that can be a problem, and obviously glasses, if one person wears them and the other doesn't. It is not a perfect thing by any means, but once I started looking for these things when I was choosing the source and destination video, it made it a lot easier. Now, I've made some videos just with the basic defaults that turned out pretty decent, and it took about 8 hours of training the AI with the GPU on my laptop. That seems like a long time, but I've run some of these things for days, almost weeks, straight, just beating the poor GPU to death; it's worse than crypto mining in some ways, I feel like. But your returns are reduced as it goes along, right? Eventually you reach a point where it can only be so good based on the source and the destination. Ultimately, though, there are things you can do with just a few hours, maybe 8 hours or so, on a mid- to higher-end GPU; I think I have a 1080 GPU in the laptop or something like that, and an RX 580 or 590 in my AMD desktop. Just something like that can do amazing stuff in about eight hours, especially if you're just looking at short clips. So, this is what you're going to have to deal with if you want to play with this, the sorting, and again, this is probably the part that's the hardest on both sides of it. Now let's talk about the other side of this, which is deepfake audio apps.
Deepfake audio apps are downloadable from GitHub, and again, they use the GPU. The reason is that most of these run off of the TensorFlow engine, which is Google's AI engine, and that uses the GPU pretty heavily for the reasons we've already discussed. What they do is take an audio clip and use AI to replace or create a voice. Sometimes you only need a few minutes of audio; the more you have, the better. But think about this: if you're trying to do this for somebody who is, say, in leadership, or a celebrity, or something like that, there's plenty of material out there to pull off YouTube, interviews, shows that they're in if they're a TV celebrity, things like that. So, there are certainly ways to gather this, and the fact that you only need a few minutes to get started makes it very attractive. Again, these work only on the audio piece, not the video piece. Now, here's one that I've played around with; it's called "Real-Time Voice Cloning," and it's fascinating. I'll say this: at least the last time I messed with it, it wasn't necessarily ready for primetime. What that means is it was still pretty obvious that it was a generated voice, but it was doing it in real time, which was kind of an important thing, okay? And there's been a lot of development on this one; again, it's on GitHub, there's your link right there. It is Python based, and it uses some pre-trained models to help speed it up, but it's been steadily improving. Now, if you can imagine, and we'll talk about some of the ways that we could misuse this stuff, one of the key ways this could be used is business email compromise or romance scams.
So now you have someone who sounds like the CEO calling and saying they need to transfer money, or, you know, your grandma or aunt actually gets a call from Yanni, the love of their life, who’s, you know, reached out to them and wants to meet them, etc., etc., on the romance scam side. So, there’s a lot of stuff that you could do with this that would be a little bit on the bad side; there’s more, we’ll talk about that. Now, with respect to the way that these are working, this is a breakthrough that I’ve seen that has really improved things to a significant level: GAN models, or generative adversarial networks, okay? So here’s what they do: they use an AI to create a fake, and then they use an AI to see if it can detect that fake. If the AI can detect the fake, then the other AI goes through and tries to make another model that’s better, and then it goes through this iteration back and forth until the detection side can no longer detect that it’s a fake, and then you have your finished product, okay? So, it’s using AI to basically check itself and improve itself, to the point that these are becoming very, very good by using these generative adversarial networks, adversarial meaning, again, both sides there. So the idea is, again, you keep running it through the detection process and keep modifying the original one, or improving it, until the AI can no longer see it. Now, you’re going to run into, you know, limitations of the detecting AI; it may not be as good as the generating side, or, you know, it may not be as good as other things that are out there. But where this really is going to play in is, as people start trying to put controls out there to detect these things, you can bet the actual controls that are being used to detect them are going to be used in these GANs to make the fakes undetectable. And I like to think of it like this: it’s a lot like the antivirus market.
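The generate-then-detect loop described above can be sketched in miniature. This is purely illustrative: the "generator" just nudges the mean of a number distribution toward a target, and the "discriminator" is a simple statistical test standing in for a detection network; every constant is invented for the demo.

```python
import random

random.seed(0)  # deterministic run for the illustration

TARGET_MEAN = 5.0  # the "real" data the generator is trying to imitate

def fake_sample(gen_mean):
    # The generator's current output: noise around its learned mean.
    return random.gauss(gen_mean, 1.0)

def discriminator(samples, threshold=0.5):
    # Flags a batch as fake if its mean is far from what real data looks like.
    mean = sum(samples) / len(samples)
    return abs(mean - TARGET_MEAN) > threshold  # True = detected as fake

gen_mean = 0.0  # the generator starts out producing obvious fakes
for step in range(1000):
    batch = [fake_sample(gen_mean) for _ in range(100)]
    if discriminator(batch):
        # Detected: nudge the generator toward the real distribution.
        gen_mean += 0.05 * (TARGET_MEAN - gen_mean)
    else:
        break  # discriminator fooled: the "finished product"

print(f"fooled the detector after {step} iterations, gen_mean={gen_mean:.2f}")
```

A real GAN trains both networks with gradients rather than a fixed nudge, but the stopping condition is the same idea: iterate until the detection side can no longer tell real from fake.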
So attackers, the people who write malware, they of course target different kinds of endpoint protection, and they try to get it so that it can pass. Well, they have the same tools we have, they subscribe to the same types of services or use them through a friend or whatever, and they test these things against those particular products, and then once they get it to where it’s undetectable, bam, they’re going to release that version and that’ll be used against that product. So, I’m guessing we’re going to see the same thing happening when it comes to deepfake protections as this starts to really take off. And I love the shot here, this was from Face/Off with Nicolas Cage and John Travolta, and it’s kind of funny, when you put these two together they look so much more alike than when you have the faces apart. That’s just kind of our brain and how it works with these sorts of things. So, this is what’s going on right now on the deepfakes side of things, and again, this stuff is free, so much of this stuff costs nothing to get involved in, and, you know, you just gotta set up a computer and run it on the side and spend a little bit of time. I welcome people to try these things out on their own to really understand what’s going on here and see what’s happening. I wish I could get more into that, but this could be an entire session just on that piece. So what are the potential impacts of deepfakes and fakes in general? Well, it comes back to this guy, Sun Tzu: “All warfare is based on deception.” Now, there’s different levels of warfare, you know, there’s nation-state warfare, there’s stuff like that, but then there’s the wars that we fight every day on the cybersecurity side of things, against scammers, against, you know, social engineers, business email compromise actors, stuff like that, right? The fact is we’ve lost a ton of money to business email compromise.
This is something that’s also known as CEO fraud in certain parts of the world. Typically in this kind of a scam there’s no payload, they tend to be very targeted and personalized, and there’s few to no traditional tells, okay? So, if you can take the stuff that we see in business email compromise, being the email from the CEO saying “hey, I need you to transfer a bunch of money. Wire it out right now, it’s got to happen ASAP,” and if you can back that with an audio call or a voicemail being left from the CEO, “CEO” in quotes, that sounds just like him, that says “hey, I sent you an email, I need to get this taken care of right away, like right now,” imagine, people are going to drop their guard, they are going to trust that email 100% now. We see typical, like, hybrid types of attacks happening even these days where the person will get the email and it may be followed up by a text message that looks like it’s from that executive. Now imagine the power of hearing the voice saying “I need you to do this right now, send me an email when you’re done,” right? This is going to be a game changer to an already brutal way of losing money in organizations, so that’s one way that I see this absolutely going south. We have to understand, when it comes to deception, that our brains do some filtering. So, our brain’s job is to filter, interpret, and present reality; this is the root of the deception that Sun Tzu is talking about. So the things we smell, taste, hear, all that kind of good stuff, they’re not actually real, they’re filtered through our brains, and our brains do some things just based on the stuff from when we’re kids growing all the way up, right? So, if this happens, then that happens, we expect these sorts of things, and now our brain is preparing us for these sorts of things.
If you want to see some really cool stuff, if this kind of stuff fascinates you, there’s a series out there called “Brain Games,” I want to say it’s on Hulu last I heard, but they do a number of different things to show how the brain can be tricked, and it’s a lot easier than we think it is. Motion, movement, focus, redirection, the fact that even the way that our eyes see things and focus are not what we think they are, you know? So, it’s a fascinating way to look into the human brain. The problem is, when it comes to these sorts of filters, we’re predisposed to believe certain things, and that is where it comes in: confirmation bias, things like that. If we already believe something, we are more prone not to try to fact-check it or look at it too critically, okay? The other thing that happens with our brains, and what impacts these filters, is kind of the basis of social engineering: our brains work in basically two areas, or in two systems. So system one thinking, if you’ve ever heard of this, this is kind of our instinctual level, so we breathe, we walk, we don’t think about “move my left foot, move my right foot,” all that kind of stuff when we walk, right? That’s all happening in the background. If you’re at a game with your kids and someone kicks a soccer ball at you, you don’t stop and do calculus to see if the ball is going to hit you in the head; you put your hands up and you duck, you knock it out of the way, you do whatever, right? So, it’s very, very quick; the problem is that it tends to be very flawed. Now system two thinking is where we apply critical thinking. So that’s where we need to do a math problem, we need to solve a puzzle, we need to do something like that. That is stuff that we haven’t already done a bunch of times before, and it causes us to really think about it; that’s system two thinking. Now, system one thinking is something that we spend probably 95% of our lives in.
System two thinking, again, is something that we have to work to get towards. When we’re faced with something that causes, like, an emotional spike, a dopamine flood, or something like that, we tend to dive right into the instinctual part. This is why you’ll see phishing messages and things like that that are urgent; there’s some sort of a significant penalty if you don’t do something right now. They get us with our emotions charged, which drives us into this system one thinking which, again, is very error prone, and we tend to skip or miss the things that should be a tell for us; our filter is just kind of blocking out all of those things that otherwise we should be able to see. So when it comes to things like these scams we’ve heard of, where somebody gets a call from the “IRS” and the IRS says, you know, “you need to pay us in gift cards, iTunes gift cards, by the end of my shift or we’re coming to arrest you,” right? And you wonder, how do people fall for this? Well, it’s because they’re in this emotional state of agitation, and the brain basically just kind of shuts down all of the critical thinking pieces. So, there’s some psychology behind that, but if you think about it, if we can take some deepfakes and we can put stuff in there that reinforces what we already think, then people are going to be more likely to buy it. So, places that we could use video or photo fakes, right? Romance scams, this is a big, big deal, and we don’t always think about this, but especially in the older communities and things like that, romance scams are a big, big deal. So how do you deal with that? Well, you do things like, if you really want to try to, you know, show your aunt that this really isn’t, you know, some doctor or millionaire that just found them on the Internet and wants to talk to them.
If you really want to try to get them through that, what you do is you say “okay, so show us some photos.” It’s basically like proof of life in a ransom situation, so you say “okay, have them take a drink from a cup holding their pinky out,” right, and send you that picture. So you’re making them do some kind of silly thing or whatever that would, you know, show that they’re communicating with that real person. Well, with deepfakes, you’d be able to fake that pretty easily, right? Another thing you could do is you could make people outraged. So outrage and anger are two emotions that drive action very, very strongly in people, probably the two top emotions to drive action. So, what do you do? You post something up of a political nature, perhaps, that always gets people going, and you say “I cannot believe this politician did this, oh my gosh, you need to go here and, you know, sign this,” and what they do is they include a fake video of somebody saying something right from the podium, you know, and that will fire people up. Now of course the petition is going to be something where, you know, you just need to verify who you are with your Social Security number, you know, the name of your first dog, etc., etc., all that kind of information that they’re going to get you to give just to confirm that it’s you. Well, if you’re upset, you’re not thinking about that critically, again, so these sorts of things can be driven with that. And election influencing: we just got through a serious election here in the US, and of course it’s still not over, but very contentious, very, very powerful things going on there, right? We know that nation states are working to influence elections around the world, not just in the US but in other countries as well.
This is not something new, we know that it’s happening. So if you were to take this, even at the local or, you know, the state level as opposed to the national level (which frankly I’m surprised we didn’t see deepfakes this year at the national level), they could be influencing congressional or Senate seats or parliament seats or, you know, all of that kind of stuff by saying “oh, so-and-so said this” right around election time. Figure a couple of days before the election, a bombshell drops that so-and-so said this horrible, terrible thing. Well, by the time it’s proven a fake, the voting is already over with, and that person has, you know, now lost because of that. That’s a real scenario that could start happening using these sorts of things. Now, the audio fakes, again, romance scams, same sort of thing, you know, “so-and-so, I want you to tell me this,” and so you have this voice of a celebrity (I always use Yanni for some reason), a voice which is well known and easy to find out there, right, saying “hey, you know, Thelma, I do love you and I want to marry you, just send me some money so I can come see you,” or whatever, you know, all the romance scam type of stuff. Well, this makes it very believable for that person, and remember, we have biases where if we already want to believe something, it doesn’t take much to convince us that these things are real. Again, with influencing elections, you know, instead of a video, what if it’s an intercepted fake phone call, right? And again, CEO fraud or other types of business email compromise, and that brings me to this little story here. This was a story that they said was business email compromise; I have not seen anyone be able to prove that yet, but essentially, they said it was the CEO on the phone that ordered them to send this money.
There’s arguments that it could have just been a voice actor, a close voice actor, as well, but it’s the first time I had seen somebody in the mainstream talking about using AI, which is deepfake technology in this case, to do this. Now, not so long ago we actually used to have conferences in person. At RSA, gosh, it was 2020, RSA 2020, they had a booth set up, and there was a person there that was actually doing a technology for call centers. The way call centers work, if you’ve ever seen this, is when you dial into a call center, basically from the time you’ve connected to the call center, and say credit card places, they start building a profile on you. Did you call from a known number? Did you call from a phone that they’re aware of? You know, etc., etc., and it basically starts building a risk profile on that person. So, what they’re doing is they’re actually trying to also do deepfake detection, to try to counter somebody calling and trying to use voice authorization through that, or change their voice through deepfake technologies, and that would then end up alerting the person that gets the call to maybe do another check. Like, if you’ve ever called into a credit card company and you go through and you enter your information, and then when you talk to the person, the person goes “well, I need you to confirm one more thing, can you do this,” oftentimes that’s because you’ve reached a point where something didn’t mesh 100%, and so they want to do an extra verification. They’re trying to do that.
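The risk-profiling flow described above might look something like the sketch below. Every field name, weight, and threshold here is hypothetical, invented only to show the shape of the idea: signals about the call accumulate into a score, and past a threshold the agent is prompted to do that extra verification.

```python
# Hypothetical sketch of call-center risk profiling. All field names,
# weights, and the threshold are made up for illustration; a real system
# would combine many more signals, including voice biometrics.

def call_risk_score(call):
    score = 0
    if not call.get("known_number"):        # caller ID not on file
        score += 2
    if not call.get("known_device"):        # device fingerprint not seen before
        score += 1
    if call.get("voice_match", 1.0) < 0.8:  # weak voice-biometric match
        score += 3
    if call.get("synthetic_voice_flag"):    # deepfake detector tripped
        score += 5
    return score

def needs_extra_verification(call, threshold=3):
    return call_risk_score(call) >= threshold

# A call from an unknown number with a weak voice match that also trips
# the synthetic-voice detector should trigger the extra check.
suspicious = {"known_number": False, "known_device": True,
              "voice_match": 0.55, "synthetic_voice_flag": True}
print(needs_extra_verification(suspicious))  # → True
```

The "confirm one more thing" moment on a real call would correspond to `needs_extra_verification` returning `True` partway through the conversation.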
Well, anyway, the guy had a deepfake video of one of the actresses, I don’t remember who it was, up there, and they had this video running, and I asked the guy, I said, “say, have you ever actually seen an attack come into a call center that’s a deepfake?” and he actually pointed to this story right here. And I said, well, nobody’s ever proven that that was actually a deepfake, and he admitted it, he said, “yeah, you know, we haven’t actually seen it happen, but it’s coming,” and I don’t disagree with that. I think we’re going to see more and more of this type of scam going on in the future, especially if we’re using voice recognition and things like that to validate a person. So, I understand that there’s going to have to be some defense. So, the first thing about defenses is detecting them, so how do we do that? Well, whenever you have some sort of an audio or a video or something like that, there is going to be an artifact left, okay? We may not be able to see it necessarily, not with our naked eye, depending on how good it is, but like in videos or in photos there’s going to be a softening or blending of the pixels, usually around the part that’s been replaced; that’s how it smooths it and makes it easy on the eye. Well, digitally, we can detect some of that stuff, it shows up a lot better on the digital side. Now there are some ways around that too, and that is you can take, like, a video that you’ve done a deepfake on, and then re-compress it a couple of times: let’s say you turn it into an MP4, and then you take that and you put it back in an editing program and re-encode it as MP4 again, or, you know, H.265 or something like that, and you start messing with that a little bit and re-encoding it.
What’s going to happen is, you’re going to degrade the quality, and those things can kind of disappear a little bit, so they fall into the rest of the noise that’s being created as you redo this. There are some tricks around that, but that’s a key way to see it: the compression artifacts in video, and/or the smoothing of the pixels. Now in audio, there are things they do where the AIs still create sounds that are impossible for us to make with our mouths and our tongues. So, in our language there are words that we say differently depending on what word they’re paired with, so our tongue may not be able to move from one sound to another very quickly. I know it sounds crazy, right? But our brain fills in that gap, so we make a similar sound. Now the AIs on the audio side tend to make the exact sound, which is impossible for us to make; therefore it can be picked out digitally and through an AI, because it knows these sorts of things that we can’t do. Now again, with GANs and things like that, we’re going to be playing games to improve stuff, but that is one way we do that. To us these sounds seem the same, but they’re not actually made the same, and they don’t actually sound the same when you look at them. There’s going to be automation, there already is some automation, looking for this kind of stuff, YouTube, Facebook, etc., and there are still issues on how we deal with this. Do we just flag it and let it go? Do we ban it? You know, what do we do with this? There are some ethical issues with this because some of this stuff is done in fun or parody or something like that, right? You know, there are some Zuckerberg ones out there that I’m surprised ended up on Facebook and are staying up, although they typically tag him. But there’s some ethical stuff that we have to think about with that when it comes to, is it censoring by removing, etc., etc.,
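The impossible-transition tell on the audio side can be sketched as follows. Human articulators can't jump between very different sounds instantly, so real speech changes gradually frame to frame, while a naive synthesizer can snap straight to the target sound. The frame values and the threshold below are made up for illustration; real detectors work on full spectral features, not a single number per frame.

```python
# Hypothetical sketch of spotting synthetic speech by transition speed.
# Each number stands in for "where the mouth is" in one audio frame;
# the articulation limit is an invented constant for the demo.

def max_frame_jump(frames):
    return max(abs(b - a) for a, b in zip(frames, frames[1:]))

def looks_synthetic(frames, articulation_limit=0.3):
    # Flag audio whose sound changes faster than a mouth plausibly could.
    return max_frame_jump(frames) > articulation_limit

human_like = [0.10, 0.18, 0.27, 0.35, 0.41, 0.48]  # gradual transitions
synthetic = [0.10, 0.12, 0.75, 0.78, 0.20, 0.22]   # instant jumps to exact targets

print(looks_synthetic(human_like))  # → False
print(looks_synthetic(synthetic))   # → True
```

This is the "exact sound" problem in miniature: the fake hits each target perfectly, and that very perfection is what gives it away.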
and then, again, they’re going to use neural networks to detect fakes, so it’s an AI looking for AIs. Those are the things that we use to try to spot this, and kind of where we’re going with that. Now the bias issue comes back to this, and I’ve mentioned this before: deepfakes are going to enhance prejudices and biases. So if we believe something already, we’re going to believe it even more strongly when we see it, yes, and we don’t need a high-tech hoax to manipulate somebody into believing something they already want to believe, right? We see that all the time with misinformation on Facebook and stuff like that; you have people reposting things that are not true, and they don’t necessarily mean to do it all the time, but, again, you’re going to be really critical of the things you don’t agree with, but if it aligns with your beliefs already, you’re more likely to trust it, and that’s how that kind of continues to go out there. So yeah, the truth is out there as long as we care enough to look for it, and that’s kind of the big problem. We need to be very, very critical of all of this stuff, especially if you’re in security and looking at these sorts of things. We need to be extremely critical about this stuff, and when we see something that’s truly shocking, we need to check it whether or not it reinforces our beliefs; we need to check it just as much as if it absolutely countered our beliefs. So, defending against it: you know, we know we can detect it in some cases, so how do we defend against it? Well, the first thing we gotta do is understand it. Typically these are going to be used in social engineering, whether it’s just misinformation spreading or whether it comes down to the bigger things. Now, I mentioned earlier, social engineering is largely an emotional attack because it drives us into system one thinking, and the key things that they often use are greed, like the Nigerian Prince scam, right?
Curiosity, if you’ve seen those ads, clickbait ads, you know, like “the top five things that blah blah blah, and number four will blow your mind,” that’s a curiosity thing, and it’s based on psychology. It’s pattern interruption based on information gap theory, so there’s actually psychology behind that. Self-interest, of course, we always want to do stuff for ourselves. But urgency, there’s almost always urgency in these attacks. I see a lot of phishing emails, I hear a lot of stories; in about four and a half years of doing presentations on this stuff, in all kinds of places, I’ve only had one person say they’ve received a phishing email that was like a “whenever you get around to it, take a look at this thing.” I’ve never personally seen that; it’s always “you need to do this quickly, it needs to happen now.” The idea being you don’t give the person time to apply that critical thinking, or to happen to run across somebody and ask the question “hey, about that thing you wanted…” No, it’s gotta be urgent, it’s gotta happen now, again driving us into system one thinking. Fear is interesting because people are about 50/50 as likely to counter fear or aggression as they are to back away from it. So, if somebody gets on the phone and they get, you know, upset with you and they start yelling at you, some people will counter that by yelling back, other people will back off; it depends on how the person reacts. A good social engineer doing something on a phone call like that, like a phishing call, will be able to tell very, very quickly how the person is going to react, and they will modify their script or how they’re going to do things based on that person. If they’re fearful, if they back away and they suddenly become very compliant, then they can take a softer hand at it, etc., etc.
So, this is, again, another emotion. And then helpfulness, right? We’ve all seen the scams that happen around tragic times; I mean, we’re in the middle of the pandemic right now, people need help. Money scams, stuff like that, that’s constantly going on, so we have to teach people to look at this and understand that. And the way that I like to phrase this is, if you get a phone call, text message, or an email that elicits a strong emotional response, step back, take a break, wait till you can apply some critical thinking to this thing, right? Very rarely can something not wait, you know, a minute and a half or two minutes more while you have time to go “okay, wait a minute, this has me upset, let me look at this in a critical way.” That’s what we have to teach people. That way, no matter how much shine and polish they have on these fake things that are coming in, we’re able to look at it from a much better angle and a better critical thinking side, right? One of the things we have that I love is this, it’s a PDF, it’s just a free PDF that’s available on our website, but it reminds people of things to look at; this is the social engineering red flags PDF. So somebody gets that email and they’re upset, right? “Oh, this is something that happened, and I’m all riled up about it,” so now what they do is they can end up looking at this and it reminds them of the things to look at, and I think we can probably do something like this for deepfakes too. It would be interesting to do this now, as deepfakes get better, for videos: you see a video, you get something like that, or voicemails, doing this. But in the heat of the moment, people tend to forget what they’re supposed to look at. I got this email, it’s really urgent, etc., etc., now what do I do?
Well, this is the kind of stuff that’s a reminder for that, so put things up around your office with stuff like this, or have people keep it nearby, or keep a cheat sheet of some sort, because “I got something that made me feel weird, what do I need to check?” That way they’re not trying to remember it when emotions are high. So that’s about all the time I have, I think I nailed it at 4:30, so that’s about an hour right there. Let’s look at some questions; I don’t see anything in the Q&A box right now. I would love to hear some questions from you all, so let’s hear it, anything in chat or Q&A. Let’s see what you have going on here, anybody? There’s gotta be some questions, right? Comments? Concerns? Okay, well, if there’s no questions, we can go ahead, and here’s my contact information, so if you think of something later and you want to ask questions about this, I’m happy to chat a bit about it. I’m on LinkedIn, that’s my Twitter handle, @ErichKron. I’m always happy there to chat with people, and then of course my email address, too. Happy to chat through that as well.

Host (01:00:20): Perfect, thank you so much, Mr. Kron. That was definitely informative and really interesting to see your expertise on this topic. With that being said, for all the attendees, after this [inaudible], you’ll notice a short survey on your screen; please kindly fill it out. It will be greatly appreciated. With that being said, this concludes [inaudible] today, and thank you again for coming out today, Erich, and to all the attendees as well who’ve attended, and we hope you have a great rest of your day.

Erich Kron (01:00:46): Thank you.

Video Transcription End

