Cloud Security Today

Securing Democracy: DNC's Cyber Cop

July 21, 2023 Matthew Chiodi Season 3 Episode 7

On today’s episode, CSO at the Democratic National Committee, Steve Tran, joins Matt to talk about magic, AI, and cybersecurity. As the CSO for the DNC, Steve leads their IT, physical, and cybersecurity strategy. When not defending against dedicated adversaries, Steve can be found doing “off-the-cuff” performances at the world-famous Magic Castle in Hollywood.

Today, Steve talks about how he incorporates magic into cybersecurity, his transition from law enforcement to cybersecurity, and how to mitigate risk in a fast-moving environment. What are the potential risks of using generative AI? Hear about our susceptibility to mental malware, thinking strategically versus tactically to solve problems, and how Steve manages to stay sharp day-to-day.

 

Timestamp Segments

·       [01:21] Steve, the magician.

·       [05:14] Parallels between magic and cybersecurity.

·       [07:21] Transitioning from law enforcement to cybersecurity.

·       [16:26] Using magic to manage mental health.

·       [21:25] The DNC.

·       [22:19] Decentralization and security.

·       [24:59] Getting buy-in.

·       [27:42] Thinking strategically.

·       [29:09] Mitigating risk in a fast-moving environment.

·       [36:00] AI and cyberattacks.

·       [43:25] Potential issues with AI.

·       [50:46] How Steve stays sharp.

 

Notable Quotes

·       “Mental health can really affect cybersecurity professionals.”

·       “Business isn’t meant to be just transactional.”

·       “One of the biggest barriers to why people don’t buy into it at first is because they don’t understand it.”

·       “Security issues don’t care if you don’t have a budget or don’t have a team.”

·       “Once you get people to feel a certain way, you can’t undo that.”

·       “There’s no better way to learn than to have to teach material yourself.”

Secure applications from code to cloud.
Prisma Cloud, the most complete cloud-native application protection platform (CNAPP).

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Intro: This is the Cloud Security Today podcast, where leaders learn how to get Cloud security done. And now, your host, Matt Chiodi.

 

[00:14] Matt Chiodi: The Democratic National Committee is part of the United States Democratic Party, and it's an organization that was established way back in 1848. Today, on the program, it's not about politics. We are talking with Steve Tran, who is the Chief Security Officer of the DNC, the Democratic National Committee. We cover a lot of different topics. As you can already see, this is probably the longest podcast episode that I've had in the two-plus years that we've been running this program, and the reason is we cover a lot of different topics. We talk about AI, we talk about Cloud, but probably most interestingly, we dive into Steve's background. Steve's got this awesome background. He spent many years as a police officer in California. He's been into magic since he was a child. So, we cover all of those topics. So, if you're someone who has been wanting to get into cybersecurity, this is the podcast you want to listen to. Enjoy the episode.

Steve, welcome to the show.

 

[01:17] Steve Tran: Thanks, Matt, for having me. Excited to have a discussion.

 

[01:19] Matt: Awesome. Awesome. Alright, so I've got to cover these two things right off the bat. So, there are two things that are not on your LinkedIn profile. One, is that you were a police officer, and two, you’re a magician. I want to talk about both of them, but just because I'm always intrigued by magicians. Let's start with magic. How has being a magician helped you in the realm of cybersecurity? What are the parallels? And maybe, just give us, even before you answer those questions, how'd you get into magic?

 

[01:55] Steve: That's a great question. I initially got into magic as a way to improve my communication skills and to be less shy, because I'm just naturally an introvert, and I was inspired by watching other magicians perform on TV, like David Copperfield. I was just really inspired, and then the more I learned about it, the more I was hooked. At the same time, I was learning a lot about technology, because around the same time, the Internet was fairly new. People were trying to figure out what AOL was, how to use it, and what you could do with it, and for me, I was phishing for accounts at a very early age, but at the same time, I was doing magic. I never put the two together until not that long ago, because I looked at them as two separate things, but the more work I do in the cybersecurity field, the more parallels I'm starting to see.

 

[02:59] Matt: I love it.

 

[03:01] Steve: An example parallel would be phishing, business email compromise. For security professionals in the field, these are things we deal with on a daily basis, but when you really think about what's happening, the threat actors are magicians. They're able to convince people to divulge information, personal information, credentials. We all know, instinctively, our credentials are something we shouldn't easily give up, but yet, it happens so often, and that's where we preach the use of multi-factor authentication and things like that. So, when I think about magic, it can be used as a tool to entertain, inspire, to improve storytelling, but it could be used in very unethical manners, too, and it could be used to social engineer people, and when we talk about social engineering, it could be done on the lower scale of phishing and low-level types of attacks, but then it could also be used on a grand scale, a bigger scale, where, if you can fool people […], then you could fool people to give up credentials, and then you start to realize, you can fool people on a mass scale and influence elections, and you can start threatening democracy, you can start to unravel communities.

 

[04:38] Matt: Wow, that's interesting. I didn't think about all those parallels, which is why I wanted to ask the question. I guess, a question for you, as you mentioned that you got into magic because you wanted to work on your speaking skills, you wanted to be less shy. It's interesting. Are you familiar with the Toastmasters Program?

 

[05:00] Steve: I am.

 

[05:02] Matt: Okay. So, I'm curious, early on in my career, I did Toastmasters for that reason. I wanted to be able to be a confident speaker. I think you mentioned that there was somebody that maybe influenced you in magic. Did you have that? Was there someone that influenced you? And, I guess, now that you are the chief security officer of the DNC, the Democratic National Committee, which we'll get into a little bit in a moment, how has it helped you? Because, I think, probably, my impressions of magic are largely from movies and things like that. So, you might say, “yeah, that's not real magic,” but where else have you seen parallels? I mean, has it helped you to be a better leader, a better cybersecurity practitioner?

 

[05:53] Steve: It has, because we're people at the end of the day, regardless of the complex problems that we face each day, and what I've learned is how to use things like magic to connect with people, and I think being able to connect with people has taken me far in my career, because we can talk about cyber hygiene until we're blue in the face. It's another thing to not only talk about it, but to actually gain adoption, to get people to connect the dots on their own without you being overly involved, where now you feel like you're a policing arm of the organization and you're forcing it, and it just becomes very ineffective, because what happens when you're not around? What happens when your program is no longer funded at the same level it was before? Or there could just be a lot of changes, and that speaks to, how resilient can an organization be? And what are the things that organizations can do to be more resilient? And how do security leaders and professionals like ourselves influence that resiliency? Well, not just influence, but implement it, to be able to make it sustainable, year after year.

 

[07:14] Matt: I love that. I love that. All right. Now, to my second question, which is almost of equal interest to me, is that you were a police officer for years prior to working in cybersecurity. What were some of the most challenging things for you about transitioning from law enforcement to cybersecurity? And obviously, there are parallels, but when you think about taking off the uniform and doing cybersecurity, what stands out, in terms of things that were difficult for you, in terms of the transition? And then, maybe, talk about how did you address some of those things?

 

[07:56] Steve: What was interesting about my law enforcement career was, when I started, the iPhone didn't exist yet, and it was interesting to see how things changed when smartphones became more ubiquitous out in the field. What I really noticed was how far behind our laws were with respect to technological innovation. It was really far behind, and seeing the intersection between law and technology at that time was very fascinating, because it's the same thing we're going through right now with AI. Whenever new, innovative, disruptive technology is introduced, a lot of times, how we regulate it or create laws around it, for various reasons, becomes either an afterthought, or bad things have already happened before change happens from a legal perspective. So, an example would be search and seizure laws. I remember doing search and seizure on suspects that had pocketbooks, little notebooks, where they would write down who they sold drugs to, and things like that. So, it was like a ledger. It's very low tech, and they would keep it in a notebook on their persons.

So, in California, the ways we could look through the contents of a pocketbook were either through consent, where you consent to the search, or probable cause to search, or a search after arrest, but things changed when people started using the notebooks, or pocketbooks, less and smartphones more. So, at that time, same situation: we're arresting a drug dealer, and instead of the pocketbook, they have a smartphone, and there's a good probability that there's critical evidence stored on that smartphone. So, do we treat the smartphone the same as a pocketbook? Because the law doesn't account for something like a smartphone.

So, that was very interesting to work through, and to go through our courts to see how law enforcement should handle cases, and our amendments and our rights when it comes to digital evidence. It was very interesting to then think about our Fourth Amendment rights, search and seizure, or our Fifth Amendment right against self-incrimination, because it's instinct for us to ask people to unlock their phones, but what we didn't realize we were doing at the time was, whenever we would take their fingerprint and unlock it, or get them to put in their code, essentially, we were violating their Fifth Amendment right against self-incrimination. Of course, we don't know that until we go through all of these things, and it goes through the courts, and then attorneys bring this stuff up and it starts a discussion.

So, the only world I knew was one where I'm chasing down criminals, making arrests, and investigating, which is radically different from what I'm doing right now, because I'm no longer trying to prosecute and find people to arrest. So, that transition was difficult because of the switch in mindset. I went from hunting people, chasing people, investigating, and one of the things, too, reflecting back: holy cow, I had to take people's freedoms away. That's a huge amount of responsibility and burden, on top of also trying to stay alive. So, the hypervigilance, and things like that.

So, after many years of being in that environment, it made it really hard to switch to the civilian world, because I had to learn how to be less hypervigilant, and I also had to start interacting with people differently, from a context of, “okay, I'm no longer trying to get a confession. I also don't have to be overly aggressive because people aren't trying to kill me. We're in a conference room,” and I have to remind myself, “okay, these people are not going to attack and kill me, but I need them to turn on MFA. How do I get compliance?” And I'll be very candid: the early days, when I switched to the civilian role, were very hard, because I think I was overly aggressive, not because I was trying to be an A-hole. The shift in mindset was just so hard at the time. It was difficult because you're used to doing something 24/7, and now, all of a sudden, you've got to immediately turn that off and learn a new behavior.

 

[13:29] Matt: Yeah, I think, if I'm hearing you correctly, and feel free to tell me if I'm wrong, you were shifting from a mindset that was very much, I don't want to say defensive, but you were already going after things where, in many cases, something occurred and you were trying to dig for evidence, whereas when you're on the cyber side, yes, there's obviously defense as well, but maybe there's more proactivity that should be there. Was that part of it, or am I totally off?

 

[14:04] Steve: Yeah, you go from enforcement to, then, compliance. It's kind of the same thing. On the law enforcement side, I'm trying to enforce the law and I'm trying to go after the criminals and the suspects, but then, when you do encounter them, you have to get them to comply when you're trying to handcuff them, or comply during a search, because there's a lot that comes into play from an officer safety perspective, but a lot of it's people management. How do you manage this person while also not escalating? And things like that. On the private sector side, we see someone not using MFA when they should be, or they're creating overly permissive accounts, or their Dropbox or Google Drive, or whatever, is open to the world without any authentication, and things like that. So, a lot of it, when you think about it from a GRC perspective, is, how do you get people to comply? If you come from being a law enforcement officer, where, if you want people to comply, a level of force can be used, or is an option available to you, well, in the office world, I can't use physical force to get someone to turn on MFA or to comply with security policies and things like that.

 

[15:30] Matt: I don't know if you remember the skit, but do you remember Terry Tate, office linebacker? Do you remember those Reebok commercials back in the day?

 

[15:36] Steve: Yes. Oh, my gosh, I was watching this like, “Am I this person right now?” in the first year of my transition.

 

[15:42] Matt: That's right. For people who are listening, I will try to remember to link this in the show notes, but Reebok did a series of advertisements. I don't know if the first one ran during the Super Bowl many years ago or not, but they did a lot of these follow-on skits, and they are, to me, still some of the most hilarious skits. So, check it out. Google Terry Tate, office linebacker, but yes, it sounds like, Steve, when you were first starting out, making that transition, you were like Terry Tate, wanting to use physical force to get people to comply, and you realized that's not the way to build relationships long-term, and maybe this is where your magic skills-- I'm curious. I probably should have asked this up front: was there an overlap? Were you into magic when you were a police officer? Or was that anywhere in this transition at all?

 

[16:35] Steve: Oh, yeah. I was a magician as a kid, and I stuck with it. What also helped was, after working as a police officer, there are things we're exposed to that you can't unsee, you can't un-experience, and a lot of those events are very traumatic, and that was a challenge for me, too, in the early days, being medically retired, and then having to switch to the civilian world, was managing those traumas, managing PTSD, and that's why I'm such a big supporter of mental health, because there was a time where I was in a very dark place, but I was very fortunate to have resources and people in my corner to help me recover and to be better, and to manage it better.

Unfortunately, with some mental health issues, it's permanent. There's no cure. The best you can do is manage it, and how well can you manage it? Trying to look at it from a glass-half-full perspective, I look at it like this taught me to be more resilient, and in cybersecurity, it's about resiliency, especially if you're an […], too. Especially with traveling and long hours, and stress, and things like that. Mental health can really affect cybersecurity professionals, as well, and when I build teams, manage teams, support teams, develop teams, that's part of my leadership principle: how do we foster and promote mental health and well-being? Because I went through it myself, I'm able to better help you navigate these waters. It can be scary sometimes for some people, especially if they've never been through it before and there's just a lot of unknown.

So, I do my best to lead through emotional intelligence, as well, because of those past experiences, and magic really was a great tool to help with that, too. Especially during times where I just don't want to think about the times where someone died in my arms, or the times where I couldn't save somebody. So, then I go back to magic and think about the wonderment of magic and how it can bring joy to people, and for a lot of professional magicians that do perform, that's why they become performers: it's their gift to the audience. They want to bring joy. That's why we do it. No one's in magic because they want to be an A-hole and piss people off. They get into magic because they thoroughly enjoy entertaining, and […] brings joy to people.

So, I'm very grateful to have magic as a mechanism to decompress and to unplug when we need to, and then I even took my magic to a whole other level. There was a time where being a member of the Magic Castle in Hollywood was an impossible thing. It wasn't accessible for someone like me growing up, and when I had an opportunity to audition, some of the greatest feedback I got during my audition was, “you're great, technically. Your sleight of hand is great, your technique is great, but we can tell you practice a lot in front of a mirror and not in front of people,” and it goes back to working with people again and relating to people.

So, it forced me to take even that part up to the next level, because the whole point is to really engage with your audience. How do you engage with the audience? How do you make them feel what you want them to feel? So, same thing here in the cybersecurity world: whether you're working with senior leadership, middle management, individual contributors, or even third parties, it doesn't matter who you're working with, how do you connect with them? The same way I would connect with an audience, and things like that, too. So, it's a lot of complex things that, when woven together, can produce really positive results.

 

[20:57] Matt: Absolutely, and like you said, being able to connect with people, connecting at an emotional level, I think that is really rewarding. Business isn't meant to be just transactional. There are people, and I think that's a really important point. I really appreciate you sharing some of that knowledge that might be hard to share, about some past experience, as well. So, I really honor you for that one.

Steve, I mentioned this at the beginning of the episode. At least in the US, most of us know the Democratic National Committee, or the DNC, but for those who don't, what's the mission? And what does your org structure look like? I know that it may be a little bit unique. Talk about that a little bit.

 

[21:44] Steve: Yeah, so the Democratic National Committee is the national governing body for all Democrats, and the mission is really to support all Democrats up and down the ballot. The structure is like any other private organization, or any company, where you have a CEO, COO, and then all your different business units, and each business unit has its own separate mission that rolls up to the overall mission of supporting Democrats on the ballot across the country.

 

[22:19] Matt: So, it sounds like you're talking about a highly decentralized organization, and it would seem like, and tell me, give me your thoughts on this, decentralization could be the enemy of security. Is it?

 

[22:34] Steve: Yeah, so internally, the DNC itself is not the decentralized part of it. The decentralized part is when we talk about the different state parties, and it's no secret, when you look at past stories and reporting, how the ecosystem works, but it's very similar to how Hollywood works, as well. So, that's another parallel that I noticed when jumping to this side: when you think about how the studios work, and the different production houses, and the third parties that they work with to make movie and television content, it's kind of similar in the political world.

So, it was something that wasn't too foreign for me, but it's still a challenge nonetheless, because it's really easy to accomplish your security strategy, programs, and goals when you have direct governance and oversight. It's a completely different thing when you don't have that same level of direct oversight and control or governance over the decentralized part, but there are shared risks, because across all of these decentralized operations, there's shared risk. So, how do you solve for that when you don't have the full authority to go to any one decentralized component and say, “you have to do this in order to achieve this”? So, it goes back to magic again. It's like, “how do I convince them, somehow, to do the right thing so that we can all be secure?” Because it does take a collective. It requires collective participation and adoption and buy-in in order to really achieve security in that type of environment and at that scale.

 

[24:30] Matt: I think the question that comes to my mind when I think about this, decentralization, was an issue I dealt with during my time at Johnson & Johnson. There were 250 subsidiary companies in 60 countries, and at the time, this is going back a number of years, we very much struggled with the whole concept of decentralization. Same thing during my time at Deloitte. I know you spent some time at Deloitte. Have you found something that's effective for basically turning what you would consider to be an ideal state, from a cybersecurity perspective, into action? You mentioned that, maybe, your background in magic is helpful, but how do you get that buy-in when you're dealing with, essentially, a federation?

 

[25:23] Steve: That is a great question. One of the things I've noticed to be effective is, how do we story-tell to these different decentralized parts or entities? And then, how do we continually demystify security? Because I think one of the biggest barriers to why people don't buy into it at first is because they still don't understand it. How good of a job are we doing? Because I also think about, it's not just about other people and other departments. It's also being self-aware of your own capabilities, too. How well are we doing at making it easier for them to understand? What I'm noticing here, too, is everybody wants to be secure. No one pushes back on that. Everyone wants to feel safe and secure, but they want to know, how do we do that? So, then I start thinking about how accessible I am making it for them, because I don't expect them to be security experts to do security, either. So, where's that balance?

So, from my experience, what's working here is that messaging is important. How do we frame and engage with everyone? And how well do we know their persona so that we can better tailor the message? At the same time, we demystify cybersecurity as we're messaging things out to them, in a way where they immediately understand what their problem is, and then how they can solve that problem, because once they understand what their problem is, and you get them to really be motivated in finding the solution, that's the next part of it. Now, we feel like we're being invited to help find solutions with them.

 

[27:27] Matt: That approach, I agree with, but I think some people are probably thinking, “man, that's going to require a lot more time upfront.” Let me just tell you, from my background, what I have found with any type of project that I'm doing, whether it's a cyber project or a financial project: there seems to be a proportional relationship between the outcome and how much time I spend upfront, like Stephen Covey said, “begin with the end in mind,” figuring out how to make it win-win, what goals we're trying to accomplish, and bringing in stakeholders. When I look back over the last 10 to 20 years, the projects where I felt like I was wasting time upfront, like, “man, we're not actually doing anything. We're just talking about what we want to do,” those are the projects that often had the best outcomes. Have you seen a similar parallel in the DNC in your work there?

 

[28:31] Steve: Yeah, definitely. Because what I've noticed, too, and it's like a bad habit in many places, is that people just think tactically, but no one takes a step back and thinks strategically, and I think there's also a time and place where it's better to be tactical first and strategic second, but when it comes to a big problem like this, I never underestimate just taking a step back and thinking strategically, to your point, putting a lot of thought upfront to make that downstream journey more productive and effective.

 

[29:09] Matt: So, in some of my research for this podcast, I found that, it seems to be public knowledge, at least because it's on AWS's website, that the DNC runs its website and voter data collection on AWS. Now, this is, I believe, from back in 2015, so you tell me if this is not the case anymore, but it appears to be a straightforward story of reducing costs, improving load times, and providing content to Democratic voters faster. Assuming that's still true, many of the CISOs that I speak with are struggling. They're in the Cloud, they've been there, oftentimes for many years, but they're still struggling to secure their Cloud presence because it's such a dynamic, fast-moving environment. The Cloud service providers are constantly adding new features, new functionality, changing APIs.

From your perspective with the DNC, it mentions in the case study, voter data collection, there's some sensitive data that you guys could potentially have there, what processes have you guys put into place, or maybe even frameworks, to help keep that risk in check, but at the same time, allowing your developers, your third parties that you're working with, to be able to move quickly? It's a big question, but what does that look like for you?

 

[30:30] Steve: It's the same as for any other organization that's an AWS customer. I wish I could say there's something really special about us, but we're just like everyone else who uses that same service, and we encounter the same problems as many other organizations that use either AWS, Azure, or GCP. It started with Cloud posture management. How do you do that? And then also, when I think about security operations in the Cloud versus how we used to do it On-Prem, they're vastly different, and what I love about securing Cloud workloads and services is that I can be more automated.

Working in the Cloud lets me be more programmatic with how we operate security, both proactively, left of boom, and reactively, right of boom, and I feel like it also makes the portfolio much more affordable, because I don't have to bolt on so many ad hoc or siloed tools and solutions. I'm not a fan of having a bloated security portfolio and budget where we can't even cover 60% of our environment, or even address the top three risks that we should be concerned about as an organization, because you're constantly trying to combat and swat at symptoms, rather than really understand the root causes of our issues, the root causes that cause all these symptoms, like the disease. That's what I really love about what Bob Lord did when he became the first CSO here at the DNC, and now, being here after him, I've made a commitment to continue carrying on his approach and model, because we continually and frequently sit back and start thinking about, “if we're trying to solve security point XYZ, do we necessarily have to do it by buying a tool, or is it something that can be solved by something even more basic and achievable that doesn't require that level of spend?”

When he created the DNC checklist, I think that was the greatest thing ever, because we go through that checklist, which is also publicly available on the democrats.org website, and it covers basic things like “enable MFA, patch, don't reuse passwords, use a password manager,” and things like that. So, we go through that really in-depth checklist, which solves for a majority of the reasons why breaches occur, and when you look at the checklist, these are things that are totally achievable without having a $10-million-plus-a-year budget for security, but of course, every organization is different.

So, that's something I want to throw out there, too. Every organization has its own threat model. They have different risk tolerances and appetites, but when I think about SMBs and the world we're in, I have to shift my mindset a little bit to re-evaluate how we do risk assessments, and mitigations, and avoidance, and things like that, because I think it's vastly different from the last role I held, where we generated a ton of revenue year after year. What if you don't have the same luxury, the same budget, headcount, things like that? Security issues don't go away, because security issues don't care if you don't have a budget or don't have a team. So, that's where you adjust.
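As an aside for readers, the "programmatic" posture checks Steve alludes to often start as simple scripts that scan cloud inventory for the basics his checklist covers, like storage open to the world. Here is a minimal, hypothetical sketch of that idea: the bucket names, data shapes, and grantee labels are illustrative assumptions, not the DNC's actual tooling; a real check would pull live ACLs from a cloud provider's API (for example, via boto3 for AWS).

```python
# Hypothetical sketch of an automated cloud-posture check: flag storage
# buckets whose access-control list grants access to anonymous or
# all-authenticated users. Data shapes here are invented for illustration.

PUBLIC_GRANTEES = {"AllUsers", "AuthenticatedUsers"}

def find_public_buckets(buckets):
    """Return names of buckets whose ACL includes a world-accessible grantee."""
    flagged = []
    for bucket in buckets:
        # Collect every grantee named in this bucket's ACL entries.
        grantees = {grant["grantee"] for grant in bucket.get("acl", [])}
        if grantees & PUBLIC_GRANTEES:
            flagged.append(bucket["name"])
    return flagged

if __name__ == "__main__":
    # Invented inventory standing in for what an API call would return.
    inventory = [
        {"name": "voter-outreach-assets", "acl": [{"grantee": "AllUsers"}]},
        {"name": "internal-reports", "acl": [{"grantee": "TeamMembers"}]},
    ]
    print(find_public_buckets(inventory))  # flags only the world-readable bucket
```

Run on a schedule against real inventory, a check like this is "left of boom" automation: it surfaces the overly permissive configurations Steve describes before they become incidents, without buying another tool.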

 

[34:33] Matt: It sounds like you potentially could use a lot of open source or maybe build your own products. Am I reading that correctly, or no? I'm curious.

 

[34:44] Steve: Those are viable options. Do you do more open source? How does that change our calculation on how we determine buy versus build? But also, if we eliminate entire classes of vulnerabilities, then it also eliminates the need to have to address those problems in the future, or have to spend money to solve those problems, whether you're throwing money at people or technology.

 

[35:12] Ad: Prisma Cloud secures infrastructure, applications, data, and entitlements across the world's largest clouds, all from a single unified solution. With a combination of cloud service provider APIs and a unified agent framework, users gain unmatched visibility and protection. Prisma Cloud also integrates with any continuous integration and continuous delivery workflow to secure cloud infrastructure and applications early in development. You can scan infrastructure-as-code templates, container images, serverless functions, and more, while gaining powerful full-stack runtime protection. This is unified security for DevOps and security teams. To find out more, go to prismacloud.io.

 

[36:00] Matt: There was a lot of hype at RSA this year around the role of technology and AI in detecting and preventing cyberattacks. How are you thinking about this? How are you approaching this at the DNC?

 

[36:14] Steve: This is a fun topic because we're all in the same boat. So, I will also be upfront and admit that I don't know everything about all of the AI threats and risks out there. So, I'm not going to pretend I have all the answers here.

 

[36:33] Matt: I don’t think anybody does, yet. 

 

[36:35] Steve: So, for those listening, I don't want people to think, hey, we have all the answers, or we know it all. What I will say about AI, though, is that it's amazing technology, amazing innovation, and we know that disruptive innovation used properly can be a huge benefit to society, to the world, to humanity and communities; it can make lives easier. However, if we're not thinking about the potential downsides, the bad things that can happen, we are at risk of making the same mistakes we made when social media first became mainstream. I remember when Friendster first came out and Facebook was only for college students. At that time, I never thought social media would be as problematic as it is today, and I feel like this is an opportunity to learn from that mistake, because we let all of that happen without any safeguards. As AI is innovating today, there's an opportunity for us to really start to threat-model: what are we concerned about with AI technology?

So, the way I look at it is: what are the things we can control that are an issue today? Then, what are the things that haven't materialized yet but could in the future, and what can we do about them? When I think about the here and now, I think about intellectual property, data protection, and privacy, because when it comes to generative AI, are we really thinking about what we're inputting, and understanding how it could be used once it's in there? And also knowing how it all works. Does it mean that if I provide an input today, an output five minutes from now can use what I just inputted? From what we're learning, it doesn't work like that in real time, but even though it may not be an issue today, that doesn't guarantee it won't become an issue in the next version that comes out, because whatever is being used to train the models today, even if it's not used today, could be used a year from now, or two or three years from now.

So, I think it's about intellectual property, data, privacy, all the things people have brought up. Another thing I'm curious to see play out in court is copyright, because AI is trained on a lot of existing copyrighted work or data. Are outputs essentially derivatives of that copyrighted work? When I look at the terms and conditions of these different AI providers, some say you can own the output, but is that necessarily true if you're using something like a coding assistant to create a piece of code? Well, how did it know to do it that way? It had to learn from somewhere, and what if it took it from open-source material that carries specific licensing requirements? Then how does that play out?

So, there's almost a transitive chain of legal issues, too, and then also, with how accessible it is for anyone to sign up for generative AI tools, who really owns the agreement? Is it the user and the AI tool itself? Or is it the organization and the AI tool? Because when it's really easy to sign up for things, it makes it really easy to create shadow IT and to bypass traditional procurement processes, legal reviews, agreements, and things like that.

 

[41:32] Matt: Yeah, I think this is probably a place where a baseball analogy is very apt: I don't even know if we're in the first inning of AI. I think we're in pregame warmup. Just today, I was looking at LinkedIn, and somebody I follow posted a generative AI infographic of "here are the attack vectors," and I thought, "that's maybe today, but by the end of the week, that'll probably be radically different, because it's moving so, so quickly." It's funny, something you said at the beginning about solving problems brought to mind the quote often attributed to Albert Einstein: "we cannot solve our problems with the same thinking we used when we created them." What's interesting is, when you think about how generative AI works with LLMs and all the training sets, we are effectively training it with the solutions we have today, and what it's being trained with is what's going to create these potentially new attack vectors. Although I don't think there are, per se, going to be a lot of new vectors.

I think that's because, again, we are the ones training the AI. I just think it's going to more quickly expose risks in an organization that might have taken an attacker weeks or months to find; they'll probably be able to find them faster, which, in my view, is probably going to expedite the adoption of zero trust in a lot of different organizations. I don't know. Do you agree? What do you think?

 

[43:16] Steve: Yeah, that's a really great point and a great lens to look at it through, for sure. We just talked about the issues we're dealing with today; well, what are the potential issues you could be dealing with tomorrow? That's what you just mentioned, but now add on, again, going back to massive social engineering against a large audience, or a country. One of my fears, and again, I'm not saying this is going to happen, it's just a possibility, so at this point we're just thinking about the different possibilities: one, can we prevent it? Is it preventable? And if you can't prevent it, what can we do about it?

So, again, being a magician, and now having sophisticated tools today, I think about how they can be deployed. When I think about things that have been proven in the past, I'm not sure how many people remember Peter Popoff in the '80s, who convinced people he could heal their cancer and ailments. He scammed a lot of people, and there's a really great documentary about this, called An Honest Liar, about James Randi. Randi was a magician who created a reward for anyone who could, under controlled conditions, prove they have psychic abilities, or can move things with their mind, things that are very supernatural, and there was a huge prize for anyone who could pass his test conditions. Because he was a magician, he knew what these charlatans and scammers out there were doing, because they were just doing it to scam people and take advantage of people. What was interesting about the Peter Popoff case, though, was that even after James Randi exposed him and showed the public how he was achieving all of these miracles, people still didn't want to believe it, and it's insane.

So then, I was at CactusCon, I think a year or two ago, and someone put on a really nice talk about mental malware. I'm just thinking about how magic works, how we can fool and deceive people, and at the same time, understanding your own brain and how many of us are willing to admit that we're susceptible to that kind of vulnerability, and how it can be exploited. So, when we think about what AI can do right now, people can use AI to create images of very provocative, controversial things that don't exist. Once you get people to feel a certain way, you can't undo that, from my perspective, and to me, that's scary. It's threatening to how we can coexist and cooperate as a community when someone is able to plant false pretenses, put ideas in people's minds that you can't undo, and then that becomes how people actually make decisions.

 

[47:10] Matt: That's definitely a real possibility with AI: being able to poison datasets, to put misinformation or disinformation in there. And I think we've actually seen, maybe not specifically mis- or disinformation, but there were reports, sometime in the last two months, about several large corporations' general counsel saying, "the data we're getting back from GPT looks like it's directly from our internal material," and that's because employees voluntarily put sensitive information into it, and for a while, before they changed some of the privacy policies with ChatGPT, the models were being trained off those inputs. So, I definitely think that right there is probably one of the new attack vectors for generative AIs, which is why I think we'll likely begin to see the rise of local LLMs, where it's truly just your own datasets you're training with, where you can, hopefully, control some of that.

 

[48:17] Steve: Yeah, that's a great point, because from a risk perspective, there aren't that many generative AI players out there, but there are a lot of other organizations using their APIs and building services on top of those APIs, and it's spreading. You don't have to attack the large companies that are using the APIs; you just go straight to the source, and by poisoning the source you amplify your objective and get a bigger blast radius. So, it becomes a supply chain risk issue, as well. But when we think about human error, it's funny, because I think about the security awareness training we do every month, and how it comes down to sound judgment and human error.

So, let's just assume we have good policies around AI use and there are really good, legitimate use cases for AI. What will still be problematic is compliance. Let's say AI can be used in a certain manner, but it requires you to do some kind of vetting of the output. When you think about human error, someone getting tired, it's the same scenario: if I'm tired and overworked, I overlook things, and then I fall victim to BEC. Same thing with AI. You use AI to do work, but then you get tired, take shortcuts, and you don't vet when you should, and because there are also disclaimers about AI not always being accurate, plagiarism is a real fear.

So then, someone doesn't do what they're supposed to do in terms of vetting once the output's created, it gets used out there, and no one else did any checks. That's a process breakdown right there: people, process, technology. It starts with people, with human error, then a process breakdown, or maybe a lack of process, and now the outcome is reputational damage to the organization, trust issues, and potential litigation issues as well.

 

[50:31] Matt: Well, Steve, we have covered a lot of topics over the last 45 to 50 minutes, and I've just loved talking about them. One question I don't want to leave without asking, because I find it super interesting: every leader I talk to seems to have some type of routine they use to stay sharp. Stephen Covey, I believe, would call it the seventh habit, sharpening the saw. What does your routine look like to stay sharp?

 

[51:00] Steve: I'm learning all the time. What I love to do is keep meeting really smart people, people who are way smarter than me, and learn from them, even doing something like this with you, learning from your perspective. So, that's how I stay sharp: I stay connected, I talk to people, and I'm not afraid to admit I don't know something. I don't know a lot of things. So, I'm learning every day. My routine typically still includes doing CTF challenges to learn, meeting people to learn, networking with people to learn, but also, if there's an opportunity, providing anything useful to them, too, which creates other discussions. I also teach part-time at a community college, because I want to help the next generation be successful.

 

[51:57] Matt: There's no better way to learn than to have to teach the material yourself. I remember hearing that years ago, and having taught just a handful of times at the university level, I had to know the material much better than I thought I knew it. So, I would definitely second teaching as a great way to stay sharp. Thank you for doing that, and thank you for investing in the next generation.

Let me ask this. Where can listeners connect with you? I know you're a busy guy, but I think a lot of our listeners are probably going to want to do some magic. I know, I would love to see you do some magic. Maybe it's out there somewhere. What's the best way for listeners to connect with you?

 

[52:32] Steve: LinkedIn is a really great way to connect with me, because I'm verified on there. So, that would be a good, safe source to make sure, all right, I'm on the right profile, it's verified, because within that, I have links that send people to Mastodon, my website, or Instagram. That way, it's a single source of truth.

 

[52:56] Matt: I love it. I love it. Well, Steve, this has been a great conversation. Thank you so much for your time today, and hopefully, we'll have you back on again next year. Thanks so much.

 

[53:05] Steve: Thank you, Matt.

 

Outro: Thank you for joining us for today's episode. To find out more, please visit us at cloudsecuritytoday.com.