
The Human Defense Layer

Most cybersecurity programs work against human nature instead of with our innate behaviors, resulting in breaches even though organizations may already have spent large amounts of money on security technologies. Perry Carpenter, our guest this week, helps you understand your end-users and build an effective Human Defense Layer to bolster your cybersecurity posture.

Tune in to this episode of Ask A CISO to hear:

  • How Perry defines the Human Defense Layer
  • What inspired Perry to focus on the Human Defense Layer
  • Are Security Awareness exercises and programs effective?
  • How current approaches to managing the human element in cybersecurity are setting end-users up for failure
  • Why it's important to understand the intention-action gap in dealing with humans and cybersecurity
  • How to start building a holistic security culture
  • How and why we cybersecurity practitioners should work with, instead of against, human nature to build a strong security culture
  • How you can measure the maturity of your security culture

About The Guest: Perry Carpenter

Perry Carpenter (author of The Security Culture Playbook: An Executive Guide to Reducing Risk and Developing Your Human Defense Layer, and host of the 8th Layer Insights podcast) currently serves as Chief Evangelist and Strategy Officer for KnowBe4, the world's most popular security awareness and simulated phishing platform.

Previously, Perry led security awareness, security culture management, and anti-phishing behavior management research at Gartner Research, in addition to covering areas of IAM strategy, CISO Program Management mentoring, and Technology Service Provider success strategies. With a long career as a security professional and researcher, Mr. Carpenter has broad experience in North America and Europe, providing security consulting and advisory services for many of the best-known global brands.

Perry holds a Master of Science in Information Assurance (MSIA) from Norwich University in Vermont and is a Certified Chief Information Security Officer (C|CISO).

About The Host: Paul Hadjy

Paul Hadjy is co-founder and CEO of Horangi Cyber Security. 

Paul leads a team of cybersecurity specialists who create software to solve challenging cybersecurity problems. Horangi brings world-class solutions to provide clients in the Asian market with the right, actionable data to make critical cybersecurity decisions.

Prior to Horangi, Paul worked at Palantir Technologies, where he was instrumental in expanding Palantir’s footprint in the Asia Pacific. 

He worked across Singapore, Korea, and New Zealand to build Palantir's business in both the commercial and government space and grow its regional teams. 

He has over a decade of experience and expertise in Anti-Money Laundering, Insider Threats, Cyber Security, Government, and Commercial Banking. 



Hello, and welcome to today's episode of the Ask A CISO podcast.

My name is Jeremy Snyder. I'll be hosting today's episode, and I am delighted to be joined by a really esteemed guest today, Mr. Perry Carpenter. Perry is the author of The Security Culture Playbook: An Executive Guide to Reducing Risk and Developing Your Human Defense Layer and the host of the 8th Layer Insights podcast.

Perry currently serves as the Chief Evangelist and Strategy Officer for KnowBe4, the world's most popular security awareness and simulated phishing platform.

Previously, Perry led security awareness, security culture management, and anti-phishing research, sorry, anti-phishing behavior management research at Gartner Research, in addition to covering areas of IAM security, CISO program management, mentoring, and technology service provider success strategies. With a long career as a security professional and researcher, Perry Carpenter has broad experience in North America and Europe, providing security consulting and advisory services for many of the best-known global brands.

Perry holds a Master of Science in information assurance from Norwich University in Vermont and is a certified Chief Information Security Officer. Wow. C|CISO. I didn't know that was actually a thing.


It is a thing, though I don't know what value it has to somebody as they get further and further in their career. I think it may be one of those things, like a lot of certifications, that help you at a very pivotal time in your career but don't really do a lot of good after that.

So, I don't know. It's nice to have letters though.


Is it that pivotal time when you're actually taking your first CISO role? Is that kind of what you're referring to?


I would guess so. I was kind of, I'm trying to remember the time period that I decided to take it. It was really, I think back in my Gartner days when I wasn't a CISO, but I was speaking to a ton of CISOs all the time. And so what I really wanted to do is kind of say, Hey, yes, I come at this from a Gartner-esque type of mindset, but I do have a lot of the skillsets and the practical knowledge that is going to be valuable to you too.


Yeah, I think that kind of customer or audience empathy is something that we in the cybersecurity world often lack, to be honest. I tend to think of cybersecurity as a very technical domain and most of the people working in it as technologists first and foremost, not, let's say, people people.




And so we tend to approach problems as technical problems first and foremost. Always looking for technical solutions and not looking for the human solutions.

Does that kind of match a lot of your experience?

The Security Culture Playbook


Yeah, I mean, I think, because we technically sit in the IT domain, we tend to look towards the technology piece a little bit more readily and faster than almost anything else. But then we forget, and a lot of people forget, the three things that we talk about as components to every security program, which are People, Process, and Technology.

We very quickly jump to the technology piece and we're very comforted by blinky lights.

We don't really like process. We rebel against that. And then we devalue people.


I think in my experience, the only time process really gets brought into the conversation is when you go through that annual audit and your auditor starts asking for samplings of your SOPs and evidence that you've actually been following process in, let's say, incident response or related fields. But it really kind of segues nicely into ... I know a lot of the work that you've been championing for the last, I don't know, five-plus years, is it fair to say, is around that people aspect of cybersecurity.

One of the questions just to kind of start things off is how would you define the human defense layer? Is it just the people or is there more to it?


You know, it really is ... so I think that there's a theoretical piece of this and then a practical piece of this. So we could talk about the human defense layer as a theoretical principle that an organization decides to take the mantle of, and the way that I think about this is if we're using the phrase "security culture" as a big piece of this as driving this human defense layer, then the big thing that I want people to understand is that even if you're not paying attention to what we call your "security culture", you still have a security culture.

You have people that have security concepts, security predefined beliefs, behaviors, value sets, and everything else, whether or not they're the ones that you actually want.

And so, the main thing that I try to get to in the book and in a lot of conversations is you have a security culture, whether you want one or not.

The thing that you have to do is you have to really start to get in there and understand and measure it so that you understand how good or bad that culture is. What do you need to do about that? And how do you build it to where you want it to go in a sustainable way?

And so for me, the human defense layer is really an extension of the OSI model where, you know, from a technology viewpoint, you start at data, you end at application, but we typically forget about the thing that's past the application, which is the human.

And so the human defense layer is really just a concept that says, oh, there is another thing that you should be securing and that you should also believe is a pivotal part of your security posture.



I really liked it when I saw that the name of the podcast that you came out with a little while ago is 8th Layer Insights. Now, me as a former network person, I got it immediately, but I don't think a lot of people necessarily will.

But to your point, you know, even a bad security culture is a security culture and we all live in one inside the organizations where we work day to day. I guess one of the other questions that really kind of comes to mind is what was the initial inspiration to start thinking about this? And I know you've been working on this for years.

Was there one pivotal event or one particular incident or organization that inspired you to really narrow focus onto the human layer?


So I've actually been focused on the human layer for, I think, well over a decade now. And the thing that really got me is, I had worked at a few really large organizations like Walmart and Alltel before the Verizon acquisition, and then at Gartner, really helping serve the largest companies in the world. And the thing that I saw is that CISOs and other leaders would readily spend lots and lots of money on the technology layers of the defense, and still the data breach problem happened.

And I saw over and over and over again that the thing that was being neglected was the people side. And then also, just for me, I was doing a lot of introspection because I'm a guy that lives on the Asperger's side of the autism spectrum. And so I was doing a lot of self-searching and saying, you know, How do people really work? How does society work? How do cultures work? What drives behavior? What makes people believe certain things?

And then if we're kind of pivoting back into the corporate world, how do I help people believe the things that I want them to believe, have the actions that I want them to have, to build the security posture that I want to build?

And all of that got into the awareness and behavior and culture types of conversations over and over and over again.


That's really interesting.

It's such an opposite mindset to, let's say, the one that I had in the organizations, the companies that I worked for, in the late 90s or early 2000s. If we thought about the user at all when we were designing our security programs, we really thought of it in terms of mitigating the user out of the, you know, out of the kill chain, so to speak. A little bit like zero trust, where you say, assume that you've been breached.

We assume that the user is frankly stupid. And so we put all these controls in place to kind of protect the user from themselves or protect the organization from the user.

I'm curious, you know, kind of when we fast forward 20 some years and we get to where we are today when I look at the landscape, I think the user has more access to data and more access to systems than they ever had in the past. How do you think about kind of managing that in the modern cyber landscape?



I think you hit on something really critical, which is, for a long time, our mindset has been, the user is the weakest link. The user is ... And I never thought about users as stupid, but I've definitely been around a lot of people that do, and then when you see the behavior patterns, you can also, you can certainly understand why somebody would make that assumption.

I typically think of users as just human, and they make versions of the same errors that everybody makes. Even the most, you know, technology- and security-centric type of people make really bad errors in other parts of their life, or expect people to do things that they never actually have implemented within their own lives on the security side too.

Let's say we get really frustrated with a user that continues to have bad password hygiene, or continues to fall for phishing. In the right circumstance, you or I, or anybody who is a security expert, will fall for a phish. If you're distracted, if you get hit with the right trigger in the right context, you and I will fall for it.

So we shouldn't get mad at somebody else for falling for that. We should just see that they're human.

Same thing with things like passwords. We're like, what, why are you using monkey123 as your password? Well, it's easy. It hits an algorithm in their heads. They're probably sharing it across several different things, but the reason that passwords become a problem is usually because we've set the user up to fail in that way. They've got way too many systems that they have to create passwords for, so they're going to default to something smaller and easier to remember. They're going to default to some kind of algorithm that they can maintain.

And then the other thing is, we may not have given them all the right tools to manage them well, which is why I'm really happy to see things like the password management market explode more and more to where you can create really good, strong passwords and remember them on behalf of the user.

So you're really working with human nature instead of against it. So I think, as you talked about, you know, 20 years on from when you and I first started getting into this, a lot of the conversation was around why humans were inherently flawed. What we started to realize is that humans are a paradox in a lot of ways.

They are both really, really weak and really, really strong. They're both really, really undependable and really, really dependable. We are people that are very creative, but we're also very destructive. And so we have to work with the paradox, I think.

And when we think about it: we build technology-based controls to relieve some of that tension, to let people fail safely. And I think it was Ira Winkler that said it this way the first time, so I'm going to give credit where I think credit goes for this: if the human is the weakest link in your chain, it's because you've not built the chain correctly.

The human should not be the one deciding factor on whether your company is breached or not. You can let the human fail, but there should be some other kind of defensive layer after the human. And so I think we have to work with all of that across people, process, and technology, and we have to build those things effectively.

And then we have to build around human behavior, but then also play to the strength, which is things like reporting, things like social pressures and, you know, building the right behaviors, and so on.



I really love the way you put that both in terms of kind of the human being dependable and undependable, but also kind of laying out that chain, because to me, when I thought about designing the security of the systems that I manage, or the applications that were in my live environment, I always thought about kind of layers of security, kind of building on what you said about password managers.

You think about kind of these systems that we design, as let's say, architects or builders. And we think about, Hey, we're going to put in place really strong password requirements. There's going to be complexity. There's going to be length. There's going to be rotation. There's going to be requirements around non-reuse, et cetera.

And yet to your point, we may not be equipping users with password managers to help them live their side of that. So I do think it is kind of a two-sided thing where you can't just set the bar so artificially high that a user is not set up for success as you said.

I think that's really, really well put, and I think it's something that we would all do well to remember in our day-to-day work, you know, designing systems or implementing cyber strategies.

That's really great. I really enjoyed your message there, Perry.


I appreciate that.


You had something from your book that jumped out at me that I wanted to kind of dive into speaking again about these layers. You talked about kind of security culture and security awareness.

When I think of security awareness, I tend to think of the kind of annual check-the-box exercise that a lot of organizations will put their users through.

I mean, obviously from my perspective, that's kind of, let's say, helpful slash necessary, but not nearly sufficient.




So, first question would be kind of: do you agree with that? And second, what would be some components that go along with it? And third is really, do you think that security awareness, that kind of annual exercise, does any good?


The annual exercise doesn't do a lot of good.




So the way I talk about it, and again, this is kind of going back decades now, is that security awareness, the way that it was defined, is wrong. You know, the definition is in the name. It's awareness, and it has a fundamental problem in the assumption. And the assumption is that if I give people the right information, they will naturally believe the right things and have the right actions and humans don't work that way.

Nobody works that way.

You and I know tons of things that are for our good that we don't act on every day. We don't make all the right health decisions, financial decisions, relationship decisions, or anything else despite knowing the best practices that are there.

So again, why should we expect an end-user to do that, when in every other aspect of our life, we don't do that unless we really, really care about something?

So I talk about a few things when it comes to this. There is what I call a knowledge-intention-behavior gap. Just because I know something doesn't mean that I intend to act on it. So there's a gap between knowledge and intention, but even if I know and intend to do the right thing, there's a gap between intention and action. And we have tons of years in each of our lives of failed New Year's resolutions to know that there's a gap between intention and action.

So out of that flow, what I call three realities of security awareness: number one is just because somebody is aware doesn't mean that they care. Number two is if we work against human nature, we will fail. And number three is what our employees do is way more important than what they know.

Because awareness, you know, the core name of that is all around information dissemination, and that doesn't really do a lot other than check a compliance box and say that I have exposed people to the right information. Which is a good thing. That's a duty as an employer: let me expose people to the right information. But I want to go beyond that, and I want to get into behavior shaping and then ultimately some kind of culture-shaping type of program.


But to that point, if we start to think about behavior shaping, this is where a lot of organizations get sucked into what I think of as kind of the false allure of let me just lock things down and prevent the user from any kind of misbehavior.




But that doesn't really work, does it?



What it does is it pushes people to find workarounds.

So in the same way that as we build better and better security layers on the technology side, we make things harder and harder to hack, what we essentially do is we make the attacker pivot to target the human, because that's the weaker piece of this. You know, they become the primary attack vector at that point. And naturally, unless they're prepared and they've got all the tools to be resilient, they're going to fall victim to that.

Same thing when you just lock everything down as much as possible: you frustrate people's lives, you make things take a little bit longer. You build resentment, and ultimately they're going to break something. They're going to find that workaround, whether that's, you know, taking work home and getting that done off-network or something else.



From my own background in the cloud space, I can say that this is a behavior that we observed very frequently. To the extent that the organization would kind of lock down access to the cloud, this is where we saw shadow IT pop up, down to the level of complete, you know, standalone cloud accounts.

We even saw a customer who had a relationship with AWS whereby every month Amazon would report back to the customer new accounts created under that customer's email domain. The developers finally understood this behavior, and so they started creating new accounts with their Gmail or their Hotmail or their Yahoo! addresses so they wouldn't get reported back to central.

You know, the CEO of the company I used to be with, Brian Johnson, always used the analogy of throwing a big rock into a little stream. It'll stop the stream for a little while, but eventually the water wears away at the edges and finds its way around.

So I totally get what you're saying there with finding workarounds. We as humans, we are wired that way.

And to your point, if we try to work against human behavior, it is bound to fail.

That's really valuable insight.


Well, and when, so ...


Go ahead.


Just to add to that real quick.

When you get into behavior shaping, it really is trying to understand all those types of things. You're trying to understand: what is somebody's natural behavior? What are they actually trying to accomplish? And then how do I build what's called a choice architecture in a way as to naturally suggest the behavior pattern that is going to be best for that person or that environment?

And so there's a lot of science that goes into that, but it's not just about locking everything down. That does make things harder, but as you already talked about, you can actually build and accidentally architect another big problem that you have to solve for at that point.


Yeah, for sure.

Sure. But I think natural behavior is one aspect of it. I tend to think about some of the organizations where our customers live and, you know, where these people work day to day. A lot of the time they have unnatural pressures put on them that strain them outside of their natural behaviors. I think, like you said, we'll all fall for a phishing attempt in the right context. That context could be a developer who's under pressure to deliver an application on time, or release the hottest new feature, or the thing that's going to win them 5% more market share, who knows?

You put these unnatural pressures on people, and a lot of the time it forces them out of their comfort zone and out of a lot of the, kind of, let's say, learnings that they might have from the past, their intentions and their natural behaviors.

Those go out the window, right?



And I think the pandemic, we saw that over and over and over, right? Because you have a situation where people are stressed. Maybe they're dealing with difficult home life. And now you exacerbate that whole thing with a fear of public health. You've got people that may have escaped either the office environment to home, or they escape their home life and went to the office.

And now you're merging all that together and you've got kids involved and everything else.




Everything has changed. Everything is way more stressful. Everything is way more fearful. Everything is at the top of somebody's mind, and their emotions are raw and they're primed. And we saw criminals take advantage of that big-time over and over and over again.


I mean, especially those first six months, pretty brutal in terms of the number of breaches. I mean, thankfully I think organizations had gotten a little bit better about, let's say, not putting all their crown jewels in one place. So you didn't have too many organizations that were breached to the extent of, let's say, a Colonial Pipeline or something like that, where really kind of the whole organization was owned.

I want to come back to kind of security culture for a second and get your view as to an organization that's starting their journey towards having a security culture. So we understand that that security awareness training, that annual exercise, is not really getting the job done for us. If we think about embracing a security culture, first question: would you roll that out top-down, bottom-up, or as some kind of collaborative effort?

Second question: You know, if the security awareness is not the right starting place, what would be the right starting place for that journey?


I could probably tackle both of those with, you know, one initial thought and then we could build out from there if we have time.

But the biggest thing that I would say is that if you're trying to make a step and you've never entered this journey before, then start with just tone setting, you know, saying the right things as executives and not really just saying the right things, but following through with your actions, because people will naturally model the leaders.

And then what you want to start doing is pushing that down.

You want to make sure that not only the CEO and the CIO and the executive suite are speaking about that and living that out, but that somebody's direct manager is also speaking that and not showing that they've got a different value system because people emulate the person at the top, but they're also going to even more naturally emulate the person that's right above them because that's the person that is most likely to affect their career in the long term.




So I think it's top-down, but it's also middle-out, and there are also some bottom-up pieces, in that you do hear people talk about security champions programs, which is all about building peer structures and peer support and peer pressure so that you can really kind of model an organization from a security perspective the same way that we model society around peer pressure.

And if you remember back to your high school and middle school days, you would talk the way that your friends talked, you would do many of the things that they did. You would even start to believe the things that they believed if they were different than the way that you initially thought. And so we know that the people that are around you are going to be the primary dictator of what you think, believe, and the way that you behave.


This is such a great point, and it's so contrary to what we see in a lot of organizations. I can't tell you the number of times I've walked into a customer or spoken with a customer and they say, well, it's like this for everybody, except the CEO, who likes to get stuff the way it used to be on his BlackBerry or something like that. And nowadays it's, oh, you have to share the document with the CEO's Gmail because they can't open it with their corporate Google Workspace account, because it has multi-factor and they can't be bothered with multi-factor on it, or who knows what.

But I think that is such a crucial insight into, you know, this really is kind of an organizational thing, and people do emulate the leaders that are above them. I mean, to your point, the one step above you is the person who controls your annual performance review and your career opportunities, so that's a great answer.


There's another thing that comes up in that too.

So if we go to kind of the world that I live in, which is the simulated phishing world, there are always people that say, All right, let's make our company policy three strikes and you're out. If you fall for three phishing attempts that we've done in simulations, we're going to fire you.

And I argue against those a lot, because I ask the question of anybody that proposes that to me, I say, are you going to hold your CEO accountable to that? And they're like, right, we couldn't do that. Well, if you can't apply a policy at the top level that you're expecting everybody else to follow, then it's a broken policy by design.

And so you need to figure out something that you can apply holistically when it is those kinds of security behaviors.


Yeah. That's such a great point.

Thinking about this evolution or this journey, if you will, where we start and we say, Hey, we, as an organization, we want to embrace a security culture. So we have some top-down modeling, we have some kind of walk the talk type of activities. We demonstrate that. We write up our policies, et cetera.

How do we measure our progress along the way? Is there a maturity model that we should be thinking about?


Oh man. It's like you looked at the book!

There is.

In the book we talk about a model that an associate of mine and I made together, called the Security Culture Maturity Model. And what we do is we break security culture maturity across five different levels that start basically really, really nascent, move into generalized awareness and kind of the check-the-box compliance, get all the way up into where people are starting to do the right things with behavior shaping, and then into the final two phases, which are different versions of very intentional behavior shaping and culture shaping, all the way towards sustainable culture.

And as you can imagine with the state of the industry right now, there are very few organizations that are in those last two most mature buckets. Most everybody is kind of at level three, where they've just started a behavior shaping journey, primarily around phishing, maybe passwords and things like that.

But we really see room for, really, the world to move into level four and level five. And we know it's possible because we've actually measured organizations that are doing that successfully.

The thing that sets our model apart from, I think, every other model that's kind of built on this continuous improvement type of framework or capability maturity model is that it's very data-driven and it's evidence-backed. As we were building the model out, we had access to the security culture information and behavior insights of the over 40,000 customers that we were looking at.


Oh, wow.


So millions and millions of actual users and billions and billions of rows of data that we were looking at and taking analytics on. And so that really sets it apart.

So when we say that the maturity of the world when it comes to security right now is about a level three, that's because the data shows that. The other thing that we do is that, from a security culture standpoint, we measure culture across seven different dimensions, and the dimensions are things like attitudes, norms, communication capabilities, and so on.

So we go across all of those, and you can be not so good in one but really good in another. Like, you might be really bad at attitudes but have really decent behaviors. And you know, that's fine if your overall score is moving you to where you want it to be. But if it's not, it also means that you know, at a very granular level, the one or two things that you can focus on to start moving the ball a lot. And you're not overwhelmed saying, I need to focus on all seven of those, because we also give some data and some insights that show that if you focus on any one of those, they have a gravitational effect on the others.



And I guess though, to that point, you know, as you said, your actions are kind of more important than your intentions. So as you say, if your attitudes are bad, but your behaviors are good, that may be problematic, let's say from an overall organizational collaboration and teamwork perspective, but maybe less so from a security perspective.

But it brings me to a question that I've observed in a lot of organizations and I imagine you've seen the same.

You go into a lot of companies and you ask people, hey, what's the relationship like between you, the workers, and security? And you'll hear variations on, oh, security is the people who say no. Security is the people who yell at us. Security is the people who kind of berate us. I guess my question would be, if we know that out of the technology, process, and people elements, people is the one that we're focused on, what's the message that you would want to give to cybersecurity practitioners about doing better in terms of their interactions with the people that they work with?



So I really am a big advocate for just having basic human empathy toward your end-user population.

Again, it's not that they're stupid. That's not the reason that they're clicking on things or that they're reusing passwords. It's because they are human. And they have all the same frailties and behavior patterns that you and I would very innately have. And they only care about the things that they've really been shown to care about, that are going to have big impacts on the way that they do their work and the way that they live their life.

And so that means if you're wanting to change a behavior, or if you're wanting to instill a value, you really have to start at a very primal level, about, you know, what makes that behavior actually work within somebody, or what makes somebody believe the thing that I need them to believe at a core level, so that they will start to take that on as a value and potentially be able to express that out as a behavior.

But you know, the good news is, even if I can't instill a belief or a value system, I can set up the behavior architecture to pretty much funnel somebody into the behavior that I want. But best of all is to have somebody really codify a belief into a value system, express that out to others, build peer pressure and social structures around that, and then naturally encourage people to do the behavior that I want, because that then starts to become very self-sustaining over time.



Yeah, that's really interesting.

I mean, there's been a ton of podcasts recently that I've listened to, on programs like Planet Money, where they talk about these kinds of nudge influences and these social experiments with getting people towards certain behaviors. And I can imagine a whole wave of security programs coming around that will make those 2% incremental improvements.

But I really like your message around empathy. And I think it's so important. I myself have spent, you know, kind of the last six years working on the dark side, which is to say primarily in sales and one of the things with my sales engineering teams that I like to really try to help them remember is have some empathy for the customer.




Just because we come in with a particular biased point of view and remember we're always biased towards the products or the companies that we're representing, doesn't mean that our customers that we're talking to (a) understand that, or (b) are going to embrace that right away. They have a day job. They have a set of goals and objectives that they're trying to drive towards. It's our job to kind of understand that and then help guide them towards getting answers to their questions as to whether this is the right thing that will be helpful for them or not.

And I really think that empathy is something missing from a lot of us in the cybersecurity world, to be perfectly honest about it.


And you hit on one of the big things. So if we were to kind of pivot over into the sales world and say, let's say if you're a CISO, one of the things you're trying to do is you're trying to sell your end-user population on certain behaviors or certain values and shared beliefs and things like that.

People will buy from, and this is a standard thing that you hear in sales all the time, people will buy from other people that they know, like, and trust. And so what that's talking about is you need to build a relationship. We would say you build a relationship based on empathy, or at least understanding. And so what that also means is that, as a security person, you are recognizing when things are going right for people. You're rewarding them in a way that doesn't feel greasy or valueless in some way.

Because reward without relationship feels repulsive to people. And so you build a relationship, you reward, you hold people up, you show the good things that are happening. And then ultimately if you're building the relationship that you want from a security perspective, you're going to have people coming to you with conversations that you never anticipated about things that they're seeing in the organization that are wrong with security or systems that aren't architected in such a way as to encourage this secure behavior or things that can be done to help make self-reporting easier, reporting of suspected events easier.

And so that's what comes with a really good security culture is you built a relationship. That relationship then starts to cycle back and allow people to become a very proactive part of that defensive posture, which then circles back into that whole human defense layer type of concept.


Absolutely. It's such a great point.

Well, Perry, this has been a fascinating conversation. I've really enjoyed it.

I've got only one final question or maybe it'll evolve into two.

I heard that you actually didn't start your studies in the technical domain, or not in cybersecurity specifically. I think I heard you mention somewhere that you actually started studying theology, Greek, and ancient Hebrew.

Is that right?



So for my undergraduate degrees, I have a degree in philosophy and I have a degree in Biblical languages, and I was, at the college that I went to, the first person to actually get a Hebrew degree. But I took Greek and Hebrew and a few other interesting things around archeology and linguistics, and then also got a degree in philosophy.

But then I didn't know what I wanted to do after that. I thought about linguistics, and I had a professor say, well, you're going to wait for somebody to die to get a job if you take that. That didn't sound appealing.

So I went to law school for a couple of years, again, because I just didn't know what to do, and then realized that I would not be a great lawyer. And so I pivoted into one of the things that was my first love, which was computer science and programming, and then got into security, as so many do, from that path.


Yeah, I don't know if you caught my reaction there. I smiled because my Bachelor's is also in linguistics. Computational linguistics, in my case. But I went through that whole evaluation process where I seriously considered the Master's and PhD path, and I kind of came to a similar conclusion as you, which was, (a) I don't know if this is what I want to do, and (b) boy, the opportunities are going to be pretty limited.

I remember specifically during my years, West Greenlandic was kind of all the rage in the linguistics community at the time and I just kept thinking to myself, boy, if that's the level that we've reached, I'm not sure I can muster the energy and enthusiasm to go deep into that.

So I'm curious for you. So you went into computers but were there things that you learned from that initial background in philosophy and related arts that you've brought into computer science and into cybersecurity and what are those?


I don't know that I could enumerate them, but I definitely bring a mindset in that I see reflected by other people that have similar backgrounds, which is the ability to pivot around problems a little bit differently or more holistically, having that more philosophical bent, or being able to tear things down. I think having a linguistics background also helps with, you know, the basics of understanding programming languages. I think having the partial legal background that I have has served me, you know, really well as I started thinking about regulations and laws and how to read contracts and deal with vendors.

And so a lot of that I think comes through. And then if you listen to my podcast, you can tell that I kind of philosophize about everything. So there's always kind of a philosophical or an ethical type of way that I think about things.

And then also, just seeing everything as a human problem, and I mean problem in a good way: the entire reason we do everything that we do in society or business is ultimately to serve humans in some way. So every bit of technology serves a function that's meant to further humanity. Every part of business serves a function that's ultimately to further humanity in some way.

And so once we realize that everything comes down to humans, no matter what, then all of a sudden we start to realize that, oh yeah, you know what, humans actually matter as part of this equation.


Yeah, absolutely. Absolutely.

Well, I think we will leave it on that note. Humans actually matter as a piece of this equation and I think that's a great message for our audience to take away.

Perry Carpenter, it's been an absolute pleasure talking to you. Thank you so much for taking the time to join us on the Ask A CISO podcast today.


Yeah. Thank you.

Jeremy Snyder

Jeremy serves on the Horangi advisory board. He has over 20 years of experience in IT and cybersecurity, with deep industry exposure in the M&A space. His previous employers include Amazon Web Services, DivvyCloud, and Rapid7. Jeremy has lived in 5 countries and speaks several languages. He is currently the Founder and CEO of, a leader in API security.
