Jon-Anthoney de Boer is the Product Security Lead at Transmax, overseeing security for critical infrastructure that manages traffic flow across Australia. Coming from a strong software engineering background, Jon-Anthoney shares his experience transitioning from traditional engineering into product and application security. He highlights the importance of aligning software engineering and security teams, building trust into the software development lifecycle, and fostering a security culture based on practical strategy rather than superficial metrics. Jon-Anthoney also discusses how behavioural change, organisational alignment, and operational excellence are key to achieving effective, sustainable security outcomes.
Cole Cornford
Hi, I’m Cole Cornford and this is Secured, the podcast that dives deep into application security. Today I brought in Jon-Anthoney de Boer. JA was previously a software engineer who has since moved into being the product security lead for Transmax, which is one of those companies that people don’t know about, but does something really important: basically managing the flow of traffic using traffic lights around the country. And JA is in charge of managing the security of those products.
So I really like JA because he brings a lot of rigor from 10 to 20 years of electrical engineering and software engineering and then takes that approach into cybersecurity. I know that our conversations are quite focused on the nitty-gritty — SLSA, SDLC reviews, how we’re going to get the DevOps machine, or I would say more broadly the software engineering machine, to define and create features and then make sure that there’s a definition of done and we have the right checks and balances throughout the SDLC.
There’s a lot of going really high as well as going quite deep, and I think you’ll appreciate all the different perspectives he brings from his 20 years of SWE before moving into product security. I guess probably the funniest thing from this episode is that for those 20 years he had in SWE, he was always terrified of computer hackers and all the AppSec people and so on, because he was like, I don’t understand how someone could be a savant and just be super awesome at all this hacking and stuff; they have to really, really deeply technically understand things. But often when you pull back the curtain, it’s a little bit of a different story. So if you want to learn more about that experience, tune in. I think you’ll love this episode.
We’re here today everybody with Jon-Anthoney de Boer, who is the product security lead at Transmax. How are you doing today, mate?
Jon-Anthoney de Boer
Yeah, great, thanks Cole. Super psyched to be here. First time on a podcast. Very happy.
Cole Cornford
I try to get a lot of people who haven’t given it a go yet, and it’s just so fun to sit down and have a conversation about something that you’re really excited and passionate about, right.
Jon-Anthoney de Boer
Oh, fantastic. Yeah, I’m looking forward to it.
Cole Cornford
Yep. So I thought it would be good for my listeners to understand a bit about your background and history and how you ended up becoming a product security lead. At least I think you started out just doing software engineering and mobile software development for a long time. Is that where you began?
Jon-Anthoney de Boer
Yeah, so I’m definitely heavy on the engineering side, or heavier on the engineering side than security. I came to security after what I would say is a whole career in software engineering, and I’ve really valued the way I started. I started at an electrical engineering company when I was still a late teenager, just doing IT support for fun, and I stayed with them the whole way through uni, so I worked with them and grew with them, and the value that I got out of that was the breadth of exposure across technology stacks. So this company worked at the PLC level — they were writing what’s almost like assembly logic and circuit logic for electronics — and then they would layer up with the management info systems on top of that, and then we would layer up again with remote PDA-type software interacting with those systems and whatever.
And I just really found that the breadth of software they were working on was really good in my formative years in the career. So I stayed with them for many years and then went over to China for a little while and worked for an offshore development center over there, as kind of the English-speaking conduit between the Chinese workforce and the Western customer base. And again, the breadth of software that was being developed and delivered by those teams was pretty interesting. And then I started with Telstra after that and stayed there for about 17 years all up. And of course, within Telstra — it’s such a large organization, and it’s so intrinsic to digital society for Australia — I played roles in terms of back-end development of APIs, distributed systems, electronic couch potato systems, front-end; the afl.com.au web video player was one of those interesting projects; became an SME for video management, moved over to mobiles — just played a whole bunch of different roles.
And the breadth of experience is what I really valued across all of the roles that I’ve had in engineering. But it got to the point where I thought, okay, well, what’s next? I’ve worn many different hats, led different teams, different tech stacks, and something that really started to appeal to me a little while back was cyber security. So it’s one thing as a maker — I used to consider myself a craftsperson of software. You flow between the different stacks and you learn the different syntax, but essentially you’re a craftsperson for automations. And it used to just fascinate me that people could take software that was designed to do a thing and get it to do completely different things, in a world that I thought was very on and off, one and zero, very hard yes or no answers. And so I started getting into that and started learning about how software could be made to do interesting things.
Cole Cornford
And then after that, you’ve just been staying in the product security and application security area for the last — how many years would you say? Four or five, six years?
Jon-Anthoney de Boer
Six or seven, I think. The latest six or seven years, it’s been about how do we secure the machines that we build, and then how do we secure the machine that builds the machine. So definitely security from a DevOps perspective, DevSecOps, but also with an interest in penetration testing and some of those sorts of red-team activities as well.
Cole Cornford
Yeah, you got to have an interest in how to break stuff as well. For me, I find that the best AppSec people come from software engineering backgrounds almost always, but you’ll also probably laugh at me and I’ll say political staffer slash diplomat simply because-
Jon-Anthoney de Boer
They’re victims of those.
Cole Cornford
… well, they come out of it and they’re like, look, you know what? I’m sick of dealing with international relations and people that suck and constituents and all of that jazz, so I’m going to go join a big enterprise. I’m going to deal with different political parties internally, and constituents who don’t want to do software stuff, and convince them about my messaging and narratives and marketing. So it’s those kinds of groups, or the software engineers who say, oh, cool, so this isn’t working — I may as well build a fix to solve the problem. Fix it now. Those are the two types of groups I find do really, really well in the AppSec sphere. So I feel like I’m a little bit of a mix of both. Not that I campaign as a pollie, but I think that I’ve done enough public speaking to have people think I was one.
Jon-Anthoney de Boer
I do think you’re bang on in that point around diplomacy. So one of the observations that I would make over time is that software engineering and the security team — the team that, at least sensibly, is tasked with safeguarding software engineering — come from very different worlds. And so the diplomacy involved in trying to align those, or herd those cats to coordinate so that they’re working toward a common goal that is feasible and achievable and meaningful to both groups — I don’t think that should be understated. So your point about political staffers seeing a different way to use their skills and experience, yeah, that resonates with me.
Cole Cornford
I just don’t want to only get people from the same background. And I find, overwhelmingly, we have so many breakers who want to move into AppSec, and then when I say, “I know you’ve got an OSCP and you’re CREST certified and you’re good at hacking and all of that jazz, but how do you fix the problem?” often they’ll be like, “Oh, SQL injection, it’s easy. Just use parameterized queries, great.” And you’re like, “Oh, awesome. We’ll just tell them to not use the ORM that they’re using at the moment, because they don’t actually write direct SQL at this point, they’re using… So how is this even occurring if they’ve got an ORM in the first place?” And they’re like, “Boom, don’t know, don’t understand this.”
And that’s why I think that SWEs moving into AppSec makes a lot of sense, because then they can look at it and say, oh, well, I know SQLAlchemy, this is how we made this mistake. We used raw SQL here, so let’s fix that up; everywhere else, the ORM takes care of it. But a lot of pen testers will be like, duh, parameterized queries. Clearly that’s the advice that’s been around for a long time, you developers are dumb-dumbs.
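To make that failure mode concrete — the ORM binds parameters everywhere except the one raw-SQL escape hatch — here's a minimal sketch. It uses Python's stdlib sqlite3 rather than SQLAlchemy so it's self-contained; the table, function names, and payload are illustrative, not from any real codebase discussed in the episode.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # The raw-SQL escape hatch: string formatting puts attacker-controlled
    # input directly into the statement text.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver binds the value, so quotes in the
    # input are treated as data, not SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns every row: [(1,), (2,)]
print(find_user_safe(conn, payload))    # returns no rows: []
```

The fix Cole describes is exactly the second shape: find the handful of places where raw statements are built by hand, convert them to bound parameters, and let the ORM handle the rest.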
Jon-Anthoney de Boer
A hundred percent. And I would say that I’ve practically experienced the kind of attitudes that you were just loosely describing. One of the things that I like to ask if I’m engaging a vendor, particularly for a pen test or any artifact — it could be an inspection, it could be an assessment — is that just as important as identifying the control gaps is the recommendation, the advice to the consumer of that report or assessment, on what to do about it. So what does good look like? And I think that’s very much a front-and-center thing. I have been in that world where you would receive a pen test report and there would be an issue and you’d say, okay, well, what do I do about it? And it’s just crickets. So this is, again, security and software engineering working together to solve the problem.
Cole Cornford
I know that’s something that you are quite a big advocate of is that instead of us chucking things over the fence, we need to collaborate a lot more. But I also think that, as someone who’s done a lot of software engineering in the past, you think that there’s a lot more power in line one effectively managing these risks in advance. What do we do to help software engineers be more accountable and actually proactively deal with more of these issues rather than us having to have AppSec teams or security professionals tell them to solve stuff?
Jon-Anthoney de Boer
Well, I mean, first and foremost for me, I believe in securing the machine that builds the machine, and by that I mean the SDLC. Now, inclusive with that will be incentives on solving problems that you become aware of. So you become aware of a thing — usually it’s a tool injected into a process that raises alerts about potentially significant things you might need to look at. But if the incentives on those software engineering teams are not aligned with security’s incentives, then what you’ll see is things will be risk-accepted or simply ignored. And that’s not to disparage any one team; that’s an organizational failure. So the organization needs to ensure alignment of incentives. If you are telling software engineering teams, effectively, the number one priority for you is business delivery — feature delivery, bug fixes for functional features — and that is the only thing that you are actually measured on in reality, beyond goodwill and nice statements, then of course those teams, especially where resource pressure is concerned, are going to focus on hitting the things they’re being marked on. And that’s the entire chain — scrum masters and product owners and whatever.
If they’re being marked only on business outcomes and those business outcomes are primarily dealing with feature delivery and hitting delivery milestones, they will do that. The security team might be tasked with changing the game. So you’ve got to demonstrate a 5% improvement in something, it might be a reduction in vulnerabilities that you’re aware of or whatever. And so you’re clashing because you’re saying, well, we need to be moving this needle, but they’re saying, but I’ve got to get delivery out. And so you won’t hit your mark. So I think first and foremost, you’ve got to have aligned incentives and aligned objectives for the organization so that you’re not reaching that conflict.
The second thing I would say is there’s this great Harvard Business Review talk that you can just find on YouTube — it’s called A Plan Is Not a Strategy — and in security I think we’ve been particularly guilty of this. Basically, from a separation of concerns between software engineering and security, we’ll come up with plans that we can execute on. They’re in our control, we can budget for them, to improve security. So that’s the high-level task. And it might be that we will buy a tool and we will integrate it. Happy days. But if we don’t have the right strategy behind that plan, or that series of plans, then there’s no real guarantee that we’re actually achieving anything in the end.
So I would call on security as well to say, well, we might have these great ideas on how we can improve things, but what we need is a coherent strategy to make sure that everyone agrees that we’re taking the right approach, that we’re putting together plans that support delivery of that strategy and don’t contradict other people’s plans. And so there’s a lot of coordination that I think needs to occur. It’s not just pure signals, it’s about incentives and organizational momentum in the right direction.
Cole Cornford
So, unpacking that, there are two things that really jump out at me. Firstly — have you heard of Goodhart’s Law? I think it’s Goodhart’s Law.
Jon-Anthoney de Boer
I don’t think so.
Cole Cornford
So it’s: when a measure becomes a target, it ceases to be a good measure. And the idea is — just to give you a simple example from Australia — we have this constantly increasing tax rate on cigarettes, and over the last couple of years it’s been really good, and now we’ve hit the point where the tax is so punitive that people are actually committing crimes instead of paying the tax, and it’s actually leading to a higher prevalence of smoking, which is not the intended outcome. The intended outcome is to wean people off smoking. So the politicians are kind of looking at that and saying, okay, well, maybe that’s the terminal point now, rather than keep increasing from there.
And we see the same with how people use metrics for software security a lot of the time as well. It’s why I’m not really a fan of things like zero criticals or mean time to resolution and so on, because, one, if you go zero criticals, the easiest way is to just mark everything as a false positive, or --exclude the files that have problems in them, or put suppression comments in there. Or — I think my favorite I saw in a manual code review once was someone changed all the instances of password to arseword, and the static analysis tool was like, oh, well, I guess that’s not a hard-coded credential. It’s like, great, okay, I guess it’s not, but obviously, to my very good eyes, I could still tell that it is clearly a password, so we can’t trust the output of these tools.
But also it goes back to incentives. So when we say that we have a KPI, and the KPI is that we’re not allowed to have this many vulnerabilities being introduced into production — if we’re setting that as the KPI, then there are many ways to achieve it. Some of those ways are in line with what we intend, and other ways will achieve the KPI but not necessarily do what we want them to do. Another example, to go outside of AppSec entirely, is sales, right? You might say you want people to do a hundred calls a week. Yeah, they can make a hundred calls a week; that doesn’t mean they need to talk to anybody or actually lead to any sales outcomes. You just pick up the phone and randomly dial people. I don’t know if that’s ever going to be particularly helpful or grow your business or whatever, but at least you’re making those hundred calls a week.
And that’s why a lot of these businesses have moved back to OKRs rather than KPIs, because you can then ask, well, you’ve got these key results, you have an objective, and if what you’re doing is not in line with the objective, then you’re clearly not doing the right thing. And I’ve seen that a lot in the AppSec sphere. The other thing was about strategy. So there’s a book I like — have you read Good Strategy, Bad Strategy before?
So I can’t remember the author’s name, but one of the things he says is that basically everyone sucks at strategy, and they all say wishy-washy things like vision, mission, we’re going to be the best, our strategy is make more money — which is not particularly helpful. And he always brings it back to: you need to spend a lot of time diagnosing what the real problem is, at a high enough level that you can actually get as many different perspectives and views as possible, before you ever go ahead and say, well, now that we’ve got a diagnosis of what we need to solve, we’ll create an operational plan to work through it, with some guiding principles in case the plan doesn’t go as we think.
But in my experience, when cybersecurity tends to make strategies, it’s usually in the context of cybersecurity, not in the context of the business. And that means that your software engineering teams are excluded from the conversation, and then you end up with two different business unit strategies butting heads against each other constantly — both of them achieving their goals, but also failing the business overall.
Jon-Anthoney de Boer
I would completely agree with that. So it’s the siloed nature of teams, and once again we need to have software engineering and the security teams working together and singing from the same songbook. I do like your — I think it was Goodhart’s Law — it resonates with me. There was this old Register article or something where they talked about the green bar of Shangri-La. Are you familiar with that?
Cole Cornford
No, I would love to. Tell me more. Is this a bar that you drink at? Because I’d be keen for it.
Jon-Anthoney de Boer
No, no, no. But back in, I think, the mid-two-thousands, there was this idea: in your IDE you’d run your unit tests, and the bar would be red if anything failed and green if everything passed. And the green bar of Shangri-La is that green bar where your tests pass. And of course, what you mentioned about commenting out troublesome files and things — if you’ve got this metric of 80% unit test coverage of all code, just 80% coverage, but you are actually pulling in more open source framework code that has so many lines that it’s unwieldy to have enough unit tests to cover it, then what you end up doing is you get these contrived tests to hit that 80% green bar of Shangri-La, or you just comment out all your troublesome stuff.
And that might extend to things like, oh, what about my symmetric crypto? I’m not really sure what I’m doing there, so I’ll just comment that out, exclude it from the calculation and suddenly we’re not even testing it. And it comes down to who’s even looking?
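A contrived coverage test of the kind JA describes can be sketched in a few lines. The toy function and test names below are illustrative only — the first "test" executes every line of the function (so a coverage tool counts those lines as covered) without asserting anything, while the second shows what a meaningful check looks like.

```python
def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for 'my symmetric crypto': XOR each byte with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def test_xor_obfuscate_runs():
    # Hits 100% of xor_obfuscate's lines, so the green bar stays green...
    xor_obfuscate(b"secret", b"k")
    # ...but there is no assertion, so a broken implementation would still pass.

def test_xor_obfuscate_roundtrip():
    # A meaningful test: applying the same key twice must return the
    # original plaintext, because XOR is its own inverse.
    assert xor_obfuscate(xor_obfuscate(b"secret", b"k"), b"k") == b"secret"
```

Both tests push the coverage number up identically; only the second one would ever go red. That gap between "covered" and "checked" is exactly why the 80% metric can be gamed.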
Cole Cornford
I remember, that’s so true. True.
Jon-Anthoney de Boer
Yeah. So, no, it’s really interesting. But the point about aligned strategies — I think that is a key message I’d like people to take away from this podcast. You made the point about: I’ve got a strategy over here in team X, and we’ve got plans to execute on that strategy, and we’ve got metrics to demonstrate our success in achieving that plan. And same in team Y. But overall, for the organization, we’re not actually changing the game unless we coordinate.
Cole Cornford
And I know that a lot of AppSec teams tend to think in terms of coverage and capability, which means that they’ve completely missed the whole point of diagnosing what the actual problems they need to solve for are. Because maybe you’re saying, oh, we need a WAF. Why do we need a WAF? Do we actually need a WAF, or do we need an SCA tool? Do we need dev training? Well, what’s the problem that we’re trying to resolve using these kinds of things? And then you could probably go up a few rungs and say, oh, okay, so the problem is our velocity: all of our customers require us to provide assurance, otherwise they won’t do business with us.
But the way that we’re doing assurance is we’re using third-party penetration testers, and they’re just booked out six months in advance. So we’ve got no real way to get those conversations happening and remove some blockers from the sales conversation. So what do we need to do? Invest in something that can give customers a level of assurance in a very quick timeframe. And that’s a very different approach from saying, we’re missing SAST, we’re missing DAST, we’re missing a WAF — which is where most AppSec people I speak with start, going, yep, here are my favorite products, let’s get that going.
Jon-Anthoney de Boer
Yeah, I know this tool, I know how it works, I know how to integrate it. We should do this thing, right? Yeah.
Cole Cornford
I love Fortify. I see Fortify everywhere. It’s my favorite.
Jon-Anthoney de Boer
For me — so I really look at the current industry trend of saying, okay, well, now we’ve got all these things. We executed on all these plans, we were successful in getting them integrated, and we’re now seeing signals, but we’re still not really meaningfully changing the game. We need another tool now. We’ve got to layer on top of that; we’ve got to have automatic inspection of the SDLC, the source code, the output of these tools, so that we can give the teams the signal that they need to focus on instead of all these other signals we’re generating. And I do think, yeah, step back a little bit, think about your strategy — why it is that you are going to win as an organization, and win in a safe and secure-enough sort of way. Work with security, work with those teams and say, okay, instead of focusing on that green bar of Shangri-La metric, what’s something I can do?
Okay, the primary, crown-jewel source code projects — are they known? Do we have an asset inventory of the highest-priority things that everyone agrees on? Okay, great. Do we have a single source of capability in the SDLC, so that we’re simplifying and reducing complexity? Because you can’t secure infinite complexity, right? If we have teams that are enabling and disabling signals, would we know about it? What can we do to become aware of someone turning off secret push protection in GitLab, or something like that?
So how do we know that that sort of thing is occurring or how do we know that the engineering teams are on board and engaged and complying with our policies and expectations? And if we get those behavioral aspects right, then security can start to have a focal point and work with software engineering to say, okay, now that we’ve got this single source of capability for a container registry, this is the one we use for our organization, we can start to inject security controls in there and we’ll get that force multiplier effect outcome because all the images we manage will be subjected to the same security protections and we’ll start to reduce our OPEX burden and we’ll start to reduce other complexities down the way.
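JA's "would we know about it?" question can be approached as a simple policy-drift check over your repositories' settings. The sketch below does not call the real GitLab API — it assumes each project's settings have already been exported into a dict, and the field name `secret_push_protection_enabled` is hypothetical, stand-in data for whatever your platform actually exposes.

```python
def find_drifted_projects(projects: list[dict]) -> list[str]:
    """Return names of projects whose guardrail settings differ from policy."""
    # Policy: every project must keep secret push protection on.
    # (Field name is illustrative, not a real API attribute.)
    policy = {"secret_push_protection_enabled": True}
    drifted = []
    for project in projects:
        for setting, required in policy.items():
            if project.get(setting) != required:
                drifted.append(project["name"])
                break
    return drifted

# Hypothetical inventory export: one dict per source code project.
projects = [
    {"name": "crown-jewels-api", "secret_push_protection_enabled": True},
    {"name": "legacy-portal", "secret_push_protection_enabled": False},
]
print(find_drifted_projects(projects))  # ['legacy-portal']
```

Run on a schedule against the asset inventory JA describes, a check like this turns "who's even looking?" into an alert: a team quietly disabling a guardrail shows up as drift instead of going unnoticed.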
Cole Cornford
And that’s why I really like taking something that is quite valuable for software engineering to do in and of itself, but that has, as a side effect, good security benefits. Because when you aim for that — I always go to AppSec people and say, the cyber budget is 1% of the engineering budget, and AppSec within cyber is another 1%. So why don’t you go and talk to the engineering leaders and say, one of the things that you’re really getting stuck on is that you just spend heaps of time troubleshooting. Have you considered maybe having environment parity? Because then you won’t have bugs that only appear in production; they’ll appear in dev and test as well. And if the teams have the same kind of infrastructure and the same kind of apps running in dev, test and prod, then the security benefit is that if you know it runs in dev, it should run in prod, and that allows you to release software significantly more frequently.
And what are the benefits of that? If something goes wrong, you can roll back really quickly. If a system goes offline, you can spin it up really quickly. If you need to introduce a security mechanism — say an API security product, a DAST scanner or whatever — because the release cycle is so fast now, they’re willing to suffer that pain. Whereas if there’s a difference between dev, test and prod, then the DAST tool or the SCA tool or SAST tool that runs in each of these environments will have different outputs and frustrate engineers. There’s nothing like an engineer coming back and saying, we’ve got 20 Snyk high findings and these are all to do with dev dependencies that are only on the Docker container that builds the running production container. So how do we mark those as false positives?
Jon-Anthoney de Boer
Well, they’re not false positives, they’re true positives, they’re just not relevant to the production deployable component.
Cole Cornford
Yeah, true.
Jon-Anthoney de Boer
So you’ve got to be able to understand that and articulate that difference. But I think, from what I’m hearing in what you were just describing, it comes down to building trust in your SDLC. So if you can simplify and harden and secure the various components that build this distributed system that is the SDLC, you can start to have assurance that you’ll be able to get the release out on time and security is not the blocker. And if you find a security issue, you’ll be able to work it, and you’ll be able to either remediate or avoid, or whatever it happens to be, and it’s not going to block the business unsustainably.
I think sometimes there’s a little bit of hesitance from, say, the business side of the house, saying, well, we’ve got to hit our contract. The business does not exist to serve security. Security is a servant of the business, there to help the business succeed. And so if we have to choose between not being viable because we can’t hit our delivery requirements, and being secure, you’re usually going to say, well, let’s hit our delivery requirements, we’ll risk-accept for the moment, we’ll address it later. And so we want to avoid that friction. Building trust in that machine that builds the machine will help the business sustainably secure whatever it is that they’re building. It’s certainly something that appeals to me based on my experience.
Cole Cornford
So based on your experience, if you were to build a secure SDLC, where would you start? Because everyone I know in application security tends to focus almost always on shifting to the left — obviously you can go design and then verification and CI — and I see significantly less focus on what’s running in production, even though you can get some really good signals there to actually influence the rest of the program. But whereabouts do you think is the best bang for buck for devs and for business productivity?
Jon-Anthoney de Boer
Gosh — so it’s probably going to differ between greenfields and brownfields, anything legacy. If you’ve got a bunch of stock vulnerabilities and you’re not mutating that software very often, so there’s not much in the flow space, then you’re probably going to go ADR, IDR — you’re going to be focusing in there. And if you’re starting from scratch, you’re probably going to start on the machine. You’ll start with the flow end of the spectrum, maybe shifting left, or at least injecting into that pathway to production. The reality is you have to burn that candle at both ends, right? You need to secure the machine, you need to have visibility in runtime, but it depends on resourcing and budget pressures and whatever. Where I would start is: build observability. Start from the basics. What are the source code repos that I have to protect? Because if I don’t know what I’m protecting and what leads to the runtime, how can I protect it? I won’t even know.
So start with the source code. Then look at the pathway for that source to production. Do we know it? Are there multiple systems in play? Are all the build steps scripted and reproducible? Is it committed to source control? What about the SDLC itself? Who builds that? Where is it managed or how is it managed and mutated? And so I’d start with observability. You’ve got to know what it is you’re protecting to be able to have any chance of understanding how well protected it is and who does what in the event of an incident and whatever. But thereafter, it becomes a judgment call, to be honest. So if I’m working on a product as a product security person, if my product is not really in production yet, if we’re still in our seed funding days or whatever, then I would be focusing on that flow end of that spectrum.
If I’ve come into a company that’s well established and they’ve got a mature product, then I’d be looking at the stock end of the spectrum. But the reality is you’ve got to do both, and I think you start with understanding what it is that you’re protecting. So I’d be looking at observability. I’d also be looking at, okay, what’s the culture in the organization — from engineering as much as anything. Do we have developers that are engaged? Do we have teams that push back on security? Do we have security teams that understand the product that they’re attempting to secure? So try to look at the culture and the behavioral aspects of how software engineering is actually done in that organization. Because if you’ve got, say, a great SDLC, but teams are pumped to deliver and so they take shortcuts and you get shadow IT and this, that and the other, then all your efforts to focus on securing that machine you know about are potentially useful, potentially not — who knows?
So you might need to change those behaviors and build a case for, okay, well, we need to not have shadow IT, and we need to all consolidate for the good of the organization. And psychological acceptability, actually, if I can put it in those terms — that I’d have to say is one of the key things to start with in any security campaign for any organization. Because you can come up with all manner of justifications for things you want to do and want to restrict — usually in security for good reason — but the people at the heart of the organization make decisions about technical dependencies, they make decisions about risk acceptance, they make all sorts of decisions about what it is that is possible to observe. If those people aren’t on board with where you want to go with security, then they’ll find creative ways to just get around it. And I think that’s probably a truism for a lot of organizations. So psychological acceptability, observability, and the behaviors are where I’d probably start.
Cole Cornford
Yeah, I think that’s why I still think change management is really important and people should spend some time in understanding how do you identify the people who are going to be early adopters, who are excited about technology? How do you identify the people who are going to be blockers and laggards? And then eventually have different campaigns and approaches because evangelizing, being excited, getting hands on with these people, that’s going to be effective. You’re going to have to use policies and sticks and make it easy for the other group of people.
And right-sizing an AppSec program is really challenging from what I’ve seen. I think a lot of people end up getting pigeonholed into a specific style of organization and then they just see that as how you do an AppSec program. So I’ve known many people who’ve just gone between different financial services places, and the first thing they say is, cool, well, we’ve got our static analysis tool, commercial variant, not going to look at Semgrep, we’re all just going to look at the expensive ones.
We’ve got our API security product for production. It’s always Akamai. Why not? Got to be, because that’s what the bank uses for CDN. And then sometimes they end up moving to a small business like a Eucalyptus or, I’d be loath to call LASSIE a small business, but something that’s a little bit of a different type of organization as far as SDLCs go. And then they’re like, how come we’re building our own software? Why are we using all these open source tools? What are all of these startups, and why do we use any of these instead of the mature brands out there? And so choosing the right things for different types of companies can be quite challenging unless you’ve actually gone and seen what else exists out there and what use cases you need to solve for.
There’s a lot of products that solve very specific problems for very specific verticals, but they collapse when they end up in the enterprise sphere because they don’t have all of the things you’d expect, like, oh, I need a SOC 2 Type 2 and it needs to integrate with these 16 products and have active directory integrations for SSO. And it needs to also maybe have Google Workspace SSO integrations and SAML based assertions and auditability for everybody’s logs. And then that poor startup will come back and say like, whoa, nah, we’re not about that. We just want to secure code.
And so choosing the right types of tools, even training developers and understanding culture, as a business gets a lot bigger, it’s really difficult to say, hey, all you developers, I want you all to be excited, happy, 10-out-of-10 devs who are going to do the best job possible and care about security and every other non-functional requirement out there. We know that’s unrealistic. You go to a small startup where you have three or five developers, and they’re all saying, yes sir, for the mission, I’ll do whatever I can. Go to an enormous bank like Capital One, and I don’t think they care whatsoever.
Jon-Anthoney de Boer
And I think, separating again from focus on particular tools, you might find that just the free open source thing that comes packaged up with your SCM is actually sufficient if the intent of the business, the behaviors and the expectations are all aligned. Remember, these are non-tool-focused things: if everyone’s on board with security, you can probably achieve some really great outcomes. And that’s before you’ve spent money on layers of expensive tooling and then the tooling that manages that tooling.
Cole Cornford
What would you say would be more effective, by the way, do you need to have a CTO, like a senior level person who’s pushing for this, or is it better to have within each of those feature teams to have the one security champion pushing for it? Because I’ve seen both approaches be quite effective.
Jon-Anthoney de Boer
My instinct based on experience is that it’s actually super important that top-level leadership have absolute buy-in to what you’re trying to achieve with your security program. And I mean that from engineering and security: that top-level leader buy-in to say, yes, I support you, has to be public, but thereafter we really need to listen. And this is, again, that combination of alignment between software engineering and the security team. We need to actually understand how software engineering is done practically, so that the security team can understand which controls are going to be the most effective and least disruptive to enable delivery in a safe sort of way. Because I have seen it as well where you have a groundswell of support from feature teams and they just can’t get the budget and they can’t get the prioritization for the work you need to do, because we’ve got delivery commitments and it’s all done six months in advance by people that are not related to the actual coalface sort of work.
But then I’ve also seen, on the other side, in the security lens, where a control will be proposed and included in a policy or a standard that is practically not possible to comply with. It reads well, but there’s no way for the team to actually take that and run with it and do something about it. And so I do think you need that coming together of teams. You need software engineering, you need that empathy to understand, well, how do you actually do things, and how big a change could we realistically make to that organization in terms of how it does things? And then figure out which controls are acceptable and will do something meaningful for security outcomes without completely disrupting the engineering teams. We need that alignment, but it has to have top-level leader support.
Cole Cornford
I see a lot of CTOs understand and care quite deeply about making sure that the applications their teams are producing have security built in, or that they get regular pen testing or some level of assurance built into that kind of culture. But what I often see is that a lot of those CTOs also come from a software engineering background, and so they can provide these technical recommendations. Where I think it falls over is when an organization gets large enough that you have a chief security officer or a head of security, and most of the time they come from a policy and governance background, a SOC or incident response background, or even a pen testing background.
And so with that kind of approach, they’re very rarely going to understand how the SDLC works and why you can’t enforce things that make sense for end user compute and infrastructure in the same way on software engineering. One that I see a lot is patching: all patches, critical patches, need to be deployed within 48 hours. That sounds really great. I wish that the open source maintainers of these transitive dependencies would also follow my 48-hour SLAs. But who am I? I don’t donate to open source. I’m not supporting curl out there, am I?
Jon-Anthoney de Boer
[inaudible :41].
Cole Cornford
There are still probably heaps of businesses that have Spring Boot vulnerabilities from even before that. So, you know how it is.
Jon-Anthoney de Boer
And this is an interesting thing. So you’ve kind of gone on a tangent, at least the way I’m hearing it, into the domain of the specialist manager versus the generalist manager. As the organization grows, what we find is that leaders’ accountabilities become much broader. They may have their original skill sets, maybe I’m a qualified engineer, but now I’m responsible for a 3,000-person organization with many, many different teams playing in different spaces. And so there’s only so much that leader will be able to be an expert in across their accountability range.
And in fact, I know with a lot of management, it’s seen as a bit of a virtue to step outside your comfort zone and go and manage a domain that you are not an expert in and learn how to lead teams in that way. Now, the problem with that, as I’ve seen it, is this: the specialist manager who intimately understands their domain will be able to come up with plans for how to, say, improve security outcomes for their organization, and they will request the budget to make that happen. So they’ll understand the work and need the money to make it happen in the time.
The generalist manager might be really great at being given a budget and told, with this budget, try to do things that move the needle. And that may not occur. Once again, you’ve got a fixed budget, you’ve got a goal that you want to achieve, you craft a plan that you can execute on that satisfies your needs, but it may not actually do anything meaningful for the organization. And so you get these two types of leader in big organizations. On the one hand, the specialist manager’s proposals are just unwieldy; the business cannot fund them and execute on them, they’re just going to break the business. But by the same token, the generalist manager who has a limited budget and can work to that sort of requirement is just simply ineffective; nothing ever changes. And we see that in other organizations.
So there’s a bit of a balancing act to play there. And I think what you’re talking about, that CTO that came from their domain, at a certain size of organization they can still have that visibility and provide that course correction and make sure the strategy is in alignment with the business. Yeah, I totally get that point. It’s a very difficult thing to maintain as an organization grows to a certain size.
Cole Cornford
In my experience, when someone is looking at becoming a leader internally within their organization, they know all the skeletons and they’ve seen how things have been, but oftentimes they’re not going to be particularly innovative. They’re going to be looking at efficiency and effectiveness. But I think [inaudible :23] Deming would’ve said something like, the most effective process is not doing the work that you don’t need to do. And so you don’t want to make a process efficient if you don’t need to do it at all, right?
Jon-Anthoney de Boer
A hundred percent. And shout out to Anita Cunningham from Quality Business Services. One of the best courses I ever did for professional development was her core operational excellence course. Now, I’m a technical person, but this is more about operational management and leadership, and she introduced me to things like understanding your ecosystem, input, process, output for what you manage, and understanding value streams in your processes: where the pure waste is, where the regulatory requirements sit (wasteful, but we have to have them), and where the value is that a customer would pay for. All these sorts of things. It’s a great course and it really opened my eyes to the management of processes such as software delivery processes, as distinct from the technical components of how to write code and how to produce artifacts and whatnot.
Cole Cornford
And look, I think that those courses are super valuable. And I always encourage people to just get out of software engineering and AppSec and security and go do anything else, including politics. There is nothing better for someone who spends all day arguing about, ah, is it going to be Arch Linux or is it Mint or is it Ubuntu, as though that’s the life-defining issue, than to go to the beach and just see people getting into a punch-up about whatever. And then they’re like, wow, real life is so fundamentally different, why am I caring about these kinds of things? And then they have to take a step back and realize that most of these senior executives are going to be more like the beachgoers than the people arguing about Arch Linux.
Jon-Anthoney de Boer
And there was a really great talk that I remember listening in on, I think it was from the head of partnerships or something at Google a couple of years back. It was about AI, and he started his talk with a nice little graph, considering the audience in the room, of Google query volumes for AI-related search terms. And the graph started out looking pretty impressive, because it’s all this positive reinforcement of your own biases and your own interests and whatever. And then he broadened it out to things like my car and groceries and whatever. And then it was like cost of living and this sort of thing.
And it just really brought home the idea that we’re in this little bubble most of the time as software engineering and software security people. A lot of what we consider to be super important and pressing and whatnot is definitely not the most high priority thing for most of the population and a lot of the non-expert, yet accountable leaders that we end up serving in our organizations. And it’s good to be mindful of that and stay humble.
Cole Cornford
That’s it. So for everybody, all of you in my little bubble, I think you should all say thank you to Jon-Anthoney for sharing his thoughts in our little AppSec bubble. It’s been an absolute pleasure to have you on, mate.
Jon-Anthoney de Boer
No, thank you. I really, really enjoyed my very first podcast ever, so I hope it’s useful for anyone who’s listening in. And if anyone has any queries or concerns or wants to bounce ideas, I’m always happy to bounce ideas off [inaudible :33] or whatever it is.
Cole Cornford
That’s it. So go ahead, add Jon-Anthoney on LinkedIn and say hello. Anyway, that’s it for today. We’ll see you next episode.
Thanks a lot for listening to this episode of Secured. If you’ve got any feedback at all, feel free to hit us up and let us know. If you’d like to learn more about how Galah Cyber can help keep your business secured, go to galahcyber.com.au.