We're From the Cybersecurity Team, and We're Here to Help

Before I got into cybersecurity, I worked as an electrical engineer and as a software developer. I still keep in touch with several of my friends and former coworkers in both of those fields.

Each time I tell someone for the first time that I’ve switched to cybersecurity, a slight pall passes over their face.

One friend in particular told me: “Oh great, so I’ll spend weeks building something, and then you’ll spend 30 minutes breaking the thing I poured my heart into.” Ouch.

💔

I’m in the business of telling people (kindly, tactfully) what cybersecurity errors exist in their code, or product, or system. That can be rough on me, since I don’t particularly care for confrontation. But more to the point, it’s even rougher on the developers and engineers.

And not because I’m trying to be mean. In fact, quite the opposite. I don’t want anyone to feel hurt by my findings, to feel singled out, or to feel shamed. I’ve been on the other side of it, and it can truly, genuinely suck.

But even with good intentions, security teams and their efforts are often unwelcome, or at least not fully wanted, by many development teams. Security teams sometimes need to deliver bad news in order to secure a system, and developers sometimes make mistakes and need to know what should be fixed.

It’s a tricky dynamic, so how should we go about handling it?

Defensiveness is Understandable

First, I want to dispel the notion that defensiveness is merely a sign of immaturity in engineers and devs. Yes, immature coworkers are more likely to be defensive, but maturity is not a complete salve against having your mistakes pointed out in a lengthy report (one that is often shared with your managers, too).

Accepting criticism is a skill, and in fact it’s one that some disciplines (art, for example) have students practice as part of their curriculum. STEM majors, not so much.

But that’s not why I say defensiveness is understandable.

A past coworker of mine was originally from outside the tech world, suddenly working as an executive assistant among a ton of programmers. We got along well, and she would point out to me examples of what she called “social capital” or “social currency” within the group: little stories or details that, if shared, improved the person’s social standing within that group.

She did it in a somewhat tongue-in-cheek way, feeling like a bit of an outside anthropologist, but she did genuinely care for the group.

One big way of getting social currency within tech is by being technically correct. Other fields might defer to someone based on their academic credentials, how many papers they have published, their sales numbers, how likable they are, who their parents are, how well-spoken they are, or any other number of metrics.

But within tech, correctness and “PoC or GTFO” often reign supreme. Someone without the looks or degree or popularity can socially “win” by out-coding or out-demoing someone else. That’s the trump card, an in-the-moment display of technical meritocracy.

So it’s no wonder that telling someone they are technically incorrect can go over like a lead zeppelin.

Couple that with the idea that many engineers are deeply proud of their work, and then enter the security team, telling them, essentially, that their baby is ugly. Yeah, not fun.

Building Rapport

Whenever I start on a new project, I do my best to build rapport with the engineers and developers whose work I’m evaluating, and/or who will be implementing my recommended changes.

This sounds silly to write up, but try to demonstrate care towards everyone that you work with. Ask them how they are, show (genuine) interest in their pets or kids or news. Find something to relate on.

Try to understand how they want to be interacted with.

Some people love small talk, and will find you cold and uninviting if you don’t lead with that.

[Image: cliché hacker stock photo with balaclava and hoodie]
Maybe we could bring a friendlier vibe to the debrief call, even if several of us have wardrobes that match stock photo imagery? Just an idea.

Other people hate it and would rather get straight to business. If you small talk with them too much, they may think you are disrespecting their time.

You can take this even further:

  • What is their preferred method of communication?
  • Do they always ask for sources or data? If so, provide it without being asked.
  • Do they want a bullet point email to forward (so they don’t have to try to summarize your concerns themselves)? Make it happen.

Turning a bad relationship around can be tricky, but it’s doable. If you find yourself in a sour dynamic with the dev team, you need to talk with the rest of your security team to change the narrative together. Own up to your past prickliness if need be, and find ways to be better (and stay better) in the future.

Don’t Just Say; Do

We’ve all heard the saying “those who can’t do, teach”. I firmly believe that this is bullshit: as anyone who has ever taught anything knows, teaching is a great way to 10x your understanding of something.

But a similar sentiment can hinder relations between development teams and security. Something to the effect of “those who can’t build, break”.

Are they right?

How can you, as a security person, dispel this notion that security people just show up, demand stuff, and leave the dev teams with more work?

First, you don’t want to appear as someone who doesn’t understand the underlying technology, or who doesn’t care to understand the details of their system. Make sure that you’ve done your homework and can speak intelligently about a technology without hand-waving away the details of what you are asking the devs to do.

For example:

  • Asking them to implement a Content Security Policy header? Have you looked up how to do that for the given web framework in use, and (better yet) have you tried it out yourself?
  • Wanting secure boot? Have you dug through the hundreds of pages of microprocessor guides to be able to quickly point out the steps, and is any information missing, or floating around in an errata PDF somewhere?
  • Wanting to implement MFA? Can you pull some reference workflows so that the UX team doesn’t have to invent things from scratch, and some cost estimations for the Cloud team?
[Image: screenshot of a manual showing a 1,341-page count]
Don't ask the dev team to sift through documents with four-digit page counts.
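To make the first bullet concrete, here's a minimal sketch of what "having tried it yourself" might look like: building a restrictive CSP header value and attaching it on every response. The directive choices are a common conservative starting point, not a prescription for any particular app, and the Flask snippet in the comments is just one example framework.

```python
# Sketch: assemble a Content-Security-Policy header value.
# Directive choices below are a common restrictive baseline,
# meant to be loosened per legitimate need, not a prescription.

def build_csp(directives: dict[str, list[str]]) -> str:
    """Serialize a directive -> source-list mapping into a CSP header value."""
    return "; ".join(
        f"{name} {' '.join(sources)}" for name, sources in directives.items()
    )

CSP_VALUE = build_csp({
    "default-src": ["'self'"],
    "script-src": ["'self'"],
    "object-src": ["'none'"],
    "frame-ancestors": ["'none'"],
})

# In Flask (one example framework), attaching it might look like:
#
#   @app.after_request
#   def set_csp(response):
#       # Deploy as Content-Security-Policy-Report-Only first,
#       # so you find breakage before enforcing.
#       response.headers["Content-Security-Policy"] = CSP_VALUE
#       return response
```

Showing up with something like this (already adapted to the team's actual framework) turns "please add a CSP" into a ten-minute change instead of a research project.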

Next, move things out of the theoretical realm and into something provable. Write an exploit script, or share a demo video, walking through why something is an issue and how it could be exploited.

I’m gonna keep it real with you: we security folks can sometimes be a bit on the dramatic side, so make sure that what you are saying is properly qualified. If you scream “catastrophic!” about something that has a narrow path to exploitation, you will lose credibility. Losing credibility means that when there is a real issue, you’ll be treated as the boy who cried wolf.

Instead, be accurate about what could happen (so you don’t get dunked on) and under what circumstances. Add demos/PoCs if you can.
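Even a tiny script can move a finding out of the theoretical. As an entirely hypothetical illustration (the endpoint URL and parameter name are assumptions, not from any real system), here's a sketch that demonstrates a precondition for reflected XSS: a query parameter echoed into the page without escaping.

```python
# Hypothetical PoC sketch: show that a query parameter is reflected
# into the response body unescaped, the first step of a reflected-XSS
# argument. URL and parameter name are illustrative only.
import urllib.parse
import urllib.request

MARKER = "<poc-marker-1337>"

def body_reflects_unescaped(body: str, marker: str = MARKER) -> bool:
    """True if the raw (unescaped) marker appears in the response body."""
    return marker in body

def check_endpoint(base_url: str, param: str) -> bool:
    """Fetch base_url with ?param=MARKER and test the response body."""
    query = urllib.parse.urlencode({param: MARKER})
    with urllib.request.urlopen(f"{base_url}?{query}") as resp:
        return body_reflects_unescaped(resp.read().decode(errors="replace"))
```

A ten-line script like this, or a thirty-second screen recording of it running, settles the "is this actually exploitable?" debate far faster than a paragraph in a report.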

Then, do what you can to make the fix easy.

I already talked a bit about this, in the form of having done your homework and having resources at hand for the devs and engineers.

You can take things a step further by submitting a pull request or otherwise doing the work and showing that you can build things (not just break them).

You rolling up your sleeves and helping get things done (rather than just putting more stuff on their plate) will be more likely to get you cooperation in the future.

Move yourself out of the role of “person who just tells them they did something wrong” and into the role of “person who actively helps them get it right” whenever possible.

Make it Win-Win

In addition to building rapport and lending a hand where possible, your efforts need to be seen as a win-win for the devs and engineers. And, whenever possible, for management too.

This is not just some mealy-mouthed corporate take. You need to consider what the dev team wants, what resources it has, and then build a case for why the dev team should spend its resources on what YOU want.

And no, “because I said so” isn’t a good answer. It wasn’t a sufficient answer for you as a kid, and it’s not going to be a sufficient answer for your colleagues now. You cannot continue to antagonize engineers and have a successful security program in the long-term. ¯\_(ツ)_/¯

As I’ve hinted at before, you want to anticipate what they want and need, and make it easy for them. If you know their boss is budget-focused, find them free options or clearly make a case why doing XYZ now will save money (development time, etc) later. If they are under a time crunch, help them prioritize the highest impact items, or make a case to get them more help in the form of people, or an off-the-shelf solution.

Lastly: how do they want to be seen?

If you cannot meaningfully help them with budget or schedule pressures, can you at least make sure their efforts to help security are seen by others?

Make a point to mention (to their managers, or to someone “who counts”) that they have done something that makes the product better and safer. Connect it to something meaningful for management. Did they see an article lately about how a similar product suffered a big breach? Spell out how the recommendations of the security team prevent that particular scenario (y’all are so on top of things!), and how the dev team busted their butts getting it done (thank you dev team!). AND it also improves performance! Win-win-win, teamwork all around.

Does that feel kind of cheesy? Maybe so, but framing things in a way that highlights security efforts moving the project forward will help you and your team. It will certainly get you more results than being antagonistic and trying to strong-arm them.

To summarize: be nice to the dev team, anticipate their wants, ease some of the (let’s be honest!) additional work you’re dumping on their plate, and make sure their efforts re: security are seen positively by management.

Tech’s Big Secret

If you dig deep enough, many technical problems are actually just social problems that have had a recommended technical fix slapped on them:

  • Convoluted tech workarounds and “temporary” fixes because the person with the expertise to actually fix it is overworked and doesn’t have enough help.
  • Impossible last minute design changes, because the customer asked for something and the engineers don’t have a good way to communicate the difficulty; or aren’t listened to.
  • Weird legacy code situations that could have been fixed if enough budget existed for a rewrite (that was overdue a decade ago).
  • Security fixes that don’t get done, or don’t get done fully, because the dev didn’t have the expertise to implement them and it’d be embarrassing to ask (or the security team were jerks, so their requests were de-prioritized).
  • The classic “we’ll fix that (hardware) problem in software” move because of waterfall methodology, or needing to report forward progress to upper management.
  • And many more.

Of course, nothing involving people is ever as straightforward as it seems, and not all social problems are fixable (or even influence-able) by a single person. This post details how I think about things and what I’ve had success with over the course of my career. Regardless, I hope these ideas can help improve dev-security partnerships and help secure products, faster.
