Cybersecurity is a term you hear three or four times before breakfast these days. But what really goes into keeping businesses secure? Here we detail a day in the life of one of my senior cybersecurity engineers…and what it takes to ensure that if breaches happen, they happen to the other guys.
This article was originally posted on CSO Online
By Andrew Douthwaite, Vice President of Managed Services
Jordan has worked as the senior engineer at VirtualArmour since 2011. He is part of a Security Operations Center that oversees client sites across 30+ countries and five continents. Its focus is on protecting client networks and employees, and on staying up to date with the latest technology in the cybersecurity space.
Starting the day
My day starts with a quick check of emails from overnight to determine if there is anything that needs addressing immediately or if it can wait until I get to the office. As we are a 24/7 operation and our other Security Operations Center (SOC) is in the U.S., there may be tickets that have to be handed off overnight. If there isn’t anything urgent, I will scan the news headlines and key tech sites that post the latest threats we need to be aware of. From there I grab a quick cup of tea with breakfast and then head to work.
Thankfully, my commute is relatively short; however, I do use this time to recap my previous day and think through the goals for the day ahead. As we are in an environment that requires us to be “always on” in protecting our clients, this time gives me an opportunity to plan ahead without distractions.
At the office
Once in the office, I check our current tickets and determine if I need to pick any of them up to support the team and ensure we maintain our 15-minute response time. From there, I follow up on any non-urgent emails that came in overnight from the U.S. that didn’t go through our ticketing system. As the Lead Engineer for several of our clients, I often have messages from key contacts directly asking for advice or information about a certain part of their system.
Innovation and development
We are constantly evaluating new products and processes to spot potential risks or possible fixes for gaps we may observe in a client’s network topology. We also “eat our own dog food,” as they say, so I spend time running scenarios within our internal network to ensure a robust defense against any attempted breach of our systems. This allows us to test the technology we are using with our clients before we deploy it within their environment. This can include doing a full cycle review and Proof of Concept (POC) with our major partner systems, such as Juniper, IBM, and Splunk, to see how each of these technologies works with the others. I will spend time monitoring devices across the networks, looking for anomalies, link changes, and performance spikes.
I typically grab a sandwich with team members, and although it’s not exclusively work chat, this is a good time to talk with the team about what they are seeing from the day’s tickets. Most of us are genuinely interested in technology, so we like to exchange ideas on new solutions coming to market that have the potential to perform better than the current leaders. It is a good time to connect with people away from the more pressured environment that exists when we’re engaged in monitoring and ensuring we’re reacting to situations the moment they arise.
For me, afternoons consist of client meetings with the accounts that I personally manage. I meet with every account at least once a week to ensure we are up to speed with what’s happening within their business. Each client meeting starts with a standard agenda and is usually attended by the technical contacts as well as key stakeholders. We tend to review what happened in the previous week and what projects are currently open, and then discuss what the client is seeing on their side, any concerns, and any changes that may need to be implemented.
My clients’ technical expertise varies, so I try to customize the discussion based on how they like to work (it’s easier for me to adjust than to make them fit my approach) and what their needs are. I typically have 2-3 client meetings in a day. After each call, I update the Slack channel dedicated to that client with the details and any potential action items. This helps ensure that anyone on shift has the latest details for the client. As our client portal (CloudCastr) is a tool for our clients to see all of the relevant details regarding their network protection in real time, I typically close the calls with a quick recap of the key points they should be aware of.
In between client calls, I commonly work on any change requests that have come in via our ticketing system. While our SLA is to complete these within 24 hours, we are pretty competitive internally and try to get them done in less than 12 hours. Another key element of the in-between time is the “huddles” that I do with the other engineers to hash out things we come across. The collaboration that we get from this approach speeds up problem-solving and our responsiveness to client issues.
It seems like every day there is news of a new breach (most recently at the time of writing, Uber), and I have to be ready not only to respond to our clients but also to provide company management with an assessment of the situation so they can communicate more broadly to the market. As a public company, we feel an obligation to help educate the market on what we know about these types of events.
Training and development
Learning is part of our culture and a requirement for an engineer in our space, so I spend time continuing my education and development through technical training (e.g., webinars and partner material).
At the end of each day, I make sure that I hand off any issue that is still being worked on to our U.S. SOC so they can continue working on it overnight. Given my role, I am consistently checking email so that I can respond to any tickets or issues that come up which need my response or require me to provide context to a fellow engineer.
Before I turn in for the day, I double-check emails and our dashboards, finishing off the day just the way I started it.
Glossary of terms
Tickets: help requests submitted by end users via an issue tracking system; a ticket typically contains elements detailing the exact nature of the problem the end user is having with a specific network component.
Change requests: documents containing a call for an adjustment to a system.
Client network topology: the structure of a client’s network.
Full cycle review testing: a methodology used to test whether the flow of an application performs as designed from start to finish. The purpose of carrying out end-to-end tests is to identify system dependencies and to ensure that the right information is passed between various system components and systems.
Link changes: changes that occur to links (connections) within a network.
SLA: Service Level Agreement.