A few days ago, I was doing a reference call for our Breach and Attack Simulation partner (you know I am a fan) – and I mentioned how the BAS test results were part of our Board of Directors Cybersecurity Metrics.
There was surprise – did our Board understand the tactics and techniques leveraged in the BAS testing, or the MITRE framework?
I replied, “They know when the test scores breach their risk tolerance.”
I hear some variation of this concern or question often: how do we create meaningful (ideally "one page" dashboard) reports for executive management, when they likely cannot understand or consume the universe of technical data we produce?
To me, this looks at the problem through the wrong end of the telescope. We first have to ask, "What are the questions or risks that concern – or should concern – our executive stakeholders and peers?" Let's understand or agree on those – and then provide the specific metrics that address those concerns directly.
For us, our executive reporting is presented as the answers to these ten risk questions (which we worked together to create):
- How well do we align to cybersecurity best practices \ frameworks?
- How effective is our security infrastructure? (BAS comes in handy here)
- How effective is the staff’s security awareness training?
- Have we detected any serious staff policy violations \ potential insider threats?
- Have we managed third party supplier risk?
- Have we maintained appropriate systems hygiene?
- How well do we control privileged accounts \ access?
- Have we effectively remediated any application and \ or system security issues discovered by third party testers?
- Are we prepared to respond to a major security incident?
- Have we exceeded our overall Information Technology Risk tolerance?
Framing our thinking around these questions first, we were able to consider which information or measures best addressed each (or where we would need to create a new measure, if a blind spot existed).
Whatever potential metrics we reviewed had to adhere to a few guiding principles:
- Transparency: the data sources, methodology, and reporting cadence should be agreed upon beforehand
- Fidelity: we should be able to track these measures consistently and accurately over time
- Reproducibility: if asked, we should be able to re-generate the metrics for a previous period and produce the same results
With these guidelines in mind, we proposed metrics that addressed these concerns, and established threshold achievement \ risk scores for each.
Our report is now a very simple dashboard of basically four columns:
- The framing question \ concern to be addressed
- The threshold \ acceptable target value(s)
- The value(s) for the current reporting period
- The previous reporting period’s value(s)
It is very easy to red-yellow-green arrow this report, if you so desire. We also provide an "appendix" page describing each measure, its threshold value, and why it addresses the framing concern.
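To make the four-column layout concrete, here is a minimal sketch of how such a dashboard row and its red-yellow-green status could be derived. This is purely illustrative – the questions, threshold values, and the `rag_status` helper are invented for the example and assume that higher values mean more risk; it is not our actual reporting tool.

```python
def rag_status(current, threshold, warn_margin=0.1):
    """Illustrative status rule: RED if the threshold is breached,
    YELLOW if within 10% of it (trending toward a breach), else GREEN.
    Assumes higher values mean more risk."""
    if current > threshold:
        return "RED"
    if current > threshold * (1 - warn_margin):
        return "YELLOW"
    return "GREEN"

# Hypothetical metrics: (framing question, threshold, current, previous)
metrics = [
    ("Privileged access: stale admin accounts", 5, 3, 7),
    ("Systems hygiene: criticals unpatched > 30 days", 10, 12, 9),
]

# Render the four columns described above, plus a derived status.
print(f"{'Question':<48}{'Threshold':>10}{'Current':>9}{'Previous':>10}{'Status':>8}")
for question, threshold, current, previous in metrics:
    status = rag_status(current, threshold)
    print(f"{question:<48}{threshold:>10}{current:>9}{previous:>10}{status:>8}")
```

The point of the sketch is that the status is computed from the agreed threshold, not judged ad hoc – which is what keeps the report consistent and reproducible from one period to the next.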
If we breach – or are trending toward breaching – a risk threshold, it focuses our conversations with executive management on those concerns, and on what we are doing – or the support we need – to return to acceptable risk levels.
It is hard enough for us, as cybersecurity professionals, to consume and keep up with all the technical information we process. If we start with this universe of data and try to guess what could be meaningful to our executive sponsors and peers, they will likely be unsatisfied, uncertain, or unable to support us in managing the organization's cybersecurity risks.