Cybersecurity Risk is a Board-Level Issue

Elevating Cybersecurity: A Strategic Imperative for Boards

This presentation addresses the imperative of understanding and managing cybersecurity risk at the board level. Despite the growing threat landscape, only a minority of board members recognize their organization’s high vulnerability to cyber-attacks, and nearly half feel unprepared for such incidents. The presentation underscores the importance of board engagement in cybersecurity and highlights the challenges and necessity of complying with the new SEC cybersecurity disclosure requirements.

The NACD Cyber Risk Oversight Principles are introduced, urging boards to view cybersecurity as a strategic risk and ensure comprehensive risk management frameworks are in place. The presentation also sheds light on the typical profile of board members, often senior executives unfamiliar with the nuances of cyber risk, pointing towards a significant knowledge gap.

To bridge this gap, actionable steps for boards and management are outlined, emphasizing the need for effective communication, risk reporting, and a robust cybersecurity program. Finally, it provides guidance on how to present cybersecurity issues to the board, focusing on clarity, relevance, and the facilitation of insightful discussions to enhance cyber-risk oversight.


NIST CSF 2.0: Making CISOs’ Lives Easier with the New Govern Function

The National Institute of Standards and Technology (NIST) has recently unveiled Cybersecurity Framework 2.0 (CSF 2.0), marking a significant advancement in cybersecurity risk governance practices. This updated framework, developed through extensive collaboration among industry leaders, academics, and government agencies worldwide, introduces transformative changes that are poised to revolutionize cybersecurity programs and strategies.

Why do CISOs struggle with governance, and how does NIST CSF 2.0 help?

Despite their relentless efforts to navigate the evolving threat landscape and mitigate risks, chief information security officers (CISOs) have long grappled with a fundamental gap in their cybersecurity management toolkit. The lack of structured oversight and top-level support often leaves them struggling to discern critical priorities amid the wide and evolving scope of their responsibilities. The introduction of the Govern function marks a significant milestone for CISOs and a recognition of their indispensable role in the cybersecurity domain. It bridges gaps in their management approach and helps them navigate the complexities of their roles with greater clarity and effectiveness. In short, the Govern function is more than an incremental addition: it brings real relief to cybersecurity management. A look at the Categories and Subcategories in the Govern function makes clear how central they are to effective cybersecurity management:

“GOVERN (GV) — The organization’s cybersecurity risk management strategy, expectations, and policy are established, communicated, and monitored. The GOVERN Function provides outcomes to inform what an organization may do to achieve and prioritize the outcomes of the other five Functions in the context of its mission and stakeholder expectations. Governance activities are critical for incorporating cybersecurity into an organization’s broader enterprise risk management (ERM) strategy. GOVERN addresses an understanding of organizational context; the establishment of cybersecurity strategy and cybersecurity supply chain risk management; roles, responsibilities, and authorities; policy; and the oversight of cybersecurity strategy.”

By fostering a culture of governance, organizations can effectively address regulatory challenges and mitigate emerging cybersecurity risks, thereby ensuring robust cyber defenses at all levels.

Additional Resources

In response to extensive feedback received during the drafting phase, NIST has broadened the core guidance of CSF 2.0 and developed supplementary resources to facilitate users’ adoption of the framework. These resources are tailored to different user groups, providing customized pathways into CSF and simplifying its implementation. The framework places a newfound emphasis on governance, underscoring the importance of informed decision-making on cybersecurity strategy at all levels of an organization.

To facilitate adoption, CSF 2.0 offers a variety of implementation examples and quick-start guides tailored to specific user profiles, such as small businesses and enterprise risk managers. Cybersecurity priorities are driven by strategic objectives, laws, regulations, and risk responses; integrating cybersecurity risk management with overall enterprise risk management both aligns cybersecurity objectives with business objectives and makes for more efficient, sounder risk management decisions.

Looking at how NIST CSF 2.0 can help enterprise risk managers, as an example, this can be achieved by setting the target profile of the organization to align with GV.OC-01 (“The organizational mission is understood and informs cybersecurity risk management”), GV.OC-02 (“Internal and external stakeholders are understood, and their needs and expectations regarding cybersecurity risk management are understood and considered”) and other relevant Govern subcategories.
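
As a concrete illustration (not part of the framework itself), the sketch below shows one way an enterprise risk manager might record a target profile for the Govern subcategories quoted above and report gaps against the current profile. The subcategory IDs and outcome statements come from CSF 2.0; the 0-4 maturity scale, field names, and scores are assumptions made purely for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch: recording a CSF 2.0 target profile for selected Govern
# subcategories and reporting the gap against the current profile. The
# subcategory IDs/descriptions are from CSF 2.0; the 0-4 maturity scale,
# field names, and scores are assumptions for illustration only.

@dataclass
class SubcategoryProfile:
    subcategory_id: str   # e.g., "GV.OC-01"
    description: str      # abbreviated CSF 2.0 outcome statement
    current_score: int    # assessed state today (illustrative 0-4 scale)
    target_score: int     # state required by the target profile

target_profile = [
    SubcategoryProfile("GV.OC-01", "Organizational mission informs cybersecurity risk management", 1, 3),
    SubcategoryProfile("GV.OC-02", "Stakeholder needs and expectations are understood and considered", 2, 3),
]

def gap_report(profile):
    """Print subcategories whose current state falls short of the target, largest gap first."""
    for p in sorted(profile, key=lambda p: p.target_score - p.current_score, reverse=True):
        gap = p.target_score - p.current_score
        if gap > 0:
            print(f"{p.subcategory_id}: gap of {gap} ({p.description})")

gap_report(target_profile)
```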

Additionally, the newly introduced CSF 2.0 Reference Tool simplifies implementation processes. The framework also includes a searchable catalog of references, aiding organizations in aligning their actions with CSF guidance and referencing other cybersecurity documents, including those from NIST and ISO/IEC.

All information relevant to the new NIST CSF 2.0 can be found at: https://www.nist.gov/cyberframework


Quick Wins: Risk Assessment

No Security – No Business

Highlighting the importance of robust information security management today may seem futile. Businesses that fail to grasp the value of their information are either already defunct or heading that way. On the flip side, every thriving business prioritizes safeguarding intellectual property, business data, and personal information. If you’re reading this, chances are you manage information security in your business, or perhaps the entire enterprise. This blog post is tailored for you. Whether you have a dedicated security team or a single person overseeing information security, and whether you comply with ISO 27001, SOC 2, NIST, PCI DSS, HIPAA, or similar frameworks, you likely run regular penetration tests and audits to maintain that compliance. Your board diligently reviews audit and security reports. Despite significant investments in security tools and personnel, you might feel secure, but the critical question remains – are you truly secure?

There’s Always a Bit of Fear

How do you navigate those moments when mainstream media buzzes with discussions about a massive cybersecurity vulnerability impacting global IT systems? Does a sense of security linger when, a year after the discovery of a critical vulnerability capable of compromising confidential data – altering or deleting it – a staggering 74% of the global Fortune 2000 companies remain vulnerable? I wager you reach out to your security team, asking, “Are we secure?” If you lead the security team, you should have insights, but encountering such a vulnerability for the first time likely prompts you to direct the same question to your security analysts and IT infrastructure team. Regardless of whether you oversee the security team or the entire business, the need arises for information that no conventional security assessment can provide. Until now.

Fear Arises from the Unknown

Fear arises from the unknown, not just from the vulnerability itself but from the uncertainty surrounding our ability to mitigate such unforeseen risks. Cybersecurity professionals grapple with the challenge of combating invisible adversaries and safeguarding intangible assets. It’s not merely the invisibility of electronic data, but the lack of clarity on what data needs protection, its location, form, and the potential threats it faces. This uncertainty fuels our fear, hindering our understanding of the nature and scale of impact in the event of a breach.

Understand Your Risk so you can Mitigate it

To empower executives with transparency and visibility, we offer a unique risk assessment proposal. Unlike conventional security assessors, we commence with a deep dive into your business, understanding the information you collect and process and how it translates into business value. We analyze the information flow, IT systems, security measures, and organizational architecture.

Our approach involves crafting real-life scenarios based on identified weaknesses and vulnerabilities in your business processes. These scenarios range from sophisticated targeted attacks to glitches in system design, cyber espionage, unintentional data leakage, or sporadic hacking attempts. Each scenario is assigned a probability based on interconnected security flaws and your organization’s attractiveness as a target.

Crucially, we assess the potential impact of each scenario, providing not just financial estimates but a detailed explanation of the events, their interconnections, and the specific types of impact your business could endure.

Once we understand the risk scenarios and their impact, we delve into the root causes, enabling us to compile a tailored list of mitigation measures. In essence, we decipher your business, anticipate security incident impacts, identify root causes, and guide your security investments toward areas offering optimal cost-to-security results.
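
To make the approach concrete, here is a minimal sketch of scenario-based risk ranking, assuming each scenario carries an annual probability and an impact estimate and scenarios are ordered by expected loss. The scenario names and figures are invented for illustration and are not outputs of our methodology.

```python
# Illustrative sketch of scenario-based risk ranking: each scenario gets an
# annual probability and an impact estimate, and scenarios are ranked by
# expected loss. All names and figures below are made up for illustration.

scenarios = [
    {"name": "Targeted attack on customer database", "annual_probability": 0.05, "impact_usd": 4_000_000},
    {"name": "Unintentional data leakage via misconfigured file share", "annual_probability": 0.30, "impact_usd": 250_000},
    {"name": "Opportunistic ransomware on back-office systems", "annual_probability": 0.15, "impact_usd": 1_200_000},
]

for s in scenarios:
    s["expected_annual_loss"] = s["annual_probability"] * s["impact_usd"]

# Rank scenarios so mitigation spending can be directed where it buys the most risk reduction.
for s in sorted(scenarios, key=lambda s: s["expected_annual_loss"], reverse=True):
    print(f'{s["name"]}: expected annual loss ${s["expected_annual_loss"]:,.0f}')
```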


Empower Your Business with Confidence:
Elevate Cybersecurity through Tailored Risk Assessments and Informed Decision-Making

Unlock the power of confidence in your business’s cybersecurity with our comprehensive risk assessment service. In a world where cybersecurity threats lurk, we go beyond conventional approaches. We start by understanding your business, assessing vulnerabilities, and crafting realistic scenarios that delve into potential impacts. Our unique methodology provides transparency and visibility, offering executives valuable insights to make informed decisions about security investments. Fear the unknown no more – mitigate risks effectively with our tailored risk assessment, ensuring your business is safeguarded against the evolving landscape of cyber threats.

Centralizing ERM Data With a Common Methodology

In the intricate landscape of modern business operations, the significance of robust Enterprise Risk Management (ERM) systems cannot be overstated. Centralizing ERM data with a common methodology emerges as a pivotal strategy in enhancing organizational resilience and compliance. This approach not only streamlines risk management processes but also fosters a culture of transparency and accountability across various departments. By aligning methodologies and normalizing assessment results, organizations can efficiently manage risks and ensure compliance, thereby safeguarding their assets and reputation. This post discusses the essentials of centralizing ERM data, highlighting the challenges, benefits, and strategic considerations vital for implementing an effective ERM framework.

Centralizing ERM Data with a common methodology:

  • The success of effective Enterprise Risk Management systems, like most things in large organizations, depends on tone-at-the-top messaging and support. To make the case, proponents can tie the effort to compliance requirements that call for these kinds of centralized reporting systems for proper risk management.
  • Managing Risk and Compliance, as well as governing the resulting decisions, is most effective when information (e.g., assessment results) is correlated across key points of an organization.
    When centralizing, assessment results must be normalized so they can be compared, rolled up, summarized, integrated, and presented consistently.
  • This doesn’t necessarily require changing the methodology of other teams, but it does mean aligning them so that output fits together.
    We see the full spectrum: orgs trying to do all of this with spreadsheets on SharePoint, and orgs with an overly complicated GRC tool no one quite knows how to use. A middle ground is for each team to use what works best for them but pipe their normalized results into a central repository (manually at first, but eventually programmatically), as in the sketch after this list.
  • Ideally a common methodology is defined that supports all the risk / compliance flavors that the org is dealing with.
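
The sketch below illustrates the "normalize, then pipe into a central repository" idea from the list above. The team names, native scoring scales, mappings, and the in-memory stand-in for a repository are all assumptions for illustration.

```python
# Illustrative sketch: teams keep their own assessment tools and scales, but
# results are normalized to a common 0-100 scale before landing in the central
# ERM repository. The scales, mappings, and in-memory "repository" are assumptions.

central_repository = []  # stand-in for a shared database or GRC system

def normalize(team, raw_score):
    """Map each team's native scoring scale onto a common 0-100 scale."""
    if team == "infosec":          # uses a 1-5 qualitative scale
        return (raw_score - 1) / 4 * 100
    if team == "internal_audit":   # already reports percentages
        return float(raw_score)
    if team == "privacy":          # uses high/medium/low ratings
        return {"low": 25.0, "medium": 50.0, "high": 90.0}[raw_score]
    raise ValueError(f"Unknown team: {team}")

def submit_result(team, risk_name, raw_score):
    """Normalize a team's assessment result and store it centrally."""
    central_repository.append({
        "team": team,
        "risk": risk_name,
        "normalized_score": normalize(team, raw_score),
    })

submit_result("infosec", "Unpatched internet-facing systems", 4)
submit_result("internal_audit", "Change-management control gaps", 65)
submit_result("privacy", "Excessive retention of customer data", "high")

# Roll-up: consistent scores can now be compared and summarized across teams.
for record in sorted(central_repository, key=lambda r: r["normalized_score"], reverse=True):
    print(f'{record["risk"]} ({record["team"]}): {record["normalized_score"]:.0f}/100')
```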

Transparency vs need-to-know:

  • Transparency does need to be balanced with need-to-know (which is, after all, a control). It doesn’t need to be a free for all — who may see what should be determined by data classification policies and not up to a single department to decide.
  • The system needs to implement appropriate least-privilege, role-based access. The centralized system should apply org-based access rights so teams can see their own data, with broader data restricted as necessary from other teams; a minimal sketch of this access model follows the list.
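
The sketch below shows one way visibility could be driven by data classification and the requester's role and team rather than by a single department's preference. The roles, classifications, and rules are placeholders, not a prescription.

```python
# Illustrative sketch of least-privilege access to centralized ERM data:
# visibility is decided by data classification and the requester's role/team,
# not by the owning department alone. Roles and rules below are placeholders.

RECORD = {"risk": "InfoSec staffing shortfall", "owning_team": "infosec", "classification": "restricted"}

def can_view(user, record):
    """Return True if the user may view this ERM record."""
    if user["role"] in ("erm_office", "executive"):      # enterprise-wide oversight roles
        return True
    if user["team"] == record["owning_team"]:            # teams always see their own data
        return True
    return record["classification"] == "internal"        # broadly shareable data only

print(can_view({"role": "analyst", "team": "infosec"}, RECORD))   # True: own team
print(can_view({"role": "analyst", "team": "finance"}, RECORD))   # False: restricted, other team
print(can_view({"role": "erm_office", "team": "risk"}, RECORD))   # True: oversight role
```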

Concerns on compliance scope creep and other second-order effects:

  • The centralized ERM repository shining a more transparent light across all teams is ultimately a good thing. When risk management activities are siloed (by departments, budgets, quick-wins or for whatever reason), they become much more expensive than when they are communicated and coordinated from a central point, and resources can be allocated and prioritized efficiently.
  • Looking at this more practically, what if the org suddenly learns about more risk stemming from, say, the current staffing levels in the InfoSec department? This may, and arguably should, ultimately be dealt with by executive leadership – there is no “us” and “them” in ERM – it is the organization’s mission, vision, goals, and the business impact (should the risks or compliance issues materialize) that drive priorities.
  • If the burden or scope outweighs the means and resources to handle it, something is fundamentally broken, and this is a good occasion to address it. Either management needs to accept it as is or invest more.
    If Internal Audit is starting to get their hands dirty around the organization…this could be a positive, not a negative, situation. Internal audit is there to help improve internal processes and controls, so if they think they need to “jump on” something for legitimate reasons, let them do that and collaborate with them in resolving the issue!
  • If it is just about “making a mark”, listen to what they have to say and explain how you are working on it (if they are not adding any real value). The ends should also justify the means: IA can’t be breathing down your neck asking for a million things unless it is supporting risk reduction. Normalizing the process will help with this, and speaking openly about it at governance meetings will help as well. Again, there is no “us” vs. “them” in ERM.

Conclusion

Centralizing ERM data with a common methodology presents a strategic advantage in navigating the complex regulatory and risk landscapes facing organizations today. By fostering an environment of shared understanding and cooperative risk management, companies can significantly enhance their operational efficiency, compliance posture, and strategic decision-making capabilities. The journey towards a centralized ERM system may pose challenges, including balancing transparency with privacy concerns and managing compliance scope creep. However, the rewards—enhanced coordination, optimized resource allocation, and improved risk visibility—far outweigh these hurdles. As organizations strive to adapt to the ever-evolving business ecosystem, adopting a centralized approach to ERM stands out as a crucial step towards achieving resilience, compliance, and ultimately, sustained success.

Are your risk models making you a bad risk manager? (Part 2)

In part 1, we explored risk model issues and the need for skepticism. Now, let's focus on adapting to these challenges.

Validation

Let’s start with the models themselves.

Even if you aren’t using a formal model, there are a lot of things you probably know today. You likely have at least a working sense of which information assets support critical, revenue-generating business. You can augment this knowledge and may discover some new things by doing a revenue-process map linked into a data flow — a practice we’ve found invaluable in optimizing security efforts.

And you don’t have to wait to complete one of those to sharpen your model. Look at incident history for your critical processes and convert it to event trees to get relatively objective (albeit history-based) probabilities. It’s a defensible start as you gather more intel.
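
As a rough illustration of turning incident history into history-based probabilities, the sketch below assumes incidents arrive roughly independently at a steady rate (a Poisson-style assumption) and derives an annual probability from observed counts. The counts are made up.

```python
import math

# Illustrative sketch: turning incident history for a critical process into a
# rough annual probability of at least one incident, assuming incidents arrive
# independently at a steady rate (Poisson assumption). Counts are hypothetical.

incidents_per_year = [2, 0, 1, 3, 1]                       # observed incidents over 5 years
rate = sum(incidents_per_year) / len(incidents_per_year)   # average incidents per year

p_at_least_one = 1 - math.exp(-rate)                       # P(>=1 incident next year) under Poisson
print(f"Estimated rate: {rate:.2f}/year, P(at least one incident) = {p_at_least_one:.0%}")
```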

Speaking of probabilities, a mistake I see too often is security teams focusing on likelihood while normalizing or ignoring impact. Unless your IT shop has done a world-class job optimizing your data footprint, it’s not likely that these two are independent. The business does need to provide its input on impact, and that can be a key input as you do your business process mapping. If you have little impact data today, a half-day workshop with the right business executives at your next off-site can get you and your team a lot of insight on how to weigh impact, or how to map existing impact scales to assets.

In previous posts we’ve stressed the merits of shifting to more quantitative modeling. It’s easy to hide behind fuzzy data with qualitative models, which makes them a safety blanket when being grilled in front of your board. However, I am witnessing many boards starting to see through that, and pushing for risk reporting at parity with other business units. That parity is often anchored in dollars and cents. I know it’s not easy, but it does simplify the discussion around risk, what it means, how big it is, and how it compares with other risks and risk mitigation costs. Start small, but if you aren’t starting to move in this direction, you will soon be behind.
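
If you want a small, concrete first step toward dollars-and-cents reporting, a sketch like the one below can help. It runs a simple Monte Carlo simulation of annual loss for a single scenario; the frequency and severity figures are placeholders, not calibrated estimates.

```python
import random

# Illustrative sketch of a small quantitative step: simulate annual loss for one
# risk scenario in dollars. Frequency and severity parameters are placeholders.

random.seed(7)

def simulate_annual_loss(trials=50_000, p_incident=0.2, loss_low=100_000, loss_high=2_000_000):
    """Monte Carlo estimate of annual loss: an incident occurs with probability
    p_incident; if it does, severity is drawn from a triangular distribution
    between loss_low and loss_high (a crude stand-in for a calibrated curve)."""
    losses = []
    for _ in range(trials):
        if random.random() < p_incident:
            losses.append(random.triangular(loss_low, loss_high))
        else:
            losses.append(0.0)
    losses.sort()
    mean = sum(losses) / trials
    p95 = losses[int(0.95 * trials)]
    return mean, p95

mean_loss, p95_loss = simulate_annual_loss()
print(f"Expected annual loss: ${mean_loss:,.0f}; 95th percentile year: ${p95_loss:,.0f}")
```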

Finally, stress-test assumptions in your model and root out bias. Run simulations through the models you have built and used to see how they perform under certain circumstances. Specifically look at the impacts of major changes to a few variables. Stress-testing will highlight potential blind spots as well as ways to improve the model.
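
Mirroring the earlier Monte Carlo sketch, here is one way to stress-test assumptions: re-run the model while shifting one input at a time and see which assumptions move the result the most. The model and stress ranges are arbitrary; the exercise is the point.

```python
import random

# Illustrative stress-test sketch: vary one assumption at a time and watch how
# the expected annual loss moves. The model and ranges below are placeholders.

random.seed(7)

def expected_annual_loss(p_incident, loss_low, loss_high, trials=50_000):
    """Monte Carlo mean annual loss for a single scenario (triangular severity)."""
    total = 0.0
    for _ in range(trials):
        if random.random() < p_incident:
            total += random.triangular(loss_low, loss_high)
    return total / trials

baseline = expected_annual_loss(p_incident=0.2, loss_low=100_000, loss_high=2_000_000)

stresses = {
    "incident probability doubles": dict(p_incident=0.4, loss_low=100_000, loss_high=2_000_000),
    "worst-case loss grows 3x":     dict(p_incident=0.2, loss_low=100_000, loss_high=6_000_000),
    "incident probability halves":  dict(p_incident=0.1, loss_low=100_000, loss_high=2_000_000),
}

for label, params in stresses.items():
    change = (expected_annual_loss(**params) - baseline) / baseline
    print(f"{label}: expected annual loss changes by {change:+.0%}")
```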

Get soft data too

I can’t stress this enough: front-line employees’ understanding of security risk should not be underestimated. It’s not just that they follow the rules; they tend to have a sense of where the weaknesses are. Just as some of the best CEOs walk the floor of their companies to see what is really going on, so should the best CISOs (and their risk teams).

Gather intel on where the weak points are and what keeps people up at night (at least the business-line execs). While you are at it, use the opportunity to see how security can be more helpful, or where security is creating perceived barriers that you can remove. These win-win partnerships are invaluable.

I have to admit that I have seen mixed results in having security or risk liaisons in different business groups. I used to be a strong proponent, but have seen those liaisons drift toward the needs of the business line over time, i.e., making things easier, not more secure. The real bounty comes from incorporating risk management into the day-to-day jobs of all employees and bubbling it up (see this HBR highlight of Hydro One). Tight integration is fostered via education, tone-at-the-top, and incentives for appropriately owning and managing risk on the front lines.

Greenspan used to check on underwear sales at department stores as a way to see how well economic models were likely to perform, because underwear sales were the first thing to plummet when the economy was weak (clothing you can’t see!). Soft data is a vital way to validate your risk models…to get “out of the box” and into the real world.

Governance

I’m sure you’ve heard that you can have the best risk models and intel in the world, but without good risk governance it won’t go far. The inverse is true as well: even with immature risk models and intel, having good risk governance will go a long way. Risk models are just tools, and how you use them and the arsenal of other tools at your disposal is key to the ultimate objective: reducing risk to tolerable levels.

Put in place a risk operating model that can act on what we know. Existing risk assessment results, vulnerability or pen testing, and incidents all provide good data to point to areas that likely need improvement. The risk model can help optimize and prioritize, but the real key is real progress. I have been in so many organizations with world class risk governance at the heart of business operations, but almost non-existent in the back-office / IT. Get the foundational pieces of GRC in place, the right voices at the table to make priority decisions, and the right sponsorship to see it through. You can focus on improving the data these governance boards get over time. But move forward with governance structures, and you’ll make an impact.

Continuity Planning

A prominent CISO told me early in my career that the first two areas he gets right-sized when he walks into a company are vulnerability management and incident response. Though there may be qualified responses to that, there is a lot of wisdom there too. The bad guys are constantly looking for your vulnerabilities and when something inevitably occurs, you need to be ready.

Given the surprises and shocks to business that have occurred over the last couple of decades, business continuity is taking on a more prominent role, and never more than with this pandemic. And I hope it lasts (the business continuity focus….NOT the pandemic!). So aside from strengthening risk models and getting your risk management operating model in good shape, my final recommendation is to beef up your business continuity and incident response programs.

Let’s use COVID-19 as a lesson on how to do that. No one could have predicted we’d be in such a position. Even Bill Gates, warning about a pandemic for years, didn’t think we’d get to this point. But here we are. So what does that mean for the continuity plans that were half baked and got beaten up during this whole episode?

It means we need to focus these plans not on causes, but on effects. We can’t predict everything that can or will happen…there are an infinite number of possibilities in fact. But we can focus on the results, and what options we can put in place to secure those. Those results are generally losses. Loss of resources, loss of people, loss of facilities, loss of customers. If you have a sudden loss of 50% of your workforce (let’s say this virus was even more contagious, or, heaven forbid, deadlier), what do you do to survive?

Develop a survival plan that might include cross-training for your most critical services. Think about what services are critical, what losses could occur (regardless of how they occur), and what needs to be put in place to mitigate those potential losses. This kind of planning complements your risk models and program, unifying them into one integrated approach that increases resilience across the enterprise.
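
For illustration, an effect-oriented plan can be captured in a structure as simple as the sketch below, which maps critical services to loss effects and the mitigations on hand. The services, effects, and options are placeholders.

```python
# Illustrative sketch of effect-based continuity planning: plans are keyed to
# losses (people, facilities, suppliers), not to specific causes. The services,
# loss effects, and mitigation options below are placeholders.

continuity_plan = {
    "customer billing": {
        "loss of 50% of staff": ["cross-train order-management team", "prioritized task list for skeleton crew"],
        "loss of primary facility": ["remote-work runbook", "failover to secondary data center"],
    },
    "order fulfillment": {
        "loss of key supplier": ["qualified backup supplier", "30-day buffer inventory"],
        "loss of 50% of staff": ["temporary staffing agreement", "reduced service-level plan"],
    },
}

def survival_options(service, loss):
    """Look up pre-planned mitigations for a given critical service and loss effect."""
    return continuity_plan.get(service, {}).get(loss, ["no plan yet: flag for planning workshop"])

print(survival_options("customer billing", "loss of 50% of staff"))
```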

Conclusion

Risk models, though informative, have their limitations. I’m not ready to throw them out, but they need to be tempered with healthy skepticism and stress-testing. Security is a tough area, and, while silver bullets would be nice, they just don’t exist for this space. We need the people and processes along with the tools to do this right. Good models, validation, governance, continuity planning, coupled with a healthy dose of continuous measurement and improvement will put you in good stead with your customers and your Board.


Are your risk models making you a bad risk manager? (Part 1)

On my reading list for this COVID-19 summer has been a biography of Alan Greenspan. Whatever your thoughts on his role in the 2008 financial crisis, he has an impressive legacy and one of the most distinguished career histories of any economist. One interesting fact about him was his strong skepticism of models, despite the rise, during his lifetime, of econometrics and “big data.” It got me thinking about the use of risk models with our clients, and if my experience building models and risk programs fits within that skeptical viewpoint. This two-part blog entry explores that question…

What risk models do for you…And what they don’t do.

First, some context. I am talking about modeling risk to information assets so that the good guys get to information when they need it, and the bad guys can’t touch it. True risk managers will tell you that risk management is optimizing decisions in the unknown, and thus risk is risk regardless of source. I strongly agree, though there may be some nuances to looking at market risk, liquidity risk, and several others that don’t come into play with looking at risks in the digital world, and vice-versa. We won’t focus on those nuances here.

As with those market risks though, we’re still dealing with a lot of data from various sources (security event logs, threat intel, incident history…to name a few). Models help us distill this data into information. They help us look for potential patterns and tell whether we are exceeding or missing benchmarks.

That diverse data is powerful input to your decision making. Likewise, it gives us a framework to talk about a complex topic. Information security in many ways is like insurance against an invisible potential. Our brains have a hard time wrapping around that. Throw in the complexity of today’s computing systems and abstraction along with an executive’s limited attention and you create a recipe for blank stares. With a model that converts risk concepts into something that these executives understand or are familiar with — money being a good example — you have a fighting chance of getting your point across.

Like any tool, risk models can be misused. As I reflect on my experiences, or even just open a page from a history book, I can see this happens more often than we might care to admit. For one, in our desire to distill and report, models can oversimplify. A red/yellow/green rating or a 1-5 scale can hide key concepts and leave the reader challenged to make accurate comparisons.

Models can also have variance and be biased. Humans build these models, and we provide the weights and inner workings that get abstracted away. Behavioral psychologists will quickly point out that it is almost impossible not to embed some bias into how we create these models. At small scale, bias may not be wildly important, but it can quickly compound when you string assumptions and biases together, leading to severely skewed results.

Another challenge with risk models is that they are often based on untested assumptions. Car insurers have the advantage of using millions of data points (i.e. drivers, historical accidents, etc.) to improve their predictions, but doing the same in events that are less common (like a breach to your HR system) leaves us, to some extent, guessing.
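
One way to keep that guessing honest is to make the uncertainty explicit. The sketch below applies a simple Beta-Binomial update to a handful of years of history, assuming (hypothetically) that one year in six contained a relevant breach; the prior and counts are illustrative only.

```python
# Illustrative sketch: with rare events (e.g., a breach of the HR system) we may
# have only a few years of history. A Beta-Binomial update makes the resulting
# uncertainty explicit instead of hiding it. The prior and counts are made up.

prior_alpha, prior_beta = 1.0, 9.0   # weak prior belief: roughly a 10% annual breach probability
years_observed = 6                   # six years of history (hypothetical)
years_with_breach = 1                # one year contained a relevant breach (hypothetical)

post_alpha = prior_alpha + years_with_breach
post_beta = prior_beta + (years_observed - years_with_breach)

posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"Posterior mean annual breach probability: {posterior_mean:.1%} "
      f"(based on {years_observed} years of data plus a weak prior)")
```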

So, are your risk models making you a bad risk manager?

Clearly not. Like Alan Greenspan, we should be skeptical of them while recognizing that their advantages are clear. Part two of this series covers three keys to minimizing the risk in risk models: validation, governance and continuity planning.


From Cloud Reluctant to Cloud Secure

To say that cloud computing is hot is old news. A majority of organizations have either migrated or are considering migrating their core computational and storage workloads to the public cloud. Gartner predicted that the cloud market would grow by 17.5% in 2019 and exponentially over the following three years.

Some have yet to migrate or are thinking about migrating but are hesitant to make the decision. Their major concern lies in the fear of the “unknown” and a perception that the Cloud is generally not secure. Let’s address these concerns by discussing three key points in an order that will help shape your decision:

  1. Cloud Responsibility
  2. Cloud Breaches
  3. Cloud Adoption provides Resilience

Cloud Responsibility

Since AWS has been the leader in the industry, let’s take a moment to understand their Shared Responsibility Model for cloud security. The model outlines roles and responsibilities in a way that is easy to follow and implement. While AWS is responsible for “Security of the Cloud,” such as protecting the infrastructure, it is the customer’s job to make sure that “Security in the Cloud” is achieved by managing data, classifying assets, and applying appropriate rules and permissions for configurations of the application layer and the software-defined network topology. AWS’s infrastructure security measures simplify your life – removing the “undifferentiated heavy lifting” aspects of security, which are common to everyone and shouldn’t be part of your company’s “secret sauce.” This doesn’t, however, absolve you from your responsibility for thinking through, configuring, and implementing core security practices appropriate for your implementations.

Cloud Breaches

Let’s look at some of the high-profile cloud breaches that have occurred as of late. The recent Orvibo breach, where passwords and password-reset information for home security systems were left out in the open, comes to mind. Because of the interoperability of the cloud, with one switch you can leave a great deal of your infrastructure open to the public. A third-party vendor working for Verizon committed a configuration blunder on an AWS S3 bucket, which exposed names, addresses, account details, and PINs of millions of US-based Verizon customers.

But was this really a cloud shortcoming? No, it was the result of a weak third-party program. Other cases at Target, Home Depot, and Apple iCloud also received a lot of media scrutiny. However, most of these breaches were the result of human error and/or weaknesses in process, not shortcomings in the cloud. For example, in the case of Target and Home Depot, hackers were only able to get hold of personal information by bypassing the cloud infrastructure via third-party vendors; the data in the cloud itself remained secure. In a nutshell, we need to understand that security issues outside the cloud (like those with third-party vendors) are similar to those within the cloud and include well-known challenges like third-party risk, data governance, etc.
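
As a concrete, hypothetical example of the customer-side responsibility these incidents highlight, the sketch below flags S3 buckets that don’t fully block public access. It assumes boto3 is installed and AWS credentials with the necessary permissions are configured; error handling is deliberately simplified.

```python
import boto3
from botocore.exceptions import ClientError

# Illustrative sketch: flag S3 buckets without a full public access block, the
# kind of customer-side misconfiguration behind several well-known exposures.
# Assumes boto3 is installed and AWS credentials/permissions are configured;
# error handling is intentionally simplified.

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError:
        # No public access block configured at all for this bucket.
        fully_blocked = False
    if not fully_blocked:
        print(f"Review needed: bucket '{name}' does not fully block public access")
```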

Cloud Adoption Provides Resilience

Cloud adoption is one of the most significant technological shifts that your organization will face, but there must be a reason why a majority of the most innovative companies are going down that path. They treat this choice not as an option, but a mandate. Minimum Viable Cloud (MVC) is a great starting point for your first production cloud as it treats the whole platform as a piece of software. Most of the big CSPs (Cloud Service Providers) provide this utility through automation programming.

Hence, the new mantra for quick and scalable adoption is “infrastructure is deployed as code”. It means provisioning and managing IT infrastructure through source code rather than through standard operating procedures and manual processes. What’s the benefit of that? Well, with the ever-improving toolset you are now able to manage configurations more quickly and deploy infrastructure components efficiently, consistently, and in a repeatable fashion. This approach helps architect, build, and operate large-scale systems that are resilient in nature, while taking advantage of scalability, flexibility, and increased agility.
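
To make “infrastructure as code” concrete, here is a minimal, hypothetical sketch that builds a CloudFormation template in source code for an encrypted, non-public S3 bucket, so the same hardened configuration can be versioned, reviewed, and redeployed consistently. The bucket name and logical resource name are placeholders, and actual deployment would go through your CSP’s tooling and change process.

```python
import json

# Illustrative "infrastructure as code" sketch: the environment is described in
# source code (here, a CloudFormation template built in Python) so the same
# hardened configuration can be reviewed, versioned, and redeployed consistently.
# The bucket name is a placeholder; deployment would go through your CSP's tooling.

def secure_bucket_template(bucket_name: str) -> dict:
    """Build a minimal CloudFormation template for an encrypted, non-public S3 bucket."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "PublicAccessBlockConfiguration": {
                        "BlockPublicAcls": True,
                        "BlockPublicPolicy": True,
                        "IgnorePublicAcls": True,
                        "RestrictPublicBuckets": True,
                    },
                    "BucketEncryption": {
                        "ServerSideEncryptionConfiguration": [
                            {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
                        ]
                    },
                },
            }
        },
    }

print(json.dumps(secure_bucket_template("example-org-sensitive-data"), indent=2))
```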

Companies like Netflix have pioneered this approach; they release thousands of lines of code a day. Though you may not be ready for that pace, plan for change and for how to learn from errors and failure. The cloud helps facilitate this, but developing good processes to enable these methods is paramount. A dedicated cloud security program keeps these early implementations from descending into chaos.

An Effective Cloud Security Program

Instead of relying purely on conventional security methods, cloud security programs need to be developed so that they cater to (a) business needs for cloud adoption, (b) shared-responsibility models, and (c) compliance requirements. A Cloud Security Operating Model can achieve this while demonstrating a way to optimize the organization’s current security processes for cloud adoption, and while helping them work together to secure the benefits of the cloud. These models typically include elements like:

A) Cloud Security Strategy

  • Why do we need a new Cloud Security Program?
  • What are the Key Goals?
  • Understanding Cloud Ecosystems & the Regulatory Landscape

B) Cloud Security Governance

  • Strategic Alignment
  • Key Stakeholders Identification
  • Resource Allocation
  • Metrics

C) Key Services and Processes

  • Cloud Risk Management
  • Cloud Controls Management
  • Data Governance
  • Training and Cloud Awareness

Conclusion

Fears about cloud adoption arise from a lack of education and understanding in the user environment, not from shortcomings in cloud services. In fact, cloud adoption can help you focus your investments and resourcing efforts more on the application layer, which needs the proper knowledge and setup to reach the desired level of security. With the cloud providing more agility, elasticity, and reliability for your services, your security capabilities can now be more innovative and adaptable to change, giving you more resilience in the long term. Feel free to contact us to learn more about how our cybersecurity experts can assist in easing your migration to the cloud by designing a comprehensive cloud security program.


Customer Experience and Growth in Times of Service Disruption

Maya Angelou once said: “I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.” It’s safe to assume your organization had ambitious goals for 2020, introducing new products, improving market share, increasing customer satisfaction, improving your Net Promoter Score (NPS), etc. Whatever it was, you can table it for now.


Criminal Actions and Motivations, the ROI of Cybercrime

Symantec just released its 2019 Internet Security Threat Report (ISTR). It is largely a comparison of malware trends and cybercriminal activity over the last one to three years. A quick look into the data reveals that many of the report’s findings are aimed at the end user or environments with a small IT footprint. Despite this, there are valuable insights that can be taken from it about enterprise IT governance and IT risk modeling. This two-part series talks about the economic motivations of cybercriminals and how their actions change as a result. It then talks about how these should influence your IT risk modeling efforts.

The ISTR focuses on two types of cybercrime, that done by the average cybercriminal trying to monetize their efforts by simple means, and that done by the “targeted attack group” (economic and political espionage actors). The two have different motivations and levels of discipline, but they are largely working from the same toolbox. The first group often seems faceless and inscrutable. Representations that model the cybercriminal attack as a random event characterized only by the rate of attack can be used as a first approximation, but more insight can be derived by seeing them as economic actors. They attempt to obtain the most valuable cyber resources possible with the least possible investment. While the expected return varies from criminal to criminal, each has a minimum expected ROI required before they undertake a given attack. They are therefore not random forces of destruction, but, instead, are tractably predictable and influenceable. Factoring their reactive behavior into your allocation decisions and IT governance strategies makes those strategies and decisions even more effective.

Let’s look at three examples, covered in the ISTR report, of cybercriminals acting as economic actors. The first case is the correlation between the frequency of cryptojacking attacks and the price of monero, a common cryptocurrency that cryptojackers mine. As the value of monero fell by a factor of seven over the course of a year, the rate of cryptojacking events fell by a factor of two (1). This is a clear case of cybercriminals having a distribution of acceptable ROIs for launching attacks: as incentives decreased, a smaller percentage of criminals were willing to launch this sort of attack (the number of monero mined per hour was largely constant). Complicating this picture is the fact that once the cryptojacking infrastructure is developed, there is less of a cost to launch additional attacks. Nevertheless, a simple economic model can be used to make sense of the cybercriminal’s strategy.

As a second example, cybercriminals, after cutting their teeth on ransomware for consumer computers, moved on to enterprise ransomware. The barrier to entry for consumer ransomware is lower and it takes less planning, so it makes sense that this was the first place it became a threat. Once the tools were developed, however, they could be used against the enterprise in coordinated attacks. The inelastic pricing of enterprise ransom demands drove the price up and hence increased the motivation for enterprise malware attacks. This is seen in the data: even though general ransomware decreased by 20%, enterprise ransomware frequency increased by 12% (1).

A third example of the economics of cybercriminals is their adoption of PowerShell as an attack vector. Recently, browsers, operating systems, and anti-malware software have improved to the point that the cost of a brute-force “through the front door” attack has become prohibitive relative to its yield. The response is to “live off the land” with native OS utilities instead of breaking down the front door. This is seen in a 1,000% increase in the use of PowerShell in attacks (1). For reference, the standard vector is an Office document with a macro that calls PowerShell to load the malware payload (1). Macro-limiting strategies may be useful in some cases.

In each of these three examples, a simple economic model could be developed to understand how cybercrime attacks rose or fell based on optimizing the ROI to the cybercriminal.
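
A toy version of such a model is sketched below: each attacker has a minimum acceptable ROI, so as the expected payoff falls (for example, when the price of a mined cryptocurrency drops), the share of attackers for whom the attack still clears their threshold falls with it. Every number in the sketch is invented for illustration.

```python
import random

# Toy sketch of the "cybercriminals as economic actors" model: each attacker has
# a minimum acceptable ROI drawn from some distribution, and only attacks whose
# expected return clears that threshold get launched. All numbers are invented.

random.seed(42)
attacker_roi_thresholds = [random.uniform(1.5, 10.0) for _ in range(10_000)]  # required return multiple

def attack_rate(expected_payoff, attack_cost=1.0):
    """Fraction of attackers for whom the expected ROI exceeds their personal threshold."""
    roi = expected_payoff / attack_cost
    return sum(1 for t in attacker_roi_thresholds if roi >= t) / len(attacker_roi_thresholds)

# As the payoff (e.g., the price of a mined cryptocurrency) falls, fewer
# attackers find the attack worthwhile, so the observed attack rate drops.
for payoff in (10.0, 5.0, 2.0):
    print(f"Expected payoff {payoff:.0f}x cost -> {attack_rate(payoff):.0%} of attackers launch")
```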

Managing Reputational Risk in an Era of the Unthinkable: Brand Implications of Major Breaches

We live in an era of sometimes unfathomable risk. From the November 2015 attacks in Paris that left 130 victims dead to the deep breach at Equifax that exposed 145 million consumers’ most intimate data, people, places, and companies are dealing with the unexpected every day, and one of the primary repercussions is to the integrity of their brands.

It’s common to assume that many of these events have a short-lived impact. In the aftermath of the October 2017 massacre in Las Vegas, stocks dipped and investment banks estimated up to 6 months of reduced demand, but tourism was expected to rebound to levels seen before the event within a year.

The repercussions of such events live on far longer in the ongoing calculus of risk, expense, and operations that they so strongly influence. Risk managers for the Las Vegas Police, MGM International, and other hospitality companies have to balance the costs of security enhancements with the broader expense and risk landscape of their business. No amount of spending can reduce risk to zero, and too much spending can threaten the integrity of the bottom line.

All of these high profile security events have not only operational implications but also reputational ones – bottom line impacts on the brand and the ways the brand influences revenue, market valuations, credit worthiness, regulation, and operations themselves. Reputational risk surfaces in surprisingly diverse ways and one of the major ways risk managers can benefit the bottom line is by demonstrating the organization’s flexibility and resilience in the face of brand damage.

This is a comprehensive look at reputational risk as an enterprise-wide concern requiring an enterprise risk management approach. Reputational risk goes far beyond considerations of physical or cyber-security. Let’s talk about all the ways brand damage is likely to materialize.

The first portion will focus on understanding and mitigating the first-order, bottom line impacts of reputational risk – revenue and valuation. In the latter half we’ll focus on the second-order but equally important impacts on credit worthiness and operating costs.

Revenue & Reputation: Securing the Bottom Line When Bad Stuff Happens.

Often the easiest cost to imagine is loss of customers during a brand impacting event. However, some risk managers find it difficult to quantify impacts on future revenue in the aftermath of these incidents. Consider the horrific attacks in Paris in 2015 – Tourism rebounded quickly to pre-attack levels but the attacks undoubtedly reduced the share of international travelers who would otherwise have come to the city. While this impact can be difficult to quantify, it’s not impossible. By focusing on the primary stakeholder for this cost (in this case tourism consumers) we can go a long way toward modeling the impact of brand damaging events. Maintaining information not only on the sentiments of your customers but acquiring data on the average consumer in your sector is key for calibrating risk models. Further, true brand recovery in the aftermath of a high profile event can only be evaluated when you know what potential as well as loyal customers are looking for or are concerned about.

Reputation & the Markets: The Real Risks of Devaluation

After a brand-impacting incident, companies almost immediately see effects on their stock. Prices plummet and there are subsequent losses stemming from these initial dips. Given the herd mentality of investors, it is critical to reassure savvy shareholders that the risks to the company are being well managed. Here a strong enterprise risk management approach can provide executives with the right information at the right time to convey to the market that the root causes are known and being addressed. A robust culture of risk management can provide hard evidence of the actions the company is taking to ameliorate the costs of the incident in question. In some cases there is little that can be done to prevent an immediate reputational hit, but demonstrating an awareness of all the ways the costs of the incident have and will materialize goes a long way toward demonstrating resilience to investors.

Reputation and the Cost of Money

In some cases lenders may determine that reputational damage has impacted revenues, operational costs or the overall financial health of an organization so much that the costs to borrow increase. For organizations with low debt levels this financial risk can be managed to drastically reduce these costs. For others it may become extremely costly. No matter where your organization sits on the debt spectrum, resilience is crucial in all areas of the business so you can strengthen lenders’ view as they re-evaluate the health of your business. An enterprise-wide response is ideal to mitigate the often expensive effects of increased borrowing costs. Make sure you’ve built that risk resilience into cash flow, operational costs, and the impact of big events on overall market value.

Reputation and Operations

Like revenue, the impact to operational costs is easy to see in the short term. Increased spending on response efforts, outside counsel, security experts, and more are easy to quantify. However, long after the brand impacting event organizations continue to feel further effects on operations.

Three common areas with ongoing operational cost implications are risk mitigation, compliance spending and the cost of personnel. Often firms respond to brand damaging incidents by throwing money at the problem. Stakeholders and executives get comfort from the immediate spending, but that spending is rarely commensurate with the risks involved. Rather than spending boatloads of money on beefed up compliance and audit or unfettered cybersecurity spending, organizations need to ensure that new spending is matched to the amount of risk reduction needed.

The costs of increased turnover or of retaining employees after a big event are harder to quantify. Just as reputational damage affects customers’ views of an organization, employees may require more compensation or be easier to lure away if your brand suffers. Focusing on employee sentiment may seem unnecessary in the immediate aftermath of a brand-damaging event, but it may save you in turnover and talent-acquisition costs down the road.

Reputation and Regulation

Depending on the type of incident, regulators might have cause to step in. While fines and legal fees may be unavoidable, a strong risk management program can be critical to avoiding more onerous regulatory oversight. The right kind of program goes well beyond demonstrating large, active programs in compliance or audit. True risk management means that organizations demonstrate, on an ongoing basis, how they manage risks effectively, including how they can detect and respond to failures. Furthermore, showing regulators how your organization protects customers through enhanced resiliency efforts can also give the regulators good cause for not taking their most restrictive actions.

Big Reputational Risk Means Big Action

Given the diverse ways big reputational risks can drive up costs, organizations should take a broad approach when managing such risks. Since no part of the enterprise is safe from brand damage, risk management against this damage needs to be undertaken at enterprise scale. Companies need to look broadly at the value of preventative risk mitigation before a major incident occurs, and consider investing in resiliency to limit the eventual costs to the brand and organization of such incidents. In today’s high risk environment, risk managers need to provide executives with prospective information about the enterprise-wide risks they face and then dive in fully to help with both the response to extreme incidents, and with reassuring all those with a stake in recovering from these traumatic events.
