
Combining an organization-applicable risk framework with an all-encompassing control set and an information security continuous monitoring (ISCM) methodology enables a holistic approach to compliance and risk management: the controls span a wide array of areas, with a high level of detail and guidance on tailoring.1 An enterprise could apply this approach by assessing the organization, integrating the risk management framework and establishing a security baseline based on the security control standards. When the controls are continually monitored, assessed and addressed, the organization has taken a big step toward reducing its security risk.

There is an ongoing movement toward adopting ISCM at the federal level, as well as within the US Department of Defense (DoD), due to US Federal Information Security Management Act (FISMA) compliance requirements. Though the compliance issues are federal in nature, there are lessons to be learned and technology improvements that can be implemented in any industry, such as finance, utilities and health care. In 2013, the US Department of Homeland Security (DHS) presented all federal agencies with a blanket purchase agreement worth up to US $6 billion for reduced-cost continuous monitoring software.2 The US Office of Management and Budget (OMB) has offered guidance on how continuous monitoring will be able to replace the current three-year accreditation cycles.3

ISCM holds great promise for cybersecurity and risk management, but immaturities and challenges remain in the methodologies and software. Three areas in particular merit examination: manual vs. automated logging, the technology currently available, and control sampling frequency.

Background Information on ISCM

The primary literature studied for this research on ISCM was developed by the US National Institute of Standards and Technology (NIST). “NIST is responsible for developing information security standards and guidelines, including minimum requirements for federal information systems.”4 NIST provides detailed guidance on implementing a risk management framework.5 It also provides a detailed and broad control set for federal agencies to adopt—though any organization can adopt the controls as standards. A combination of the risk management framework, control set and the continuous monitoring implementation guidance can be used to set up a federally accepted continuous monitoring plan. Three key NIST Special Publications are described in figure 1.


One gap in the research on continuous monitoring is that the vast majority of studies to date have been conducted in other domains, such as audit, energy, medicine and sensor networks. This opens the possibility of transferring a technology or algorithm from a disparate field. For instance, building continuous auditing and decision processes into the early design stages of emergency response processes6 correlates strongly with designing continuous monitoring into a system from the start. Modeling these other areas could enable ISCM to leap ahead.

Evaluation of Continuous Monitoring Risk Management Compliance Framework

Continuous monitoring is one of six steps in the Risk Management Framework (RMF).7 When selecting a framework, it is critical to choose one that effectively supports operations as well as the controls that the organization uses for compliance.8 The selection can be viewed across four areas: security, service, operations and governance. Information assurance (IA) cuts across all of these areas, because the aim is to ensure that the mission can be completed, and all four areas play a role in mission effectiveness. Among the more prominent guidance on risk management is NIST SP 800-37, combined with NIST SP 800-53 and NIST SP 800-137. Together, these documents thoroughly address the IA aspects of risk management and compliance, and they do so in a continuous fashion.

Risk Management Framework Reference

NIST SP 800-37 provides guidance for applying a risk management program to an organization. As the types of sophisticated, well-organized attacks have increased, the potential for higher levels of damage to national security has increased as well.9 For organizations to understand their chances of becoming compromised and the damage done from that compromise, a system of continuous assessment of vulnerabilities, impacts, mitigations and residual risk acceptance should be adopted. Without a comprehensive system in place, an organization is essentially leaving itself open to chance. SP 800-37 provides for that system and a means of implementing it, but it is up to the organization to tailor and implement it effectively.

The process involves six steps:

  • Categorize information systems
  • Select security controls
  • Implement security controls
  • Assess security controls
  • Authorize information systems
  • Monitor security controls

SP 800-37 revolves heavily around control assessment to determine the level of risk an organization is facing. The level of compliance or completeness with the established security controls can give leadership an idea of the overall risk level of the organization, as well as provide guidance on what areas should be improved through policy, technology or personnel.

Security Controls Reference

Critical to the risk management framework are the controls that fit into it. SP 800-53 uses a multitiered approach to risk management through control compliance. This approach includes security control structures, a security control baseline and security control designations.10 SP 800-53 works hand in hand with SP 800-37 in that the controls are overlaid on top of the risk management framework for an organization. The controls are selected based on the criticality and sensitivity of information owned by the system and are applied in a suggested order, with higher-priority controls first. The controls include identification and authentication, contingency planning, incident response, maintenance, risk assessment and media protection, among many others.

Information Security Continuous Monitoring Reference

Continuous monitoring can be an ambiguous term, as it means different things to different professions. NIST SP 800-137 sets forth a standard to follow when applying the principle within the risk management framework, utilizing the NIST control set. The primary process for implementing ISCM is to:11

  • Define the ISCM strategy
  • Establish an ISCM program
  • Implement an ISCM program
  • Analyze data and report findings
  • Respond to findings
  • Review and update the monitoring program and strategy

Factored into this is the use of manual and automated checks to provide continuous updates and feedback to the system as a whole.
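
To make these steps more concrete, the following minimal sketch (in Python) shows one way a monitoring cycle could sweep a mixed set of automated and manually assessed controls and report what is overdue or non-compliant. The control identifiers, intervals and field names are illustrative assumptions, not part of the NIST guidance.

  from dataclasses import dataclass
  from datetime import datetime, timedelta

  @dataclass
  class Control:
      control_id: str          # e.g., "AC-2" (hypothetical identifier)
      automated: bool          # True if a tool can collect the status
      sample_interval: timedelta
      last_sampled: datetime
      compliant: bool

  def due_for_sampling(control, now):
      # A control is due when its sampling interval has elapsed.
      return now - control.last_sampled >= control.sample_interval

  def monitoring_cycle(controls, now):
      # One pass of the analyze-and-report step: flag overdue and
      # non-compliant controls so they can be responded to.
      return {
          "overdue": [c.control_id for c in controls if due_for_sampling(c, now)],
          "non_compliant": [c.control_id for c in controls if not c.compliant],
          "manual_backlog": [c.control_id for c in controls
                             if not c.automated and due_for_sampling(c, now)],
      }

  controls = [
      Control("AC-2", automated=True, sample_interval=timedelta(days=1),
              last_sampled=datetime(2015, 1, 1), compliant=True),
      Control("CP-9", automated=False, sample_interval=timedelta(days=30),
              last_sampled=datetime(2014, 11, 1), compliant=False),
  ]
  print(monitoring_cycle(controls, datetime(2015, 1, 5)))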

Though these three NIST Special Publications form a solid foundation for continuous security monitoring, risk management and compliance, some areas need to be addressed and reviewed for effectiveness. Automated technology drives the push for continuous monitoring and has been the focus of ISCM efforts;12 however, only so many controls can be tracked via an automated process, which leaves a potential gap in the control set for activities that are performed manually. There is also the matter of available technology: one of the largest federal ISCM projects has issued a suite of automated tools to provide this function, and the question is how many controls those tools actually cover. Finally, there is the matter of control sampling frequency, for which NIST SP 800-137 offers guidance, but not specifics.

The Advantages and Disadvantages of the Model:  Manual vs. Automated Processes

One of the advantages of the ISCM model is that it captures aggregate data from already-existing systems in an automated fashion. This automated process provides real-time, up-to-the-minute information for leadership to collect and review. One of the disadvantages of the model is that not all activities take place in an automated or networked fashion. It may not be easy to capture and log automatically, for example, when acquisition planning took place or that a policy was updated. In addition, there is no substantial body of federal guidance on manual logging, even though NIST SP 800-137 calls out manual checks and procedures as needing to meet the same requirements as automated checks.

One potential solution would be to provide a manual logging mechanism for actions completed. This could be a logging interface through which someone records that a server backup has finished or that a security sweep of a remote server room has been performed. Sign-in sheets for access to controlled areas could also be automated, perhaps by signing in on a tablet that logs times and names and identifies unusual patterns of behavior, such as entry at a late hour that is against the norm. This review of the advantages and disadvantages of manual vs. automated solutions can be complemented by a survey of current continuous monitoring solutions.
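
As an illustration only, such a manual logging mechanism could be as simple as the following Python sketch, which records who performed an activity and when, and flags entries outside assumed business hours for review. The names, hours and fields are hypothetical.

  from datetime import datetime

  # Hypothetical manual-activity log: each entry records who performed a
  # manual control activity (e.g., server-room access, backup completion)
  # and when, so it can feed the same data store as automated checks.
  log = []

  BUSINESS_HOURS = range(7, 19)   # 07:00-18:59 counts as normal (assumed policy)

  def record_activity(person, activity, timestamp):
      entry = {
          "person": person,
          "activity": activity,
          "timestamp": timestamp,
          # Flag entries outside the assumed business hours for review.
          "unusual": timestamp.hour not in BUSINESS_HOURS,
      }
      log.append(entry)
      return entry

  record_activity("j.smith", "server room sign-in", datetime(2015, 2, 3, 14, 5))
  record_activity("j.smith", "server room sign-in", datetime(2015, 2, 4, 23, 40))

  for entry in log:
      if entry["unusual"]:
          print("Review:", entry["person"], entry["activity"], entry["timestamp"])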

Comparison of Continuous Monitoring Software Solutions

Guidance from the OMB states that, “The continuous monitoring phase must include monitoring all management, operational, and technical controls implemented within the information system and environment in which the system operates including controls over physical access to systems and information.”13 In this regard, a table was created that lists all the DHS applications that are being offered to federal systems, as noted previously.14 The software was reviewed online and categorized against the NIST control category and control type (figure 2).


After the data were collected and reviewed, a comparison table was created to show how many control types were covered and how many were not. From these data, a high-level estimate was made of how completely the currently offered automated solutions cover the control set.

Continuous Monitoring Software Analysis

Of the 21 control families, eight are covered by the DHS continuous monitoring software offerings, and numerous specific controls under the covered control types remain unaddressed. From a very high-level view, only 38 percent of control types are addressed by the software offerings. This leaves room for future improvement. There are software solutions not on this list that cover some of the control categories. In addition, there is currently no system that integrates the data feeds from each of these individual software packages.
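
The coverage figure cited above follows directly from the counts; a few lines of Python make the arithmetic explicit.

  # Rough coverage estimate described in the text: 8 of the 21 NIST
  # control families are touched by the DHS software offerings.
  families_total = 21
  families_covered = 8
  coverage = families_covered / families_total
  print(f"Control-family coverage: {coverage:.0%}")   # roughly 38%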

Frequency of Control Assessment

Sampling frequency factors that should be taken into consideration are risk level, changes in the control item (often intermittent), and whether the control is in an open or incomplete state.15 Risk level is how much impact there would be if a vulnerability related to the control were exploited. The thresholds and timing have to be set by the organization’s leadership and by the overarching governing agency.

A public web server may have a higher risk level than a file server located securely within the enclave; the file server is less likely to be attacked, and there would be less impact if it were taken offline. Public servers may therefore be sampled more frequently. The sensitivity of the data must be taken into consideration as well: if the file server contains US Social Security numbers, it could require a higher sampling frequency than the public web server.

Certain controls, such as reauthorizing user access annually, may need to be sampled only twice a year for a particular program if that process occurs only once a year. It would be a waste of resources, computing power and storage to sample that control every minute, day or week. The sampling spectrum for controls most likely ranges from every second at one extreme to annually at the other. Developing a road map for an organization, or a standard best-practices timeline, would save time and energy. It would also facilitate buy-in from the user community: if users are asked to report something more frequently than they believe necessary, the whole concept of continuous monitoring could gain a bad reputation in the organization.
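
One way to begin such a road map is a simple lookup table from risk level and data sensitivity to a sampling interval, as in the Python sketch below. The levels and intervals are illustrative assumptions, not recommendations; the actual thresholds belong to the organization’s leadership and the governing agency.

  from datetime import timedelta

  # Hypothetical mapping from (risk level, data sensitivity) to a sampling
  # interval; sensitivity is weighted so that an internal file server
  # holding Social Security numbers is sampled more often than a public
  # web server holding no sensitive data.
  SAMPLING_POLICY = {
      ("high", "high"): timedelta(hours=1),
      ("low", "high"): timedelta(hours=12),
      ("high", "low"): timedelta(days=1),
      ("low", "low"): timedelta(days=180),   # e.g., annual reauthorization sampled twice a year
  }

  def sampling_interval(risk_level, data_sensitivity):
      return SAMPLING_POLICY[(risk_level, data_sensitivity)]

  print(sampling_interval("high", "low"))   # public web server, no sensitive data
  print(sampling_interval("low", "high"))   # internal file server with Social Security numbers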

Conclusion

ISCM has a major positive impact on improving risk management and compliance across many industries and bodies, including the US federal government, the DoD, and commercial and financial organizations. The technology available today goes a long way toward improving security, though temperance should be used when conveying what problems it solves, as there are some glaring holes in what is currently available. Future research could include looking for a solution to fill the gaps in control coverage, such as a manual logging mechanism to feed workflow activities into an automated system for aggregation. Establishing best practices for control sampling frequency provides the necessary timing for that manual logging. One final proposed change to the model would be to connect both the automated and manual continuous monitoring feeds to a single dashboard for managing overall risk. Working from this model would show organizations which areas are being continuously monitored and which areas still need to be tracked the traditional way. Though the promise of ISCM is great, there are many challenges to overcome before it is fully implemented. The only way to overcome those challenges is to get started on implementing ISCM and to share the lessons learned with the cybersecurity community.

Endnotes

1 National Institute of Standards and Technology, Special Publication 800-53, “Security and Privacy Controls for Federal Information Systems and Organizations,” USA, 2013, http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf
2 Bennett, C.; “With $6 Billion Continuous Monitoring Contract, DHS Takes ‘Next Leap’ in Cybersecurity,” Fedscoop, 2013, http://fedscoop.com/with-6-billion-continuous-monitoring-contract-dhs-takes-next-leapin-cybersecurity/
3 Zients, J. D.; “Reporting Instructions for the Federal Information Security Management Act and Agency Privacy Management,” Office of Management and Budget, 2012, www.whitehouse.gov/sites/default/files/omb/memoranda/2012/m-12-20.pdf
4 National Institute of Standards and Technology, Special Publication 800-137, “Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations,” USA, 2011, p. 3, http://csrc.nist.gov/publications/nistpubs/800-137/SP800-137-Final.pdf
5 National Institute of Standards and Technology, Special Publication 800-37, “Guide for Applying the Risk Management Framework to Federal Information Systems,” USA, 2010, http://csrc.nist.gov/publications/nistpubs/800-37-rev1/sp800-37-rev1-final.pdf
6 Chumer, M.; R. Hiltz; R. Klashner; M. Turoff; “Assuring Homeland Security: Continuous Monitoring, Control & Assurance of Emergency Preparedness,” Journal of Information Technology Theory and Application, 2004, vol. 6(3), p. 1-24, http://search.proquest.com.library.capella.edu/docview/200008540?accountid=27965
7 Op cit, NIST 2010
8 Schlarman, S.; “Selecting an IT Control Framework,” Information Systems Security, 2007, 16(3), p. 147-151
9 Op cit, NIST 2010
10 Op cit, NIST 2013
11 Op cit, NIST 2011
12 US Department of Homeland Security, “Continuous Asset Evaluation, Situational Awareness, and Risk Scoring Reference Architecture Report (CAESARS),” Federal Network Security Branch, 2010, https://www.dhs.gov/continuous-asset-evaluation-situational-awareness-and-risk-scoring-reference-architecture-report
13 Op cit, Zients, p. 11
14 US Department of Homeland Security, “BPA Awardees and Tool Suites,” Federal Times, 2013, http://apps.federaltimes.com/projects/files/bpa_awardees.pdf
15 Op cit, NIST 2011

Bill Hargenrader, CISM, CEH, CISSP, is a senior lead technologist at Booz Allen Hamilton, where he is developing a next-generation cybersecurity workflow management software solution. He is working on his doctorate degree in information technology, focusing on the intersection of cybersecurity and innovation.


Page 2

  ISACA Journal Volume 1 Podcast:  Microwave Software

Let me tell you about my microwave. When I bought it, it was called a microwave oven and I was going to roast turkeys in it in half an hour. I am sure it was white then, but it has turned a pale, sickly yellow. I never did cook a turkey in it and all I ever use it for now is to defrost sauces, reheat coffee and nuke the ice cream so it is soft enough to scoop. Even though it is more than 20 years old, it still works and it does what I need it to do, so there is no reason to buy another with a lot of features in which I have no interest.

I am certain that the data centers in every organization older than 20 years have applications running in them that are just like my microwave. They are old software serving a limited purpose, often for a limited number of business functions (or for just one). They work; they do what their users want them to do, thus there is no reason to buy a new system with a lot of features in which those users have no interest. Ominously, they are indicative of the reason that the problems of cybersecurity will not be solved any time soon.

Software, Old and New

As I was writing this article, a news report announced the discovery of a flaw in a widely used software product called Bash. It is freeware that is incorporated into 70 percent of the machines that connect to the Internet. Created in 1987, the software has been maintained by a volunteer, who evidently introduced the flaw in 1992. According to the report, the bug, known as Shellshock, can be used to take over entire devices, “potentially including Macintosh computers and smartphones that use the Android operating system.”1 Ubiquitous software with a flaw undetected for 22 years! If ever there was microwave software, this is it.

Corporations and government agencies have accumulated their application portfolios over a period of years. Many still have programs written in COBOL, running on mainframe computers and written when most of their employees were in grade school. Others modernized their systems in anticipation of the new millennium, now 15 years behind us. In many companies, applications exist because they served a predecessor corporation that has long since been acquired and absorbed, but which lives on in ancient software. Each of these applications operates atop an infrastructure, often shared with other programs. They each get data from somewhere and send results somewhere else. If not well controlled, they expose those data to theft and misuse.

It is my experience that very few organizations know how all their applications work, which programs they interface with, or how they use operating system and middleware services. Yes, that is an over-broad generalization, and, yes, there might be some organizations that understand all their systems—all of them, no exceptions, 100 percent. But I stick to my assertion—just because it is a generalization does not make it wrong.

Here is the challenge:  Are all applications, data and infrastructural elements2 protected at the same level? Or do the “critical” systems receive the greatest security, control, recoverability and audit attention, while the rest are relegated to “tier 2”? As I said in different context in a previous article, there is no such thing as tier 2.3 Small, lightly used, nearly forgotten systems may be running on the same platforms or in the same highly interconnected infrastructures as those depended upon by large numbers of users for essential business functions. If they are not protected as though they were critical, these systems can expose the ones that are more highly valued when a cyberattacker comes along looking for a weak spot to penetrate.

It is Only

Beware the “Oh, it is only…” response. It is only the forecasting system, which, if illicitly tweaked just a bit, causes a manufacturer to over- or undersupply products to the marketplace. It is only the training system that enables sensitive tasks to be staffed just by qualified personnel. It is only the library system that can be used to display—or to hide—information critical to lawmakers. These are not randomly chosen examples, nor are they hypothetical. They are the equivalents of my microwave, sitting on the kitchen counter or in the data center or the office or the store for so long that they are hardly noticed. But cyberattackers notice and exploit them. For example, the instrument that caused so much damage to Target and Home Depot was not a server array. It was only a cash register.4

The problem of cyberthreats is not going to be solved5 just by replacing microwave software with gleaming new products. Newness is not enough. Should some technoarchaeologist read this piece 20 years hence, I am sure he/she will chuckle about some buggy software introduced in 2015. The fact is that in any significant enterprise, there are so many programs acquired over such a wide span of time, developed to run on so many different infrastructures, that there are almost certainly going to be holes in the code and in the interfaces of which a patient attacker might take advantage. Advanced persistent threats (APTs) reward just such patience.

The Heart of the Matter

The jumble of systems, new and antiquated, well and poorly controlled, leads me to this conclusion: Cyberthreats are not a security problem. They are a systems problem.

There is only so much information security professionals can do to build barriers and walls and fences and domes around information systems and data. Ultimately, flawed software cannot be secured. It can only be made more difficult—not impossible—to penetrate.

Those responsible for information systems, beyond the chief information officer (CIO) up to the highest ranks of management, must accept that cyberattacks will occur and that some of them will succeed.6 That being the case, an equal investment should be made in preparing for recovery from such attacks as is given to preventing and detecting them. The Framework for Improving Critical Infrastructure Cybersecurity7 lists “recover” as one of the five functions of cybersecurity. However, I have seen very little money spent on recovering from cyberattacks. This will have to change.

The most important step, to my mind, in mitigating cyberthreats is for organizations to gain a thorough understanding of all the software running in their environments, the flow of data and control among those systems, the interfaces among them and within their infrastructures, and the exposures presented by what I have termed microwave software. In too many organizations, neither management nor staff knows these things. Their ignorance is bliss for the malefactors in the darkest regions of our hyperconnected world, who are looking for and finding such exposures. This should be all the incentive required for legitimate organizations to become, at least, aware of what is running in their data centers and, at best, to make all the software—both up to date and microwave—work harmoniously and safely together.

Endnotes

1 Perlroth, Nicole; “Security Experts Expect ‘Shellshock’ Software Bug in Bash to Be Significant,” The New York Times, 24 September 2014, www.nytimes.com/2014/09/26/technology/security-experts-expect-shellshock-software-bug-to-be-significant.html?module=Search&mabReward=relbias%3Ar%2C%7B%221%22%3A%22RI%3A9%22%7D
2 Better known as “configuration items” in ITIL terminology. See ITIL, www.itil-officialsite.com/InternationalActivities/TranslatedGlossaries.aspx
3 Ross, Steven J.; “Shedding Tiers,” ISACA Journal, vol. 2, 2014
4 Kuchler, Hannah; “Home Depot Attack Bigger Than Target’s,” The Financial Times, 19 September 2014, www.ft.com/cms/s/0/7f9a2b26-3f74-11e4-984b-00144feabdc0.html#axzz3EMhI2Uy9
5 I am not even sure that there will ever be a solution as such. As technology advances, so do the tools and incentives for those who would undermine information systems. If we cannot win the war, we can at least reduce the number and severity of casualties.
6 See my previous article: Ross, Steven J.; “Bear Acceptance,” ISACA Journal, vol. 4, 2014.
7 National Institute of Standards and Technology (NIST), Framework for Improving Critical Infrastructure Cybersecurity, USA, 12 February 2014

Steven J. Ross, CISA, CISSP, MBCP, is executive principal of Risk Masters Inc. Ross has been writing one of the Journal’s most popular columns since 1998. He can be reached at .


Page 3

It was a bit of a surprise and a huge compliment to be invited to contribute to this column after many years reading the words of Tommie Singleton in this space. I shall do my best not to disappoint. To give you a hint as to where this column is going during the upcoming year, let us start with a summary of some lessons learned in my many years dealing with information systems, technologies and audits.

Change is fast and profound. Over the last five decades, technical innovation and new legislation relating to data and information have caused major dislocations. These, in turn, have created the need for new approaches to IS/IT audit. Some of these changes are outlined in figure 1.


While this table is certainly incomplete, the conclusion is that continuous learning is inescapable. Thus, we are required to learn how to learn and then how to unlearn and relearn.1 Failure to do this is a guarantee of professional stagnation and failed careers.

In the IS Audit Basics column, I plan to reflect the lessons I learned both as an auditee and as an IS/IT executive and auditor. I intend for them to be thought-provoking as opposed to sets of procedural “do this” statements.

What We Know We Know

Dependency on IS/IT has become irreversible and its governance and management rely on audit competencies and independence. Innovation cycles are likely to remain short and bring with them new vulnerabilities and management challenges.

Besides, internal and external threats keep changing and, unless mitigated, these could have an adverse and potentially serious effect on organizations. The frameworks for information assurance, security, risk and governance evolve as experience is gained and lessons are learned.

The same is true for audit standards and guidelines. It is prudent to assume that the domains of IS/IT audit have become so large that it is now unlikely that anyone can know everything about them. This makes the development of IS/IT audit strategies that much harder.

On the positive side, the audit profession offers many opportunities for personal and professional growth: progression to chief audit executive (CAE), membership in audit committees, consultancy and senior management roles. The choice is yours, but only if you are prepared.

The following is a good reminder of what the concept of “auditor” covers:2

  • A......Analytical
  • U......Unbiased
  • D......Diplomatic
  • I.......Independent (and inquisitive)
  • T......Thorough
  • O......Objective
  • R......Reliable

Having worked with (and learned much from) many capable auditors, I have also come across others who would have done far better to pursue a different career. Why? Because they showed themselves to be one or more of the following: arrogant, disorganized, undisciplined, opinionated, cynical or emotionally incontinent. Let us say that they were not respected by their victims.

Your Credibility and Other Good Things of Which to be Conscious

Credibility is the essential asset for any auditor. If your independent assessments cannot be backed by your credibility, they are worthless and, therefore, as an auditor, so are you. Credibility is built over time by developing knowledge and experience. It helps to:

  • Fully understand what your CAE considers to be “good enough”
  • Make certain at all stages that anything you say and write is supported by evidence—be it audit tests that you have personally conducted or documentation you have reviewed
  • Maintain confidentiality by discussing audit findings and results with only those who need to know
  • Remember that gossip, rumors and other inside information are not evidence
  • Not jump to conclusions

Integrity is another fundamental requirement for an auditor, involving honesty, fair dealing (or objectivity) and truthfulness.

Finally, after passing the Certified Information Systems Auditor (CISA) examination, you are likely to be dealing with experienced professionals from whom you can learn much. Make sure you take the time to do so, as this is the best way to broaden your understanding and experience of the audit process and the interpersonal and political dimensions of the job. Ask lots of questions, particularly “Why?,” until you are satisfied with your understanding.

It is good to remember that while management understands the role and importance of audits, when the time comes, auditors are rarely welcome. After all, when the auditors descend on a team carrying out project or operational work, the result is disruption: The auditors need documentation and access to data, request meetings over a period of several weeks or more, and keep asking awkward questions.

Bear in mind that some auditees may have had bad experiences if previous auditors created the impression that they were focused on criticism, assigning blame or engaged in the mindless pursuit of perfection. Besides, if members of previous audit teams were not well informed about the role of IS/IT in the organization—its criticality, structure, resources, past performance and related issues—they may have been perceived as not making good use of the time assigned in the audit plan or focusing on irrelevant areas.

It is important for auditors to understand the auditee’s history: What was the scope of past audits? What actions were recommended (particularly those worded “shall” rather than “should”)? And, how many of these implementations were re-audited? It is also important to find out how many of the recommendations were not implemented and why.

Knowledge of the audit history should include the approach taken by your predecessors, the audit strategy, the adopted standards and guidelines, and, especially, the interpersonal relations between past auditors and auditees. A history of disagreements, conflict and lack of trust is hard to recover from and can easily result in mistrust and resistance.

About the Next Column

The next column will continue this introduction to the realities of IS/IT audits by exploring what makes an audit successful from the perspective of the many parties involved: the auditors, the CAE, the audit committee, senior management and, not least, the auditees.

Given that audits are an activity carried out by people who interact with other people, topics related to soft skills will appear in future columns because successful audits depend on how such interactions take place.

Conclusion

You can be confident that IS/IT technologies will continue to change and with them, audit practices. Be prepared!

Endnotes

1 Alvin Toffler, www.alvintoffler.net/?fa=galleryquotes
2 Tangient, “Introduction to Audit,” boruetthsm, boruetthsm.wikispaces.com/file/view/Auditing.ppt

Ed Gelbstein, Ph.D., has worked in IS/IT in the private and public sectors in various countries for more than 50 years. He did analog and digital development in the 1960s, incorporated digital computers in the control systems for continuous process in the late 60s and early 70s, and managed projects of increasing size and complexity until the early 1990s. In the 1990s, he became an executive at the preprivatized British Railways and then the United Nations global computing and data communications provider. Following his (semi)retirement from the UN, he joined the audit teams of the UN Board of Auditors and the French National Audit Office. He also teaches postgraduate courses on business management of information systems. He can be contacted at .


Page 4

Hackers and negative social media hypes have proven able to bring proud organizations to their knees, yet many information and communications technology (ICT) security managers lack a strategy to anticipate and overcome such unpredictable challenges. A survey conducted among key people in the ICT security field reveals how perilously far behind their strategic thinking has fallen and what managers and board members can do to catch up.

The unforeseen risk in new media today can hardly be overstated. A burglary at the San Diego (California, USA) headquarters of Impairment Resources LLC resulted in the leak of 14,000 patients’ medical records and the bankruptcy of the company in 2012.1 Last year, the Dow Jones Industrial Average dropped 143 points after hackers broke into the Twitter feed of the Associated Press and sent a false message saying US President Barack Obama had been injured in a White House explosion.2 Dutch certificate authority DigiNotar was hacked in 2011,3 with fraudulent certificates issued in the company’s name. The company lost its government contract and, within three months, went bankrupt.

Despite such clear and present dangers, ICT security managers remain ill-equipped for future incidents. This is reinforced by an April 2013 study conducted by B-Able, a Netherlands-based consultancy, in cooperation with the University of Antwerp Management School (Belgium). Forty-one experienced ICT security managers, all of whom have worked for 10 years or more in the field, were asked a range of questions about the forces they deal with when formulating their security strategy.

Survey Details

The questions within the survey were based on Michael Porter’s Five Forces analysis.4 Porter’s Five Forces are a commonly used tool to analyze how attractive an industry is. Porter distinguishes (figure 1):

  1. Competition from rival sellers
  2. Competition from potential new entrants
  3. Competition from substitute products producers
  4. Supplier bargaining power
  5. Customer bargaining power

This model can be used as a frame of reference to examine numerous forces a security professional can address when establishing a “security strategy.”

In the survey, managers were asked whether the various forces they faced were dynamic or static in nature and whether the managers felt able to bend these forces to their strategic advantage. The results were used to compile a list of suggestions meant to help managers develop a more robust strategy.5

The results were sobering. Two-thirds of the forces ICT security managers said they face are dynamic. In other words, they are unpredictable factors such as intellectual property theft, extortion, hacking, social media rumors gone wild and other new-technology phenomena. Only one-third of the forces they deal with are static, such as compliance legislation, ISO standards and mandatory audits. Of respondents, 58 percent consider it important to address these external forces in their strategy formulation in the future. Since they had not done so before the survey, the results show that ICT security managers focus their strategy on the more predictable, recurrent (compliance-related) forces rather than on the more plentiful and potentially more damaging ones.

Blind Spot

It is not as if ICT security managers are naive. They are not. In response to the survey, in fact, they overwhelmingly indicated that supply chain risk management (e.g., cascade failures due to overlooked forces) should be one of the highest priorities in their organization. So they understand they have a blind spot preventing them from anticipating risk. But knowing that is not enough. The survey showed that managers are poorly informed about the specific dangers they face and the potential impact of dynamic forces, much less about how they should respond in the event of a full-blown crisis. Of respondents, 78 percent said they could influence these forces only poorly or moderately once they take hold. An example of this is the April 2013 distributed denial-of-service (DDoS) attack that paralyzed ING Bank, a global financial institution based in The Netherlands. The incident slashed shareholder value, and a flurry of criticism via social media cost ING customers.6 If the bank had understood and respected the power of such dynamic forces—in this case, uncensored social media caused confusion7—and been transparent about the attack, the damage could have been limited. Instead, ING denied the seriousness of the attack, evaded questions and remained silent for far too long,8 allowing the conversation on Twitter to proliferate and leave the lasting impression that the bank had failed to respond. This incident, in addition to many others,9 reveals a lack of preparedness—a gaping hole in ICT security strategy that is all too common.

Positive exceptions are observed now and then, at least in terms of crisis management. A good recent example is how a Dutch hospital, Het Groene Hart Ziekenhuis, responded when it was hacked in October 2012.10 Upon discovery that thousands of patients’ medical details had been leaked, the hospital immediately responded to the media and notified other stakeholders. Management wasted no time in admitting they had a problem and swiftly followed up with preventive measures so the leak could not recur. The hospital’s candid response profoundly influenced the tone of the ensuing (social) media debate, leading to more favorable public perception in the long term.

Containing vs. Averting Damage

Surely, though, it would be better if organizations averted such a crisis in the first place. By the time it was discovered that Impairment Resources had lost control of medical records belonging to the roughly 600 insurance companies it served, the damage was done. The lawsuits quickly piled up and no amount of transparency could have stopped the company’s impending demise.11 So an ounce of prevention is worth a pound of cure.

Businesses need to develop an overall business strategy in which ICT security is truly integrated, employing two of Michael Porter’s management frameworks: the Five Forces analysis12 and the value chain. It has been shown how the Five Forces can be subdivided into dynamic and static forces and how inadequate ICT security strategy is, with its inordinate focus on static forces. The second important concept that should be borrowed from Porter is the value chain. Here too, according to the survey findings, ICT security misses the mark, typically focusing on individual activities of the organization rather than considering the role each activity plays in the wider picture. For instance, security specialists see that their business has relationships with third parties, but seldom recognize these parties as potentially influential forces.

Understanding the value chain and the five forces is a prerequisite for business success.13 Yet, surprisingly, Porter’s frameworks have yet to take hold in the ICT security field.

The top five forces of which ICT security managers say they recognize the impact are:

  1. Legislation—95 percent
  2. Inspection and supervisory agencies—88 percent
  3. Law enforcement (district attorney and police)—69 percent
  4. Partners in the (digital) chain (e.g., freight forwarders, Internet service providers, payment handlers)—64 percent
  5. Public opinion—60 percent

The top five forces of which ICT security managers say they do not recognize the impact are:

  1. Trade unions—79 percent
  2. Social media (uncensored reporting)—57 percent
  3. Criminals—48 percent
  4. Customers—48 percent
  5. Suppliers—43 percent

It is too easy to say that organizations simply need to get a grasp on the dynamic forces in the chain and all their problems will be solved. However, the problem is that very few management tools, steering mechanisms or key performance indicators (KPIs) are available to deal with these forces.

Dynamic forces can have major consequences. A surprising 71 percent of experts surveyed indicated that these forces are critical to their business and security strategy. They require the attention of every manager, board member and shareholder. This research shows that strategies based on an awareness of value chains and the five forces can help organizational leaders to:

  • Heighten preparedness for unforeseen influences
  • Better identify risk and establish the organization's risk appetite
  • Anticipate crises and remain in control of strategy

The top five elements for business information security strategy, according to the survey, are:

  • Stakeholder approach—When developing a strategy, involve the board of directors (BoD), management, business and all external stakeholders in the chain. Know the KPIs, stakeholder expectations, and how to translate these demands, using the right KPIs, into concrete benchmarks for the organization, management and BoD.
  • Risk-based approach—Look at the organization’s critical data security in the context of the entire chain. Start by gaining insight into all digital stakeholders and their potential dependencies, weaknesses and risk—both technologically and legally.
  • Beware the blind spot—Many forces are dynamic. Ensure the organization is not caught unaware. No one person can stay abreast of every development in this field, so let others update stakeholders on what they do not know.
  • Do the right things well—It may seem easier to “learn by doing,” but those who prepare a good strategy are less dependent on impromptu solutions.
  • Integrated organizational process—Be aware of the chain of forces that influences the organization. Make room for addressing these forces in the strategy and policy plans of the entire organization.

Case Study

A strongly Internet-dependent Dutch business with an annual revenue of €500 million used these elements, together with Porter’s forces, to gain a better overview of its stakeholders. The organization realized that it had 266 percent more stakeholders than previously thought. By identifying all 166 digital stakeholders involved in critical business processes and their technical and/or legal dependencies, the organization was able to effectively map out and quantify all risk factors and feed this information back to process owners so risk management could be integrated throughout the organization. This made it easier to specify the knowledge and competencies needed to manage risk and to identify blind spots.

In this case, it became clear that the business lacked the expertise to strategically manage the entire value chain and to set the right KPIs. The organization is currently taking this final step in the process by introducing an integrated dashboard called the SecuriMeter for Governance, Management and Operational Data. The result will be a far stronger businesswide security strategy.

Conclusion

The message is simple: Zoom in on specific threats and prepare for them; zoom out and consider the entire context in which the organization operates.

This is not just a lesson for ICT security managers. It can be argued that the most important decision makers in every organization need to take ownership of this problem. “It is imperative that organizations deliver on the promise, or they will soon become irrelevant.”14 Decision makers should give ICT security people a voice in the formulation of overall business strategy. ICT security policy should be made a core aspect of the whole.15 Only then can an organization consider itself ready to face an uncertain and rapidly changing context and future.16

Endnotes

1 Stech, K.; “Burglary Triggers Medical Records Firm’s Collapse,” Bankruptcy Beat blog, Wall Street Journal, 12 March 2012, http://blogs.wsj.com/bankruptcy/2012/03/12/burglary-triggers-medical-records-firm%e2%80%99s-collapse/
2 Moore, H.; D. Roberts; “AP Twitter Hack Causes Panic on Wall Street and Sends Dow Plunging," The Guardian, 23 April 2013, www.theguardian.com/business/2013/apr/23/ap-tweet-hack-wall-street-freefall
3 Prins, R.; “DigiNotar Bankruptcy Public Report,” Dutch Government, Den Haag, 2011
4 Porter, M.; “How Competitive Forces Shape Strategy,” Harvard Business Review, 1979
5 Porter, M.; “Competitive Advantage: Creating and Sustaining Superior Performance,” Free Press, 1985
6 NOS, “Disruptions in Online Banking—377%,” 2014, http://nos.nl/artikel/618846-storingen-online-bankieren-377.html
7 RTL Nieuws, “Disruption at ING Caused Hours of Unclearness About Account Balances,” 3 April 2013, www.rtlnieuws.nl/nieuws/storing-ing-urenlang-onduidelijkheid-over-saldos
8 Van der Lans, Chantal; “Online Disruptions, Don’t Lose Your Customers’ Trust,” Usability.nl, 10 March 2014
9 NU.nl, “The Netherlands: Number One in Online Banking Disruptions,” 13 January 2014
10 NU.nl, “Hospital Regrets Data Breach to the Public” (“Groene Hart Ziekenhuis betuigt spijt voor lek”), 2012
11 Op cit, Stech
12 Op cit, Porter, 1979
13 McBeath, B.; “Supply Chain Orchestrator—Management of the Federated Business Model in This Second Decade,” 2010, www.clresearch.com
14 Stackpole, B. O. E.; Security Strategy, Auerbach Publications, USA, 2011
15 May, C.; “Dynamic Corporate Culture Lies at the Heart of Effective Security Strategy,” Computer Fraud & Security, iss. 5, UK, 2003, p. 10-13
16 Sveen, F. T. J. S. J.; “Blind Information Security Strategy,” International Journal of Critical Infrastructure Protection, Spain, vol. 2, 2009, p. 95-109

Yuri Bobbert is professor at LOI University of Applied Sciences (The Netherlands) and Ph.D. Researcher at Antwerp University (Belgium) in the field of business information security governance and management. Bobbert is also non-executive director of DPA|B-Able, a security governance consulting firm. In 2010, Bobbert published Maturing Business Information Security (MBIS), a framework to establish the desired state of security maturity. In 2015, Bobbert will publish his book How Safe Is My ‘Share’?


Page 5

Managers frequently request a return on security investment (ROSI) calculation. While this is a usual business practice for significant investments, the practice is not free from controversy when applied to information security.

Several guidelines and calculators are readily available, for example, the publication by the European Network and Information Security Agency (ENISA).1, 2 As with most methodologies, they need to be applied with due care.
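
For orientation, a commonly cited form of the calculation (used, for example, in the ENISA guidance) compares the loss expectancy a control is expected to avoid with the cost of that control. The Python sketch below uses purely illustrative numbers.

  def rosi(ale, mitigation_ratio, solution_cost):
      # ale: annualized loss expectancy before the control (single loss
      #      expectancy multiplied by the annual rate of occurrence).
      # mitigation_ratio: fraction of the ALE the control is expected to avoid.
      monetary_benefit = ale * mitigation_ratio
      return (monetary_benefit - solution_cost) / solution_cost

  # Illustrative numbers only: an ALE of US $200,000, a control expected
  # to avoid 60 percent of that loss, costing US $50,000 per year.
  print(f"ROSI: {rosi(200_000, 0.60, 50_000):.0%}")   # 140%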

An information security practitioner preparing a ROSI calculation needs to do so in a way that both secures the requested resources and preserves the practitioner’s credibility.

Expenditures in information security rarely, if ever, generate revenues. They may add business value in many ways, e.g., reducing the potential occurrence of a security incident, faster resolution of security incidents, supporting the organization’s reputation and other essentially intangible areas.

While a marketing department faces similar challenges in justifying expenditures, it can invariably point to revenue and/or market share increases, which, like all forecasts, may or may not materialize.

The concept of value relates to the worth, importance or usefulness of something to somebody. Alas, data and information do not appear as valuable assets in balance sheets.

Figure 1 presents the 15 topics that practitioners need to consider in the context of their working environment to arrive at a credible and, therefore, valuable ROSI calculation.


Part I: Preparation

Preparation is key. Those preparing a ROSI would benefit from knowing the inventory of valuable information assets, related business impact analyses, risk analyses and their associated mitigation measures, as well as critical dependencies linked to the ROSI.

This enables the ROSI to focus on previously documented analyses, relates the proposed expenditures to them and, in this way, puts them in an appropriate business context.

The Starting Point
A ROSI calculation, however well it is done, is based on assumptions about how future security issues are likely to evolve. Likely assumptions include:

  • The value of historical data on threats is low. Threats continually evolve in their nature and capabilities. Future threats may include the unimaginable.
  • The impact of as-yet-unidentified threats is unpredictable.
  • New products and processes contain unknown vulnerabilities. These may or may not be first identified by the vendor; in a worst case, hackers are the first to identify and exploit them.

The author of the ROSI needs to be aware of what assumptions have been made and be ready to explain and justify them to other managers. Good intelligence on other organizations’ experiences is useful to have.

The Organization’s Risk Culture
Two dominant features can be used to describe a risk culture:

  • Risk appetite: conservative (risk avoidance) and aggressive (risk taking)
  • Reaction towards negative outcomes: blaming and learning

Organizations with a culture of risk avoidance may go through many stages of dithering before committing to a decision, with many what-if scenarios—unless the risk is high, imminent and recognized (by which time it is no longer a risk, but an issue, and may be too late). Even then, the organization may raise questions about what options have been considered. These organizations are also likely to tend toward blame when events occur.

Risk-taking organizations may request a ROSI calculation only when large sums are involved (“large” being a flexible term).

Failure to understand the risk culture of the organization implies the possibility that the proposal will fail regardless of the quality of the ROSI. It is prudent to remember that a slice of the budget going to information security is a slice that will not go to another function, and this can become the subject of organizational politics.

The Accounting Nature of the Proposed Expenditures
The issue here is identifying what constitutes an investment and what is an operational expense. There is no universally right answer, as this is defined by the accounting practices of each organization, including which budgets fund the expenditures (e.g., a revenue expenditure budget or a capital items budget) and how those budgets are treated for tax purposes.

There is also a need to distinguish between expenditures to replace or upgrade an existing facility (hardware, software and/or services) and those to acquire something new and different, as would be the case when purchasing innovative solutions, migrating to a cloud security service or outsourcing information security operations.

The argument that expenditures in information security are comparable to buying insurance or insurance-related items (e.g., better locks, fire-proof safes, inert gas fire suppression in computer rooms) may or may not be valid in any given organization. Determining the organization’s thinking and practices on such topics is part of prudent preparation.

Information Security Expenditures in Context
This can be thought of as the big numbers/small numbers game. An example of big numbers can be found in the approved 2015 budget for the US Department of Defense, which identifies more than US $5 billion for cybersecurity.3 Bigger numbers than this are in circulation elsewhere. Warning: Big numbers tend to worry decision makers and incite them to look for cuts, but small numbers can be interesting, too. A recent report by Gartner shows that in the US, the three sectors with the highest spending on information security are insurance, utilities and banking.4 The report presents the figures in US dollars per employee per year; assuming 220 working days in a calendar year, this works out to about US $2.50 per employee per day, about the price of a cup of coffee.

A simple calculation reveals that the average total cost of an employee to an organization is on the order of US $1 per minute. Assuming that a working year consists of 220 days and that each working day is seven and a half hours, this amounts to 99,000 minutes. The US Census for 2012 states that the median national income was US $51,371 (the highest state median income was found in Maryland [US $71,112]).5 Adding to this all employer costs (e.g., office space, utilities, health insurance, pension contributions), a figure of US $99,000 per employee per year appears to be a plausible estimate. Hence, US $1 per minute is a rough guide the reader may adjust to reflect specific situations.

Therefore, a person taking a cigarette break (outside the office building) of 10 minutes represents four times the amount spent on information security. (And what smoker smokes just one cigarette per day?) Are the senior managers and decision makers in the organization familiar with this perspective?
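
These back-of-the-envelope figures can be checked in a few lines of Python; the inputs are the article’s own assumptions (220 working days, 7.5-hour days, roughly US $99,000 in total employee cost per year and US $2.50 per employee per day of security spending).

  working_days = 220
  hours_per_day = 7.5
  minutes_per_year = working_days * hours_per_day * 60          # 99,000 minutes
  employee_cost_per_year = 99_000                               # rough estimate, US $
  cost_per_minute = employee_cost_per_year / minutes_per_year   # about US $1 per minute

  security_spend_per_day = 2.50                                 # per employee, from the Gartner-based estimate
  cigarette_break_cost = 10 * cost_per_minute                   # a 10-minute break
  print(cost_per_minute, cigarette_break_cost / security_spend_per_day)   # 1.0 4.0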

Timing Is Critical: Spend on Protection or on Correction
Software developers learned (or should have) a long time ago that the cost of getting it right the first time is much smaller than that of correcting a bug later in the development process. This, in turn, is a minute fraction of the cost of putting things right once the software is in production. Looking at design or control failures in various industries is instructive:

  • Poor controls and management supervision cost the French bank Société Générale US $6.6 billion in 2008.6 A similar situation in 1995 put Barings Bank out of business.7
  • Airbus encountered problems with the wiring of the A380 aircraft. Redesign and delays have cost Airbus US $6.4 billion so far.8
  • In 2014, General Motors had to recall 2.6 million automobiles to fix a defective ignition key component (valued at US $2 per unit).9 The estimated cost so far has not yet been made public, but it is expected to be big.

It is worth remembering: Saving money regardless of cost (SMRC) is not always a winning strategy.

The Theories of Risk Assessment
There are many detailed books on the history of risk assessment10 and, therefore, this section is deliberately short. Probabilistic theories of risk date back to the 16th and 17th centuries and are related to games of chance. Epidemiological and actuarial theories also began in the 17th century with compilations of births and deaths in London (United Kingdom).

By 1990, risk (and policy) analysis was seen as “an analytical activity undertaken in direct support of specific public-or private-sector decision makers who are faced with a decision that must be made or a problem that must be resolved.”11

There are many definitions of “risk,” each reflecting specific domains of activity, and there are several books discussing theories and their applicability as well as the languages of risk in domains such as medicine, environment, aerospace, finance and information.12

Risk Assessment Methods
One of the earlier probabilistic assessment techniques for the overall risk of an entire major hazard facility is considered to be WASH-1400, commissioned by the US Nuclear Regulatory Commission (NRC) in 1975.13

Several other quantitative methods are available, but these are believed not to be applicable to information security on the grounds that there are insufficient data, particularly on evolving threats, and that such methods are too complex. This may be the case with techniques such as Monte Carlo simulations,14 which years ago required access to a mainframe computer and can now be carried out on almost any computer that supports spreadsheet software. However, they are not intuitive and require time to be mastered.

The belief that there are no data on probabilities is not necessarily valid. In the case of a complete lack of event intelligence (i.e., ignorance), the probability is 50 percent—either it happens or it does not. Additional information can then be used to determine if the probability is greater (it happened to someone else in a comparable line of business) or lower (it is recognized as a possible event, but it has not been reported as having happened). Such numbers may never be accurate, but are better than not having numbers at all.
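
As a sketch of how such a technique can be applied even with modest data, the following Monte Carlo estimate of expected annual loss starts from an assumed incident probability (for instance, the 50 percent baseline above) and an assumed loss range; both assumptions should be refined as better intelligence becomes available.

  import random

  def expected_annual_loss(p_incident, loss_low, loss_high, trials=100_000):
      # Monte Carlo estimate: in each simulated year the incident either
      # occurs (with probability p_incident) and costs a uniformly
      # distributed amount, or it does not occur at all.
      total = 0.0
      for _ in range(trials):
          if random.random() < p_incident:
              total += random.uniform(loss_low, loss_high)
      return total / trials

  # Assumed inputs: a 50 percent chance per year, with a loss between
  # US $50,000 and US $250,000 if the incident occurs.
  print(f"Expected annual loss: US ${expected_annual_loss(0.5, 50_000, 250_000):,.0f}")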

There are several methods widely adopted by the information industry, notably:

  • The Operationally Critical Threat, Asset and Vulnerability Evaluation (OCTAVE), developed by the Software Engineering Institute of Carnegie Mellon University (Pittsburgh, Pennsylvania, USA) and first published in 2001
  • The US National Institute of Standards and Technology (NIST) SP 800-30 Revision 1 of 201215
  • COBIT 5 for Risk, published by ISACA in 2013

The last one is preferred because of its historic coverage and structure and the lessons learned from the previous framework, Risk IT, which it incorporates. It may take longer to learn than drawing simple risk maps, but the result is well worth it.

Some of these methods can be criticized as being qualitative and subject to bias, therefore representing little more than an educated guess. They are also limited to known knowns and a modest element of known unknowns, and they do not take into account unknown unknowns, black swans and other events thought to be extremely unlikely, which are thus, rightly or wrongly, treated as irrelevant.

In an attempt to create an illusion of scientific rigor, some simple methods based on little boxes colored green, yellow or red assign weights to likelihood and impact that are then multiplied to give a number representing risk.

This can be seriously misleading. Take, for example, two ratings of five: One applies to a DVD of a movie rated by viewers as five for quality and the other to a collection of five DVDs each rated by viewers as one for quality. Does the multiplication of number of DVDs and their quality (apples and oranges) make sense?
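To translate the DVD analogy into risk-matrix terms, the short sketch below (with invented 1-to-5 ratings) shows how multiplying ordinal likelihood and impact scores yields identical “risk” numbers for situations that call for very different treatment.

```python
# Hypothetical 1-5 ordinal scales; the numbers carry rank order only,
# so arithmetic on them has no real meaning.
scenarios = {
    "frequent but trivial incident":  {"likelihood": 5, "impact": 1},
    "rare but catastrophic incident": {"likelihood": 1, "impact": 5},
}

for name, s in scenarios.items():
    score = s["likelihood"] * s["impact"]
    print(f"{name}: risk score = {score}")

# Both scenarios score 5, yet multiplying the ranks has discarded
# exactly the information that distinguishes them.
```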

Impact Is Multidimensional
As information systems, data and technologies are ubiquitous in most organizations, service disruption, loss of confidentiality and/or loss of data integrity are likely to have consequences beyond the IT department. The scope of impact includes direct financial losses, loss of productivity, legal implications and reputational damage, from the moment a security incident is detected until it has been diagnosed, dealt with and contained, and recovery can be described as complete. There may be additional consequences, e.g., the involvement of law enforcement or regulators and, depending on the severity of the incident and the organization that suffered it, the long-term consequences of inadequate crisis management.

These should all be identified and examined by (and with) the appropriate functions in the organization. In turn, these functions should take ownership of estimates of financial losses and the benefits identified by implementing the proposed investment.

Part II: Estimating the ROSI

One may wish to choose a relevant example of a recent investment in information security and apply the following steps to get a better feel for what is involved and for the challenges of obtaining the required information. It would be good to reflect on the points raised in the previous section (part I) when preparing this exercise.

Estimating Financial Losses
Impact assessments may already be available in a business impact analysis (BIA) carried out by the organization, usually to support business continuity planning. Other losses (e.g., fraud and other forms of financial theft, loss of trade secrets and other proprietary information, including software) may not appear in a BIA and may be harder to predict and quantify. To these, there may be a need to add the cost of recovering data that have been corrupted.

The theft or disclosure of personal information, whether of customers or employees, may infringe on data protection and privacy legislation and result in disclosures and legal processes, as well as reputational loss and expenses for crisis management and public relations. Not to be forgotten are the direct losses associated with managing a security incident through its phases of detection, containment, correction and recovery, as well as the costs of invoking business continuity arrangements.

Other indirect losses may arise depending on circumstances, such as the inability to fulfill contracts, delayed deliveries, compensation payments, fines and other legal fees.

Monetizing Expected Benefits
Given that security investments, unlike those in marketing, do not generate revenue, the benefits identified are likely to include reduced financial losses (as described previously), a reduced likelihood of a security incident occurring, a reduced cost should an incident happen, the ability to meet audit and/or regulatory requirements, and reduced indirect costs.

These are all forecasts to which the information security professional may not be equipped to assign a financial value. It is better that they be assessed and agreed to by those who stand to gain from the benefits.

Ownership of the Benefits
To put it bluntly, any benefits listed in a ROSI calculation that do not have a clearly identified owner are not credible. Worse, if presented, they risk damaging the proposer’s credibility, which may be hard, or even impossible, to recover.

Estimating Costs
The complete procurement life cycle includes many cost components that are not always captured when preparing a financial case. Typically, these include the cost of preparing a request for proposals (RFP), the cost of evaluating offers, and the involvement of the procurement and legal departments in placing a contract.

Once the contract has been placed, there are the one-time costs of delivery, installation and configuration; integration with other tools when appropriate; and, possibly, training the personnel who will use whatever has been purchased.

Then, there are the recurring operating costs that include maintenance, support, upgrades and the usual data center services such as power and staff.
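A minimal sketch of such a life cycle cost roll-up follows, using purely illustrative figures; the categories mirror the procurement, one-time and recurring costs described above, and the assumed service life is a placeholder.

```python
# Illustrative life cycle cost roll-up (all figures are hypothetical).
procurement_costs = {          # before the contract is placed
    "prepare_rfp": 8_000,
    "evaluate_offers": 5_000,
    "procurement_and_legal": 7_000,
}
one_time_costs = {             # once the contract is placed
    "delivery_installation_configuration": 20_000,
    "integration_with_other_tools": 10_000,
    "personnel_training": 6_000,
}
annual_recurring_costs = {     # operating costs per year
    "maintenance_and_support": 12_000,
    "upgrades": 4_000,
    "data_center_power_and_staff": 9_000,
}

service_life_years = 4  # assumed amortization period

total_cost = (sum(procurement_costs.values())
              + sum(one_time_costs.values())
              + service_life_years * sum(annual_recurring_costs.values()))
print(f"Total cost over {service_life_years} years: {total_cost:,}")
```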

Conditionalities
The purchase and installation of new facilities will not necessarily meet the buyer’s requirements unless the following conditions are met:

  • The product (or service) actually matches the real requirements, as these may not be quite the same as specified in the RFP.
  • The product (or service) delivered works exactly as the vendor described it in its offer.
  • The product (or service) is properly configured and used.

Experience suggests that these three conditions are not always met.

Calculating the ROSI
There are several formulas for doing this, from the relatively simple one proposed by ENISA to complex approaches involving mathematical models, differential equations and other challenges for those gifted in mathematics. Even the simple ENISA calculation contains traps for the unwary, as it requires estimates of the annualized loss expectancy (ALE), the mitigated ALE (mALE) and the cost of the proposed solution.
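A commonly quoted form of the ENISA calculation is ROSI = (ALE - mALE - cost of solution) / cost of solution. The minimal sketch below applies it with invented figures; the real work lies in defending the estimates that go into it.

```python
def rosi(ale, male, cost):
    """ROSI = (loss reduction - cost of solution) / cost of solution."""
    return (ale - male - cost) / cost

# Invented figures: expected annual loss today, expected annual loss after
# mitigation, and the annualized cost of the proposed control.
ale, male, cost = 200_000, 80_000, 50_000
print(f"ROSI = {rosi(ale, male, cost):.0%}")  # (200k - 80k - 50k) / 50k = 140%
```

A positive result means only that the estimates, if accepted, favor the investment; it says nothing about how defensible those estimates are.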

A word of warning: Before showing this calculation to finance professionals, find good answers to the following questions, which are likely to be asked (a worked sketch of the discounting and payback arithmetic follows the list):

  • What does the cost include?
  • What is the expected service life of the proposed purchase (amortization period)?
  • How long before the benefits materialize?
  • What discount factor should be used over the period?
  • How long would the payback period be?
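As a purely illustrative answer to the amortization, discounting and payback questions, the sketch below assumes hypothetical figures for the up-front cost, the annual benefit, the service life and the discount rate; the finance department will have its own rates and conventions.

```python
# Hypothetical figures: up-front cost and the annual benefit (avoided losses)
# over an assumed service life, discounted at an assumed rate.
initial_cost = 50_000
annual_benefit = 25_000
service_life_years = 4
discount_rate = 0.08

# Net present value of the benefits, less the up-front cost.
npv = -initial_cost + sum(
    annual_benefit / (1 + discount_rate) ** year
    for year in range(1, service_life_years + 1)
)

# Simple (undiscounted) payback period in years.
payback_years = initial_cost / annual_benefit

print(f"Net present value: {npv:,.0f}")
print(f"Simple payback period: {payback_years:.1f} years")
```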

ROSI Quality
Question to the proposer: After all this effort and discussions, are your ROSI calculation and supporting documentation better than a horoscope? If so, how?

Conclusion

While calculating a return on investment (ROI) is a well-established practice, ROSI, like most activities that involve predicting the future, is fraught with perils ranging from omissions (accidental or deliberate) to optimistic assumptions about costs, benefits and the effectiveness of what is proposed.

The 15 points listed may not result in a robust and credible ROSI, but showing that they have been considered and applied to the maximum possible extent may help.

Endnotes

1 ENISA, Introduction to Return on Security Investment, December 2012
2 The issues surrounding ROSI were explored in a previous article: Gelbstein, E.; “Quantifying Risk and Security,” ISACA Journal, vol. 4, 2013.
3 Corrin, A.; “Defense Budget Routes at Least $5 Billion to Cyber,” Federal Times, 5 March 2014, www.federaltimes.com/article/20140305/MGMT05/303050005/Defense-budget-routes-least-5B-cyber
4 Gartner, “Don’t Be the Next Target—Information Security Spending Priorities for 2014,” 8 April 2014
5 US Census Bureau, Household Income: 2012, www.census.gov/prod/2013pubs/acsbr12-02.pdf
6 Walsh, F.; D. Gow; “Société Générale Uncovers £3.7bn Fraud by Rogue Trader,” The Guardian, 24 January 2008, www.theguardian.com/business/2008/jan/24/creditcrunch.banking
7 Prof. Ted Azarmi’s Forum, “Making Sense of the Collapse of Barings Bank,” 25 January 2010, www.azarmi.org/forum/index.php?topic=965.0
8 http://calleam.com/WTPF/?p=4700
9 General Motors, GM Ignition Recall Safety Information, www.gmignitionupdate.com/
10 Bernstein, P. L.; Against the Gods: The Remarkable Story of Risk, Wiley, 1998
11 Morgan, M. G.; M. Henrion; Uncertainty: A Guide to Dealing With Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press, UK, 1992
12 For example: Rausand, M.; Risk Assessment: Theory, Methods, and Applications, Wiley, 2011
13 US Nuclear Regulatory Commission, Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants, NUREG-75/014 (WASH-1400), www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr75-014/
14 Microsoft Corporation, “Introduction to Monte Carlo Simulation,” https://support.office.microsoft.com/en-US/Article/Introduction-to-Monte-Carlo-simulation-64c0ba99-752a-4fa8-bbd3-4450d8db16f1?ui=en-US&rs=en-US&ad=US
15 National Institute of Standards and Technology, Guide for Conducting Risk Assessments, NIST Special Publication 800-30 Revision 1, USA, 2012

Ed Gelbstein, Ph.D., has worked in IT for more than 50 years and is the former director of the United Nations (UN) International Computing Centre, a service organization providing IT services around the globe to most of the organizations in the UN System. Since leaving the UN, Gelbstein has been an advisor on IT matters to the UN Board of Auditors and the French National Audit Office (Cour des Comptes) and is a faculty member of Webster University (Geneva, Switzerland). A regular speaker at international conferences covering audit, risk, governance and information security, Gelbstein is the author of several publications. His most recent book Good Digital Hygiene—Staying Secure in Cyberspace can be downloaded from www.bookboon.com. He lives in France and can be reached at .



Book Review: Secure—Insights From the People Who Keep Information Safe

Author: Mary Lou Heastings | Reviewed by A. Krista Kivisild, CISA, CA
Date Published: 1 January 2015

There is always a new information security issue to focus on, another area of key concern relating to IT security, data security or business continuity planning that security professionals need to be aware of to keep on top of the relevant risk. But how can security professionals determine the relevant risk to their industry? At a time when changes in technology continue to accelerate, how can anyone decide what should be the information security areas of concern to their company and the places where they should focus their team’s work in the future?

Secure: Insights From the People Who Keep Information Safe is a collection of works from senior IT leaders in various industries providing what they feel are the biggest security concerns right now and for the future. In this quick, compact read, readers can gather understanding from those in the know and can consider if these experts’ ideas about leadership competencies needed in the future, design security or application delivery networks are applicable to their enterprise/industry. Everyone from technical practitioners to those just beginning their IS audit, security, risk or governance careers can find value in this general management book as it keeps readers aware of the latest risk concerns.

The book’s primary strength is its ability to provide the reader with valuable information on emerging information security and technology issues, highlighted by the opinions of 10 IT information security leaders. The writings of each leader are engaging and succinct. As a result, readers can quickly get through a chapter and gather the information they need on a bus or train ride or between meetings. This book is ideal for anyone who does not have time to read a full book on the subject but wants to be aware of where the next risk to IT is coming from. Additionally, background on each leader and his/her company is provided, so the reader can determine whether the author’s industry shares the same risk factors and concerns.

The world of information security is constantly changing. The number of Internet users has grown exponentially, smartphone and mobile use is exploding, and social media web sites are increasingly used to do business. Those at all levels within IS audit, risk, security and governance struggle to stay abreast of these changes and to keep aware of the real concerns in order to know where to focus their efforts. While risk is also growing, IS professionals need to focus on the right risks: those that are growing, those that are relevant and those that are of greatest concern.

Despite the rapidly changing nature of security and risk, this book will remain relevant for years. The majority of the leaders in this book focus on entity-level and governance risk; as a result, the insights provided are at a high enough level to stay useful as technology changes. This book is perfect for today’s IS professional who needs to learn a lot of information but does not have much time to do so.

Reviewed by A. Krista Kivisild, CISA, CA, who has had a diverse career in audit while working in government, private companies and public organizations. Kivisild has experience in IT audit, governance, compliance/regulatory auditing, value-for-money auditing and operational auditing. She has served as a volunteer instructor, training not-for-profit boards on board governance concepts; has worked with the Alberta (Canada) Government Board Development Program; and has served as the membership director and CISA director for the ISACA Winnipeg (Manitoba, Canada) Chapter.