Why DIARMF, "Continuous Monitoring," and other FISMA-isms Fail

I've posted about twenty FISMA stories over the years on this blog, but I haven't said anything for the last year and a half. After reading Goodbye DIACAP, Hello DIARMF by Len Marzigliano, however, I thought it time to reiterate why the newly "improved" FISMA is still a colossal failure.

First, a disclaimer: it's easy to be a cynic and a curmudgeon when the government and security are involved. However, I think it is important for me to discuss this subject because it represents an incredible divergence between security people. On one side of the divide we have "input-centric," "control-compliant," "we-can-prevent-the-threat" folks, and on the other side we have "output-centric," "field-assessed," "prevention eventually fails" folks. FISMA fans are the former and I am the latter.

So what's the problem with FISMA? In his article Len expertly discusses the new DoD Information Assurance Risk Management Framework (DIARMF) in comparison to the older DoD Information Assurance Certification and Accreditation Process (DIACAP). DIARMF is a result of the "new FISMA" emphasis on "continuous monitoring" which I've discussed before.

Len writes "DIARMF represents DoD adoption of the NIST Risk Management Framework process" and provides the diagram at left with the caption "The six major steps of Risk Management Framework aligned with the five phases of a System Development Lifecycle (SDLC)."

Does anything seem to be missing in that diagram? I immediately key on the "MONITOR Security Controls" box. As I reminded readers in Thoughts on New OMB FISMA Memo, control monitoring is not threat monitoring. The key to the "new" FISMA and "continuous monitoring" as seen in DIARMF is the following, described by Len:

Equally profound within DIARMF is the increased requirements for Continuous Monitoring activities. Each control (and control enhancement) will be attributed with a refresh rate (daily, weekly, monthly, yearly) and requisite updates on the status of each control will be packaged into a standardized XML format and uploaded into the CyberScope system where analysis, risk management, and correlation activities will be performed on the aggregate data.

Rather than checking the security posture every three years, or whatever insane interval the old FISMA used, the new FISMA checks security posture more regularly and centralizes posture reporting.
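
To make the mechanics concrete, here is a minimal sketch of what packaging one control's status for upload might look like. It is purely illustrative: Len's article does not show the actual CyberScope schema, so the element names and the control_status_xml helper are assumptions.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Hypothetical record for one control's status; every element name here
# is invented for illustration, not taken from the CyberScope format.
def control_status_xml(system_id, control_id, refresh_rate, compliant):
    root = ET.Element("ControlStatus")
    ET.SubElement(root, "SystemId").text = system_id
    ET.SubElement(root, "ControlId").text = control_id      # e.g., a NIST SP 800-53 identifier
    ET.SubElement(root, "RefreshRate").text = refresh_rate  # daily / weekly / monthly / yearly
    ET.SubElement(root, "Compliant").text = str(compliant).lower()
    ET.SubElement(root, "Checked").text = datetime.now(timezone.utc).isoformat()
    return ET.tostring(root, encoding="unicode")

print(control_status_xml("WEBSRV-01", "AC-7", "daily", True))
```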

Wait, isn't that a good idea? Yes, it's a great idea -- but it's still control monitoring. I can't stress this enough; under the new system, a box can be totally owned but appear "green" on the FISMA dashboard because it's compliant with controls. Why? There is no emphasis on threat monitoring -- incident detection and response -- which is the only hope we have against any real adversary.
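
To see the gap in a toy example (host names and alerts hypothetical): the dashboard below derives its color only from control status, so it stays green even while an IDS reports that the same box is beaconing to a command-and-control server.

```python
# All controls on the hypothetical host WEBSRV-01 pass, so a
# control-only dashboard shows green.
controls = {"AC-7": True, "AU-2": True, "SI-3": True}
print("FISMA dashboard:", "green" if all(controls.values()) else "red")

# Threat monitoring looks at what the adversary did, not at the checklist.
# This invented alert means the "green" box is already owned.
ids_alerts = [{"host": "WEBSRV-01", "signature": "outbound C2 beacon"}]
for alert in ids_alerts:
    print("IDS alert:", alert["host"], "->", alert["signature"])
```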

Think I'm wrong? Read Len's words on CyberScope:

CyberScope is akin to a giant federal-wide SIEM system, where high-level incident management teams can quickly pull queries or drill down into system details to add analysis on system defenses and vulnerabilities to the available intelligence on an attack. CyberScope data will also be used to track trends, make risk management decisions, and determine where help is needed to improve security posture.

If you're still not accepting the point, consider this football analogy.

Under the old system, you measured the height, weight, 40-yard dash, and other "combine" results on a player when he joined the team. You checked again three years later. You kept data on all your players but had no idea what the score of the game was.

Under the new system, you measure the height, weight, 40-yard dash, and other "combine" results on a player when he joins the team. You check again more regularly -- maybe even every hour, and store the data in a central location with a fancy Web UI. You keep data on all your players but still have no idea what the score of the game is.

Until DoD, NIST, and the other control-compliant cheerleaders figure out that this approach is a failure, the nation's computers will remain compromised.

Note: There are other problems with DIARMF -- read the section where Len says "This shakes out to easily over a hundred different possible control sets that can be attributed to systems" to see what I mean.

Comments

davehull said…
Wow, so that's what they mean by "continuous monitoring"? I guess I took it to mean something akin to security monitoring. This is tragic.

There is a place for monitoring controls, but I agree threat monitoring is of critical importance, especially as threats emerge that existing security controls can't protect against.
paulj said…
I 100% agree FISMA is and has been lacking for some time. And while NIST has extremely smart people, they are too slow to put out relevant security guidelines. However, the problem remains: people need a baseline security posture to be held accountable to. So what do you suggest? It's a given that it would be driven by field assessments. But you still need an initial starting point for your overall IT security program, something akin to the SANS Top 20 critical controls. To me the major problem with FISMA is more on the enforcement side and less on the requirement side. It's a complete waste of trees, and the auditors and auditees use very subjective means to achieve compliance.
Adam Montville said…
I have a post over at Tripwire's "State of Security" blog (see http://www.tripwire.com/blog/compliance/state-of-macro-continuous-monitoring-enabling-effective-cybersecurity/) on the topic of continuous monitoring.

What you say here is correct, but you're missing the long view. The government knows configuration and patch assessment, so it starts there. The continuous monitoring you're speaking of is what I call "macro" continuous monitoring. The "micro" continuous monitoring -- that which entails your actual security monitoring processes -- is presently implied, but will explicitly follow, especially as the Event Management Automation Protocol and related security automation efforts (i.e. IODEF/MILE) start coming up to speed.

Remember we're talking about the Feds - they......move.......slowly.
Len Marzigliano said…
For an upstart writer, having Richard analyze my article is akin to having Stephen Hawking find your master's dissertation on theoretical physics and write a book about it. I appreciate even further the deference given to the article itself versus the points made regarding RMF and FISMA.

I do feel compelled, however, to dispel the myth that FISMA is the only driver/effort of Information Assurance within the federal government. While popular because of its birth in Congress and its position as a gatekeeper to system funding, FISMA is just a defensive coordinator, not the head coach. Other defensive lines (Cyber CND) and special teams (US-CERT) exist in the game, and threat analysis is an underpinning subsystem of the government's playbook, even as it permeates the NIST Risk Management Framework in subtle but important ways that critics might overlook.

I have a pingback article in the hopper to detail this and constructively build upon the conversation. I also intend to bring important feedback from this topic into the DoD and civilian federal communities to achieve the desired effect of process improvement.

-=Len
Richard Bejtlich said…
Len, you win for best intro to a comment in the history of this blog!

I look forward to your future posts too.
Garrett said…
I see this as a step in the right direction. While I am on the same side of the fence regarding the monitoring issues and ACTUAL PWNAGE problems, the big improvement I see is that this new(ly implemented) system spells out the controls much better, and there isn't as much voodoo magic left up for interpretation by the DAA. Is it still a failure? Maybe, but it is a better failure than we had before.
Ken said…
I agree with Garrett. I see "continuous control monitoring" as a step in the right direction. Once we know we're properly securing systems, then we can move on to continuous threat and attack monitoring.

However, there are two challenges which need to be overcome for continuous monitoring to actually be beneficial:

Information Overload - Too much information detail will confuse senior leadership, and possibly result in someone pushing the panic button over a minor issue.

Micromanagement - Senior leadership will become so obsessed with "100% compliance" that lower-level technical staff will spend too much time chasing systems which are "just slightly out of compliance", and not enough time will be devoted at the local level to actually analyzing intrusion attempts.

Until the above two issues are overcome, continuous control monitoring will only hinder, not help.

Ken
CaffeineSecurity
http://caffeinesecurity.blogspot.com
Anonymous said…
This almost seems to follow the route that PCI has taken. A slow, tortuous path that seems to lead to a reasonable place, but in reality, has no grassy meadow at its end.

Plain and simple: continuous monitoring of controls, at any frequency, is not equivalent to knowing, let alone proving, you are operating a secure environment.
Anonymous said…
As an in-the-trenches guy, all this means is tons more paperwork. The changes get lost, and then we get yet more questions and more paperwork. Add in PIV cards for computers in locked rooms, encrypted disks for totally uninteresting data, and the constantly increasing attacks, and all I see is lots of money and email about nothing while attackers have open season. Plus we have the new federal bureaucracy of security contract managers who can only forward email and award contracts. Then there will be an article in the Post and a committee will ask for yet more paper.
Anonymous said…
As usual, someone who bashes FISMA but doesn't come up with any specifics on something that would help secure our government's computers better... FISMA is far from perfect, but it does work and does give a level of security. If you have something better, then outline it.
Anonymous said…
Interesting analogy, but the statement, "...under the new system, a box can be totally owned but appear "green" on the FISMA dashboard because it's compliant with controls. Why? There is no emphasis on threat monitoring -- incident detection and response -- which is the only hope we have against any real adversary," is misleading, as it implies there is no link between security controls and an incident. In every incident I can think of at my organization, it was a failed control or a known (and accepted) missing control that resulted in the incident, and it was other controls in place that minimized impact and led to detection and recovery. Also, authorizing officials and assessors should be taking threats and threat monitoring information into account. I do agree that it would be useful to better connect more real-time threat data as part of the assessments, and there have been some products where that has been attempted, but I'm not aware of any that were actually successful.

The football analogy doesn't quite work since in football both teams have to follow the same set of rules, etc. With IT security, the good guys have limited budgets, must protect data despite flawed consumer grade products (that system users insist on having as soon as they are available), and must deal with complex architectures and the need for a lot of control exceptions to allow work to get done. The only way the good guys can score is to not let the bad guys score (i.e. we can only defend). The bad guys need very small budgets to attack, can attack from anywhere in the world, and all they have to do is find one bug in one piece of widely distributed software or a single device that is not up-to-date on patches, and they can score against thousands of computers and use their access to generate profit. Also, in football, if your team does well, you attract more fans and get bigger budgets. In security, if your team does well, it's assumed that there are no problems and your budget can be reduced.
Richard Bejtlich said…
Hey Anonymous who said I bash FISMA... this blog has dozens of posts on what the Feds should do instead of FISMA. For example, from 2007: What the Feds Should Do.
Anonymous said…
"Monitor controls" is as vague as existing DISA STIGs that drive DIACAP C&A. The effectiveness will be a "it depends" matter. If a control to be monitored is "existence of affirmative executable constraints" (i.e., NAC/NAP disallowing other than defined executables) then "monitor controls" is effective. It is means running Retina regularly then it is one more bureaucratic boondoggle.
Anonymous said…
Though I like the idea of a common security language and framework, OMG, quick! The aspirins! We are in for a ruder awakening than what we have run into with DIACAP implementations.
Anonymous said…
While I agree that the focus of information security should not be solely on compliance with security controls, compliance does provide a good starting point for managers to gauge the cost of security.
I will build upon the football analogy introduced in the article "Why DIARMF, "Continuous Monitoring," and other FISMA-isms Fail" by Richard Bejtlich. Security controls provide the criteria by which one can score the game. If the continuously monitored security control is the "game score," then it allows the coach to change factors as needed to improve team performance and consequently the score. For instance, it might be necessary to put the fastest 40-yard player in for a deep pass play, or the heaviest player in for a special block. Measuring compliance with security controls provides managers with a method for acquiring the funds necessary to safeguard critical system resources that might otherwise be left exposed. Security costs money, and managers must balance protection of resources against their budgets.
Continuously monitoring security controls at an interval relevant for the security control and the protection of the resource is essential to gauging how well you are playing the game. No resource is absolutely secure. Implementing and continuously monitoring security controls provides a point of origin for further security efforts. Obviously a system that is in total compliance is still vulnerable. However, the risk of a threat agent exploiting vulnerabilities on a resource that is in compliance is reduced. Completely securing a resource is not the idea behind compliance; it is instead focused on reducing risk to a level acceptable to senior managers. RG
Anonymous said…
The problem with DIACAP implementation results from all the additional processes introduced by the people charged with implementing it. DIACAP is outlined within DoDI 8510.01 and utilizes the security controls contained within DoDI 8500.2, which are further explained within the DIACAP Knowledge Service. DIACAP is fundamentally simple.

In its simplest form, DIACAP should work as follows (see the sketch after this list):
- Collect the data required to identify the resource and populate the attributes that comprise the System Identification Profile (SIP)
- Use the Mission Assurance Category (MAC) and Confidentiality Level (CL) to identify the security controls required for resource protection and list them on the DIACAP Implementation Plan (DIP)
- Track the implementation of required security controls using the DIP
- Perform a controls validation test (CVT) to verify and validate (V&V) that the controls have been implemented correctly
- Use the results from the CVT interviews, documentation/demonstrations, observations, and tests (IDOT) to assign a compliance rating to each of the required security controls
- Indicate the compliance rating for each control on the Scorecard
- Transfer all non-compliant (NC) and not applicable (NA) controls to the Plan of Action and Milestones (POA&M)
- Assign a severity category to all non-compliant security controls on the POA&M
- Work the POA&M items with the highest severity category and impact code first
- Meet with the Certifying Authority (CA) to review the DIACAP Artifacts (SIP, DIP, Scorecard, and POA&M) and receive an accreditation determination
- CA recommends that an accreditation decision be made by the Designated Approval Authority (DAA) and/or may require additional POA&M items to be mitigated
- DAA annotates the accreditation decision on the Scorecard
- Maintain compliance posture through periodic CVTs and annual reviews
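
Here is the sketch referenced above, modeling the Scorecard-to-POA&M step from the list in a few lines. The control identifiers and field names are illustrative, not the official artifact format.

```python
# Each control receives a compliance rating on the Scorecard; severity
# categories (CAT I worst) apply only to non-compliant controls.
controls = [
    {"id": "ECVP-1", "rating": "C",  "severity": None},
    {"id": "IAIA-1", "rating": "NC", "severity": "CAT I"},
    {"id": "PRRB-1", "rating": "NC", "severity": "CAT III"},
    {"id": "EBRP-1", "rating": "NA", "severity": None},
]

# Transfer all NC and NA controls to the POA&M...
poam = [c for c in controls if c["rating"] in ("NC", "NA")]

# ...and work the highest severity first (lexicographic order happens to put
# CAT I before CAT III; NA items, which carry no severity, sort last).
poam.sort(key=lambda c: (c["severity"] is None, c["severity"] or ""))
for item in poam:
    print(item["id"], item["rating"], item["severity"])
```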

The introduction of processes that deviate from or attempt to modify DIACAP is what causes the complication. For instance, DIACAP recognizes that a control may be inherited from another source, but there are organizations that attempt to introduce the inheritance of only a portion of a control - the validation step. Understandably, this is done to report to managers which validation step of a control is non-compliant, and consequently which organization is responsible - ideally so the control can be properly funded and fixed, not to shift blame. However, one can argue that if any portion of a security control is inherited, the control is inherited. After all, if any portion of a security control is non-compliant, the whole security control is non-compliant.

DIACAP was introduced to manage the level of risk associated with interconnecting systems. Unfortunately, it has been twisted into something that is in some extreme cases crippling to the organization. The problems that are present in DIACAP will be present in DIARMF unless the implementation is handled more delicately. The people that are responsible need to be properly trained and excessive modification/deviation from the process should be restricted.

For instance, consider a high school report card for a child. The report card lists all the courses that the child is taking and the present condition of the child's performance in each of those courses. Whether the child's parents realize it or not, they either accept the performance and the associated risk or put into action a plan to improve the performance and consequently the grade. Periodic assessments can help confirm that the plan is working, but the true performance gauge is the next report card. I hope the similarities to the DIACAP DIP, Scorecard, and POA&M are obvious. RG
Anonymous said…
All enterprise security frameworks will be control-based. It is the content within a specific control that will address the continuous monitoring requirement. ISSE, SDLC, RMF, DIACAP, etc. all have continuous monitoring components, which are validated through periodic self-assessments. Enterprise security frameworks should be scalable to address new requirements as they arise, without having to overhaul the entire framework, which will only result in wasted tax dollars and a national security posture that diminishes during the transition, only to return to its original state. However, the implementers of the new framework will line their pockets something handsome. The real deficiency is the lack of a motivated and trained workforce.

Liquid
Anonymous said…
"Monitor controls" is as vague as existing DISA STIGs that drive DIACAP C&A....

Add to that the POORLY implemented S-CRAP OVAL code developed by the TAPESTRY contractor crew, and you have a REALLY BIG problem. From what I understand in speaking with their "TECH Support Team," the whole system was designed upside down, i.e., tools to manage the software environment have yet to be developed, even to this very day. They simply jumped in and started coding with (mostly) inexperienced programmers and inexperienced systems software developers, with no true visionary to lead, all while sunsetting the UNIX SRR scripts and the Gold Disk. Now, the GD could probably not have continued, but I believe the U-SRR scripts could have been incorporated into the envisioned futurescape. It is BY FAR easier to script than to attempt to force MITRE's OVAL lexicon (created primarily, and successfully, for "Winders") to extrapolate data from UNIX-like systems.
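
For contrast, a scripted check of the kind the UNIX SRR scripts performed can be this small. The specific test (that /etc/shadow is not world-readable) is an illustrative STIG-style example, not a quote from the actual scripts:

```python
import os
import stat

# One STIG-style test, stated directly: /etc/shadow must not be world-readable.
# Expressing the same check in OVAL requires definition, test, object, and
# state elements spread across a much larger XML document.
mode = os.stat("/etc/shadow").st_mode
print("PASS" if not (mode & stat.S_IROTH) else "FAIL",
      "- /etc/shadow world-readable check")
```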
