Some History of Security Levels in ISA/IEC 62443

One of the comments/complaints I’ve often heard about the ISA/IEC 62443 series is that the definition of Security Levels (SLs) is too vague and can’t be used. Many of these comments come from people who are either safety engineers or have worked in and around safety systems for long periods of time. Now that 62443-4-2 has been published and 62443-3-3 is being revised, I felt it would be a good idea to give some history behind the decisions made surrounding the SL language.

SL Definitions

The ISA/IEC 62443 series defines SLs in terms of four different levels (1, 2, 3 and 4), each with an increasing level of security. (SL 0 is implicitly defined as meaning no specific requirements or security protection are necessary.) The model for defining SLs is based on protecting against an increasingly capable threat, and it differs slightly depending on the type of SL to which it is applied.

  • SL 1 – Protection against casual or coincidental violation
  • SL 2 – Protection against intentional violation using simple means with low resources, generic skills and low motivation
  • SL 3 – Protection against intentional violation using sophisticated means with moderate resources, IACS specific skills and moderate motivation
  • SL 4 – Protection against intentional violation using sophisticated means with extended resources, IACS specific skills and high motivation

These definitions were intentionally left vague so that they could be used in a variety of contexts without changing the overall format. There are a few main points to consider with these definitions.
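As a purely illustrative reading (nothing the standard itself defines), the SL 2 through SL 4 definitions above vary along four informal dimensions: means, resources, skills, and motivation. A minimal Python sketch of that reading might look like this:

    # Purely illustrative: the SL 2-4 definitions above, broken out along the
    # four informal dimensions they mention (means, resources, skills, motivation).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ThreatProfile:
        means: str
        resources: str
        skills: str
        motivation: str

    # SL 1 covers casual or coincidental violations, so no attacker profile applies.
    SL_THREAT_PROFILES = {
        2: ThreatProfile("simple", "low", "generic", "low"),
        3: ThreatProfile("sophisticated", "moderate", "IACS specific", "moderate"),
        4: ThreatProfile("sophisticated", "extended", "IACS specific", "high"),
    }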

Risk Reduction Factors (RRFs) & Safety Integrity Levels (SILs)

NOTE: I’m not a safety engineer, so my explanation is strictly from what I’ve learned from working alongside safety engineers and Google searches.  There are probably some mistakes and oversimplifications in this section.  I include this information to help justify why we went the direction we did when developing 62443-3-3.

When you look up “risk reduction”, many different topics come up, including financial, medical, disaster, and safety. For industrial control systems (ICS), most professionals look to disaster recovery and safety when thinking about risk reduction. For safety systems, the level of risk reduction that a system needs to bring it within a desired range is called the risk reduction factor, and it is expressed in terms of Safety Integrity Levels (SILs).

When the safety of a system is evaluated, a native risk factor is determined using a Process Hazards Analysis (PHA). These techniques often use the results of a Hazards and Operability (HAZOP) study and/or a Failure Modes and Effects Analysis (FMEA) to determine the overall hazards associated with the process and its equipment. A HAZOP examines the process itself, looking for ways in which the process can cause undesirable consequences and impacts; this lets designers focus on the areas of the process that may be riskier and helps prioritize mitigation efforts. An FMEA examines the ways in which a system can fail and determines the potential consequences and impacts of each failure. It works from a feed-forward point of view, starting with initiating events and then tracing the potential consequences and impacts of those specific initiating events.

From the PHA, an organization can identify the processes and systems whose native risk is outside the level the organization has deemed tolerable. If that is the case, the organization needs to apply risk reduction measures to bring the risk within a tolerable level. The risk reduction factor (RRF) needed for the system is determined, and a target SIL is assigned based on whether the safety function operates continuously or only on demand. SIL ratings are assigned a number from 1 to 4, with 1 representing the lowest RRF and 4 the highest.
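As a rough illustration only, the RRF-to-SIL mapping for a low-demand safety function can be sketched as below. The numeric bands come from the commonly cited IEC 61508 low-demand tables (RRF = 1 / PFDavg), not from this article or from 62443, so treat them as an assumption for the sake of the example.

    # Rough sketch: map a required risk reduction factor (RRF) to a target SIL
    # for a low-demand safety function. Bands follow the commonly cited
    # IEC 61508 low-demand tables (RRF = 1 / PFDavg); not part of ISA/IEC 62443.
    def target_sil(required_rrf: float) -> int:
        """Return a target SIL (1-4) for a required RRF, or 0 if no SIL is needed."""
        if required_rrf < 10:
            return 0      # below SIL 1; other safeguards may be enough
        if required_rrf <= 100:
            return 1      # PFDavg between 1e-2 and 1e-1
        if required_rrf <= 1_000:
            return 2      # PFDavg between 1e-3 and 1e-2
        if required_rrf <= 10_000:
            return 3      # PFDavg between 1e-4 and 1e-3
        if required_rrf <= 100_000:
            return 4      # PFDavg between 1e-5 and 1e-4
        raise ValueError("RRF beyond SIL 4; the process itself needs redesign")

    # Example: a hazard needing roughly 500x risk reduction targets SIL 2.
    print(target_sil(500))  # -> 2

The point of the sketch is simply that safety risk reduction is an order-of-magnitude, quantitative exercise, which is exactly the property that breaks down for security, as discussed next.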

Risk Reduction Based vs. Attacker Based

One major thing to note is that this discussion of safety systems, and all of the analysis and mitigation techniques, revolves around accidental and unintentional failures of a system. They usually don’t consider the intentional circumvention of safety systems or sabotage. In those cases, all of the safety instrumented functions (SIFs) applied to a system may be useless in preventing a critical situation with potentially disastrous consequences and impacts.

This is where the application of risk reduction differs significantly between safety and security. Security, especially for ICS, is mostly about preventing conditions that arise from the accidental or intentional circumvention of policies, procedures, practices, and technology.

Risk reduction, therefore, cannot be calculated in a completely quantitative way for security. The mindset of an attacker cannot be reduced to a mathematical formula that can then be used to determine a strict order-of-magnitude measurement of RRF. Antivirus companies like Symantec, McAfee, and TrendMicro, which operate in the information technology (IT) and business environment, have enough statistical sampling from their installed clients to produce some level of approximation; however, it is heavily biased by their sample sets. They give some indication of trends in the overall IT/business attacker mindset and techniques, but they cannot be used as direct input for risk calculations. When it comes to attackers in the ICS environment, there is not enough statistical data for designers and defenders to draw any sound conclusions about motivation or specific techniques.

Writing the 62443-3-3 Standard

While writing the 62443-3-3 standard, we found that we wanted some logical separation between different sets of requirements, and we also wanted to be able to provide some justification for why particular requirements were placed in those different sets. We knew that we would never reach complete agreement on which requirements belonged in which buckets, but we could at least show that some thought went into the choices.

The four main groups that we chose were as follows:

  • (Unintentional) Personnel violating policies, procedures, and techniques inadvertently while trying to do their daily jobs.
  • (Intentional) Generic business-style attackers, script kiddies, bleedover from the business network, etc.
  • (Intentional) ICS-aware attacker or insider, but not someone with elevated privileges or detailed knowledge of the systems.
  • (Intentional) ICS-aware attacker or insider with elevated privileges and detailed knowledge of the systems.

When we looked at the first level (SL 1), we generally thought about well-meaning personnel who were trying to do their job in a way that made sense to them. They may have violated policies, procedures, or techniques, but it was not intentional. These may be cases like personnel posting the password for the engineering workstation on the monitor or sidestepping a policy in order to get something done in a time-critical situation. They may go against basic security principles, but it is done with the intention of getting their normal job done.

The second level (SL 2) is the first where we thought about intentional attacks. These would generally not be directed ICS attacks, but rather the typical bleedover from the business network. They would be generic types of malware or Internet-based attackers that don’t have an understanding of ICS and ICS-specific attacks. For the most part, these types of attackers would be interested in using the ICS for some sort of financial gain, such as ransomware, botnets, or industrial espionage.

The third level (SL 3) is where ICS-specific attacks first appear. This type of attacker might be someone who has worked in the ICS environment, or it might be an insider in an organization who has access to some systems but not everything. The attacker in this case might be a disgruntled employee or a competing organization trying to infiltrate the systems. These would generally be normal operations staff, not those with detailed knowledge of the systems and defenses.

The fourth level (SL 4) is the highest level of defense.  This is the level where “Trust No One” is the general rule of thumb.  The attacker in this case would have deep insider knowledge of the systems and processes being utilized.  These would generally be engineering staff, system administrators, database administrators, or other personnel with elevated privileges or extensive access.

While some may classify SL 4 as “nation state” defense or being able to defend against “Stuxnet 2.0”, this is not really the case. In most cases, if a “nation state” actually came after an organization, it is doubtful that any level of defense could completely eliminate the possibility of being penetrated. At best, the organization could hope to detect the attacker’s presence and initiate some sort of remediation effort before the attacker reaches their final objective, whether that is collecting information or causing damage.

These four categories of attacker allowed us, as the writers of the 62443-3-3 standard, to look through our set of requirements and decide whether a given requirement was really trying to defend against, say, a known insider versus a generic business virus. The levels were never intended to have quantifiable, distinct, easily identifiable separations. They were not intended to be used as absolute, set-in-stone values, or to be used by themselves to define the security applied to a system. They were only intended to allow us, as the authors, to convey to the reader of the standard some level of justification for why certain requirements ended up in the levels they did.