{"id":19,"date":"2019-09-23T15:12:42","date_gmt":"2019-09-23T20:12:42","guid":{"rendered":"http:\/\/jimgilsinn.com\/blog\/?p=19"},"modified":"2019-09-23T15:12:42","modified_gmt":"2019-09-23T20:12:42","slug":"some-history-of-security-levels-in-isa-iec-62443","status":"publish","type":"post","link":"https:\/\/jimgilsinn.com\/blog\/2019\/09\/23\/some-history-of-security-levels-in-isa-iec-62443\/","title":{"rendered":"Some History of Security Levels in ISA\/IEC 62443"},"content":{"rendered":"\n<p>One of the comments\/complaints I&#8217;ve heard often about the <a href=\"https:\/\/www.isa.org\/isa99\">ISA\/IEC 62443<\/a> series is that the definition of Security Levels (SLs) is too vague and can&#8217;t be used. Many of these comments come from people that are either safety engineers or have worked around inside and\/or safety systems for long periods of time. Now that 62443-4-2 has been published and 62443-3-3 is being revised, I felt it would be a good idea to give some history behind the decisions made surrounding the SL language.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">SL Definitions<\/h1>\n\n\n\n<p>The ISA\/IEC 62443 series define SLs in terms of four\ndifferent levels (1, 2, 3 and 4), each with an increasing level of security.&nbsp; (SL 0 is implicitly defined as no specific\nrequirements or security protection necessary.)&nbsp;\nThe model for defining SLs depends on protecting against an increasingly\nmore complex threat and differs slightly depending on what type of SL it is\napplied.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>SL 1 \u2013 <\/strong>Protection against casual or coincidental violation<\/li><li><strong>SL 2 \u2013 <\/strong>Protection against intentional violation using simple means with low resources, generic skills and low motivation<\/li><li><strong>SL 3 \u2013 <\/strong>Protection against intentional violation using sophisticated means with moderate resources, IACS specific skills and moderate motivation<\/li><li><strong>SL 4 \u2013 
<\/strong>Protection against intentional violation using sophisticated means with extended resources, IACS-specific skills and high motivation<\/li><\/ul>\n\n\n\n<p>These definitions were intentionally written vaguely so that they could be used\nin various instances without the need to change the overall format.\nThere are a few main points to consider with these definitions.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Risk Reduction Factors (RRFs) &amp; Safety Integrity Levels (SILs)<\/h1>\n\n\n\n<p><em>NOTE: I\u2019m not a safety engineer, so my explanation is\nstrictly from what I\u2019ve learned from working alongside safety engineers and Google\nsearches.&nbsp; There are probably some mistakes\nand oversimplifications in this section.&nbsp;\nI include this information to help justify why we went the direction we\ndid when developing 62443-3-3.<\/em><\/p>\n\n\n\n<p>When you look up \u201crisk reduction\u201d, many different topics come up, including financial, medical, disaster, and safety. For industrial control systems (ICS), most professionals look to disaster recovery and safety when thinking about risk reduction.\u00a0 For safety systems, the level of risk reduction that a system needs to bring its risk within a desired range is called the risk reduction factor and is measured by <a href=\"https:\/\/en.wikipedia.org\/wiki\/Safety_integrity_level\">Safety Integrity Levels (SILs)<\/a>.<\/p>\n\n\n\n<p>When the safety of a system is evaluated, a native risk factor is determined by using a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Process_hazard_analysis\">Process Hazards Analysis (PHA)<\/a>. 
\u00a0These techniques often use the results of a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Hazard_and_operability_study\">Hazards and Operability (HAZOP)<\/a> study and\/or a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Failure_mode_and_effects_analysis\">Failure Modes and Effects Analysis (FMEA)<\/a> to determine the overall hazards associated with the process and its equipment.\u00a0 A HAZOP examines the process itself, looking for ways in which it can cause undesirable consequences and impacts.\u00a0 This allows designers to focus on the areas of the process that may be riskier and helps to prioritize the mitigation efforts.\u00a0 An FMEA looks at the ways in which a system can fail and determines the potential consequences and impacts of that failure. \u00a0It takes a feed-forward point of view, meaning it starts from initiating events and then examines the potential consequences and impacts of those specific initiating events.<\/p>\n\n\n\n<p>From the PHA, an organization can determine the processes\nand systems that may have a native risk outside the level that the\norganization has deemed tolerable.&nbsp; If\nthat is the case, the organization needs to apply risk reduction measures to\nbring it within a tolerable level.&nbsp; The\nrisk reduction factor (RRF) needed for the system is determined, and a target\nSIL is assigned based on whether the system runs continuously or only at select\ntimes.&nbsp; SIL ratings are assigned a number\nfrom 1 to 4, with 1 being the lowest RRF and 4 being the highest.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Risk Reduction Based vs. 
Attacker Based<\/h1>\n\n\n\n<p>One major thing to note is that this discussion of safety\nsystems, and all of its analysis and mitigation techniques, revolves around accidental\nand unintentional failures of a system.&nbsp;\nThey usually don\u2019t consider the intentional circumvention of safety\nsystems or sabotage.&nbsp; In those cases, all\nof the safety instrumented functions (SIFs) applied to a system may be unable to prevent a critical situation with\npotentially disastrous consequences and impacts.<\/p>\n\n\n\n<p>This is where the application of risk reduction differs greatly\nbetween safety and security.&nbsp;\nSecurity, especially for ICS, is mostly about preventing a\ncondition from arising that results from the accidental or intentional\ncircumvention of policies, procedures, practices, and technology.<\/p>\n\n\n\n<p>Risk reduction, therefore, cannot be calculated completely quantitatively\nfor security.&nbsp; The mindset of an attacker\ncannot be broken down into any mathematical formula that can then be used to\ndetermine a strict, order-of-magnitude measurement of RRF.&nbsp; Antivirus companies, like Symantec, McAfee,\nand TrendMicro, that operate in the information technology (IT) and business\nenvironment have enough statistical sampling from their installed clients that\nthey can produce some level of approximation; however, it is highly biased by\ntheir sample set.&nbsp; They give some\nindication of trends in the overall IT\/business attacker mindset and techniques,\nbut they cannot be used as direct input for risk calculations.&nbsp; When it comes to attackers in the ICS\nenvironment, there is not enough statistical data for designers and defenders\nto draw any logical conclusions about motivation or specific techniques.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Writing the 62443-3-3 Standard<\/h1>\n\n\n\n<p>While writing the 62443-3-3 standard, we found that we\nwanted some logical separation between different sets of requirements and also\nwanted to be 
able to provide some justification as to why particular\nrequirements were placed in those different sets.&nbsp; We knew that we would never reach complete\nagreement on which requirements belonged in which buckets; however, we could\nat least show that some thought went into the choices.<\/p>\n\n\n\n<p>The four main groups that we chose were as follows:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>(Unintentional) Personnel violating policies,\nprocedures, and techniques inadvertently while trying to do their daily jobs.<\/li><li>(Intentional) Generic business-style attackers,\nscript kiddies, bleedover from the business network, etc.<\/li><li>(Intentional) Industrial-aware attacker or\ninsider, but not someone with elevated privileges or detailed knowledge of the\nsystems.<\/li><li>(Intentional) Industrial-aware attacker or insider\nwith elevated privileges and detailed knowledge of the systems.<\/li><\/ul>\n\n\n\n<p>When we looked at the first level (SL 1), we generally thought\nabout well-meaning personnel who were trying to do their job in a way that\nmade sense to them.&nbsp; They may have\nviolated policies, procedures, or techniques, but it was not intentional.&nbsp; These may be cases like personnel posting the\npassword for the engineering workstation on the monitor or sidestepping a\npolicy in order to get something done in a time-critical situation.&nbsp; They may go against basic security\nprinciples, but it is done with the intention of getting their normal job done.<\/p>\n\n\n\n<p>The second level (SL 2) is the first where we thought about\nintentional attacks.&nbsp; These would\ngenerally not be directed ICS attacks, but would be the typical bleedover from\nthe business network.&nbsp; They would be\ngeneric types of malware or Internet-based attackers that don\u2019t have an\nunderstanding of ICS and ICS-specific attacks.&nbsp;\nFor the most part, these types of attackers would be interested in using\nthe ICS for some sort of financial gain, 
such as ransomware, botnets,\nindustrial espionage, etc.<\/p>\n\n\n\n<p>The third level (SL 3) is where ICS-specific attacks first\nappear.&nbsp; This type of attacker might be\nsomeone who\u2019s worked in the ICS environment, or it might be an insider in an\norganization who has access to some systems but not everything.&nbsp; The attacker in this case might be a\ndisgruntled employee or a competing organization that is trying to infiltrate\nthe systems.&nbsp; These would generally be\nnormal operations staff, not those with detailed knowledge of the systems and\ndefenses.<\/p>\n\n\n\n<p>The fourth level (SL 4) is the highest level of\ndefense.&nbsp; This is the level where \u201cTrust\nNo One\u201d is the general rule of thumb.&nbsp;\nThe attacker in this case would have deep insider knowledge of the\nsystems and processes being utilized.&nbsp;\nThese would generally be engineering staff, system administrators, database\nadministrators, or other personnel with elevated privileges or extensive access.<\/p>\n\n\n\n<p>While some may classify SL 4 as \u201cnation state\u201d defense or\nbeing able to defend against \u201cStuxnet 2.0\u201d, this would not really be the\ncase.&nbsp; In most cases, if a \u201cnation state\u201d\nactually came after an organization, it is doubtful that any level of\ndefense could completely eliminate the possibility of being penetrated.&nbsp; At best, an organization would hope that\nit is able to detect the attacker\u2019s presence and initiate some sort of\nremediation effort prior to the attacker reaching their final objective,\nwhether that is collecting information or causing damage.<\/p>\n\n\n\n<p>These four categories of attacker allowed us, as the writers\nof the 62443-3-3 standard, to look through our set of requirements and decide\nwhether a given requirement was really trying to defend against a known insider versus\na general business virus. 
&nbsp;The levels\nwere never intended to have quantifiable, distinct, easily identifiable separations.&nbsp; They were not intended to be used as\nabsolute, set-in-stone values or to be used by themselves to define the security\napplied to the system.&nbsp; They were only\nintended to allow us, as the authors, to convey to the reader of the standard\nsome level of justification for why certain requirements ended up in the levels\nthey did. <\/p>\n","protected":false},"excerpt":{"rendered":"<p>One of the comments\/complaints I&#8217;ve often heard about the ISA\/IEC 62443 series is that the definition of Security Levels (SLs) is too vague and can&#8217;t be used. Many of these comments come from people who are either safety engineers or have worked in and around safety systems for long periods of time. Now that 62443-4-2 &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/jimgilsinn.com\/blog\/2019\/09\/23\/some-history-of-security-levels-in-isa-iec-62443\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Some History of Security Levels in ISA\/IEC 
62443&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,2],"tags":[14,13,12],"class_list":["post-19","post","type-post","status-publish","format-standard","hentry","category-security","category-technology","tag-14","tag-security","tag-standards","entry"],"_links":{"self":[{"href":"https:\/\/jimgilsinn.com\/blog\/wp-json\/wp\/v2\/posts\/19","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jimgilsinn.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jimgilsinn.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jimgilsinn.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/jimgilsinn.com\/blog\/wp-json\/wp\/v2\/comments?post=19"}],"version-history":[{"count":1,"href":"https:\/\/jimgilsinn.com\/blog\/wp-json\/wp\/v2\/posts\/19\/revisions"}],"predecessor-version":[{"id":20,"href":"https:\/\/jimgilsinn.com\/blog\/wp-json\/wp\/v2\/posts\/19\/revisions\/20"}],"wp:attachment":[{"href":"https:\/\/jimgilsinn.com\/blog\/wp-json\/wp\/v2\/media?parent=19"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jimgilsinn.com\/blog\/wp-json\/wp\/v2\/categories?post=19"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jimgilsinn.com\/blog\/wp-json\/wp\/v2\/tags?post=19"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}