Friday
Oct 30, 2015

Safety, the 4th pillar of the CIA Triad? 

Every security engineer is no doubt familiar with the critical principles of security, namely the CIA Triad: Confidentiality, Integrity and Availability (also called the AIC Triad).[1] If the Gartner Group has its way, there will soon be a fourth pillar to the CIA Triad: Safety. In CISSP terminology, safety is related to the term “safeguards,” the countermeasures put in place to mitigate possible risks.

Earl Perkins, Research VP at Gartner, opined in a recent webinar (Top Security Trends for 2015-2016, September 1, 2015) that safety is so important in the cybersecurity arena that there will soon be four principles of security: CIAS. He made these remarks in the context of a discussion about the damage done to people, critical infrastructure and organizations by cyberattacks, breaches and other cyber risks. What he calls the “enterprization” of devices and applications is making safety even more important than in the past: the scale is grander and the risks greater. In the Internet of Things (IoT), many devices are sensor-based and connected to the Internet, which opens them up to attack.

Some of the more grievous safety concerns include identity theft and malicious tampering with medical devices (both embedded and non-embedded) so as to cause bodily harm or even death. Not only can hackers steal our identities, corrupt our health records and clean out our bank accounts, they can now take over the braking system of a car, as demonstrated recently on TV newscasts. Sci-fi depictions of evil hackers tampering with heart monitors and pacemakers are not so far-fetched, and they have raised concerns at the FDA.

This all raises the question as to what countermeasures are being put in place to mitigate these risks. Over the next few years, reactive monitoring will give way to proactive detection and predictive analysis of applications, apps and systems, including human users. Some of the solutions will be based on Software Defined Perimeter (SDP) technology, and, increasingly, “Software Defined Anything.”

So, end users and enterprises alike will increasingly demand safety, for both have much to lose without it.

Contributed by: Judy Fincher


[1] Harris, Shon, CISSP Exam Guide, Fourth Edition, p. 61

Tuesday
Oct 13, 2015

Data Provenance (DPROV) for Health IT

Data Provenance (DPROV) in the health Information Technology (IT) context refers to the creation of health IT data and the tracking of its permutations throughout its life cycle.  As the demand for data exchange increases, the need for confidence in the “authenticity, trustworthiness and reliability” of the data being shared also increases, to ensure robust privacy, safety, and security-enhanced health information exchange. [Federal Register, ONC Certification Criteria]

Synopsis of the Trend

Health Level Seven International (HL7) and the Office of the National Coordinator for Health IT (ONC) coined the term “Data Provenance” to refer to the “evidence and attributes” that describe the origin of electronic health information as it is captured, modified and exchanged throughout its lifespan. [Federal Register, NPR]  DPROV refers specifically to the lifecycle and chain of trust of a record entry in a health information system, whether the information is owned by the health care provider or, in the case of Patient-Generated Health Data (PGHD), by the patient.  DPROV as described in the draft HL7 Data Provenance Implementation Guide [HL7 DPROV IG (DSTU)] is based on the HL7 Clinical Document Architecture (CDA), and it is designed for use both with CDA and with CDA's probable successor for Electronic Health Record (EHR) data content and exchange, Fast Healthcare Interoperability Resources (FHIR), a ground-up architecture for health IT data definition and exchange currently being developed and piloted. [FHIR, Project Scope Statement]

Why it Matters

Before DPROV, there was no way to ensure that vendors and organizations could handle the provenance of health IT data in a well-defined, consistent way. Issues of authenticity, veracity and quality invariably arose when health information, especially Personally Identifiable Information (PII), was created, exchanged and integrated across multiple organizations, parties and systems. As the sheer volume of health information exploded in the past decade, the demand to track the provenance of that data increased with it. There was no way to know whether the information had been tampered with (unauthorized use) or corrupted for malicious intent (medical identity theft). Even when provenance systems were implemented, there were no guarantees they could communicate: interoperability of provenance data was not possible because there was no minimum set of provenance data elements, metadata and vocabulary to allow the data to be routed correctly and kept out of the wrong (unauthenticated) hands.

“Confidence in the authenticity, trustworthiness and reliability of the data being shared is fundamental to robust privacy-, safety- and security-enhanced health information exchange.” [Data+Provenance+Charter] The DPROV CDA Implementation Guide (IG) presents a standardized way of capturing, retaining and exchanging the provenance (metadata) of health data, using the existing CDA standard (and, in the future, FHIR) as the vehicle for provenance data content and exchange. Ultimately, DPROV will support use cases in clinical care, interventions, analysis, decision making and clinical research, as well as business and legal requirements such as “chain of trust” and “chain of custody,” with evidentiary support and clinical decision support as examples.
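To make the idea of provenance metadata concrete, here is a hedged sketch of one provenance record expressed as a FHIR-style Provenance resource in Python. The field names follow the FHIR Provenance resource; the document references, codes and timestamp are invented for illustration, not drawn from the IG.

```python
import json

# Hypothetical provenance for one amended EHR document. Field names follow
# the FHIR Provenance resource; identifiers and values are illustrative only.
provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "DocumentReference/discharge-summary-123"}],
    "recorded": "2015-10-13T14:05:00Z",
    "activity": {"coding": [{"code": "UPDATE"}]},  # how the record changed
    "agent": [{
        "role": {"coding": [{"code": "author"}]},          # who acted
        "actor": {"reference": "Practitioner/dr-example"},
    }],
    "entity": [{
        # the prior version this update was derived from (chain of trust)
        "role": "revision",
        "reference": {"reference": "DocumentReference/discharge-summary-122"},
    }],
}

print(json.dumps(provenance, indent=2))
```

Because the metadata travels as ordinary structured content, receiving systems can verify who touched a record, when, and which prior version it was derived from.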

Standardization Initiatives

Data Provenance is an emerging healthcare standard. Work is underway in several international standards organizations, including HL7, ISO and W3C, to standardize DPROV, expand its scope and applicability, and create a common reference model.  Last year HL7 balloted the DPROV CDA Implementation Guide as a Draft Standard for Trial Use.  The IG is structured as an overlapping set of templates that allows prospective users to pick and choose, “cafeteria style,” the functionality and outcomes they need, depending on their business requirements. Thus, the IG structure can be applied to any CDA EHR, irrespective of the use case or model applied. ISO has issued two technical specifications, ISO/TS 8000-110 and ISO/TS 8000-120; the latter, issued in 2009, includes a Unified Modeling Language (UML) conceptual model for provenance data. W3C, an international Web standards organization, has published four PROV specifications: the PROV Data Model (PROV-DM), PROV Ontology (PROV-O), PROV Constraints and PROV Notation (PROV-N). [Soho] Currently, the models are not reconciled.
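For contrast with the CDA template approach, the W3C PROV data model expresses provenance as statements relating entities, activities and agents. A minimal sketch in Python, using PROV-DM relation names (the record identifiers are invented for illustration):

```python
# W3C PROV-DM statements modeled as plain tuples. The vocabulary
# (entity/activity/agent, used, wasGeneratedBy, wasDerivedFrom,
# wasAttributedTo) comes from PROV-DM; the "ehr:" names are made up.
statements = [
    ("entity", "ehr:note-v2"),
    ("entity", "ehr:note-v1"),
    ("activity", "ehr:amend-note"),
    ("agent", "ehr:dr-example"),
    ("used", "ehr:amend-note", "ehr:note-v1"),
    ("wasGeneratedBy", "ehr:note-v2", "ehr:amend-note"),
    ("wasDerivedFrom", "ehr:note-v2", "ehr:note-v1"),
    ("wasAttributedTo", "ehr:note-v2", "ehr:dr-example"),
]

def derivation_chain(stmts, entity):
    """Walk wasDerivedFrom links to recover an entity's lineage."""
    chain = [entity]
    while True:
        prior = next((s[2] for s in stmts
                      if s[0] == "wasDerivedFrom" and s[1] == chain[-1]), None)
        if prior is None:
            return chain
        chain.append(prior)

print(derivation_chain(statements, "ehr:note-v2"))  # → ['ehr:note-v2', 'ehr:note-v1']
```

Reconciling a graph model like this with the CDA template approach is exactly the kind of common-reference-model work the standards bodies have yet to finish.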

Summary

DPROV refers to the tracking of evidence and attributes describing the origin of an EHR, encoded in CDA, and supported throughout the lifespan of the data as it is created, modified and exchanged. Work is underway in several international standards organizations, including HL7, ISO and W3C, to expand its scope and applicability and create a common reference model. Its legitimacy and maturity are indicated by the fact that the Centers for Medicare & Medicaid Services (CMS) recently (March 20, 2015) included the HL7 Data Provenance Implementation Guide in its Notice of Proposed Rulemaking for Meaningful Use Stage 3 (MU3).

Although DPROV did not make the final cut this time, its inclusion in the proposed Certification Criteria attests to its maturity and growing acceptance among clinicians, researchers, vendors, security and privacy experts, and the U.S. government, notably at CMS, ONC and the Federal Health Architecture/FHIM. The ONC Standards & Interoperability Initiative continues to work on Data Provenance.

Contributed by: Judy Fincher

Tuesday
Oct 13, 2015

Authenticator Management

Until passwords are extinct, we must manage them.

After passwords are extinct, we must manage their replacements.

For those familiar with Federal Information Security Management Act (FISMA) evaluations, the concept of authenticator management should be familiar from security control IA-5.  As with many security controls, this short designation covers a considerable number of requirements. This series of blog posts will review the contents of the control requirements, how they are commonly implemented and how they are assessed, with the overall goal of determining whether improvements are needed to implement and assess the security control properly.

This blog post covers a definition of authenticator and related terms.  Subsequent posts will explain the status of the Federal Government’s Identity and Credential Access Management (ICAM) security program, and unpack the requirements of the FISMA security control IA-5 and its control enhancements.

What are Authenticators?

Authenticators come in the form of passwords (aka PINs) or cryptographic keys.

Authenticator is defined by National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 as “the means used to confirm the identity of a user, processor, or device (e.g., user password or token).”  The closest equivalent term in NIST SP 800-63-2 (Electronic Authentication Guideline) is the Token Secret (not, unfortunately for consistency, the Token Authenticator), which is a kind of Long Term Authentication Secret. In practice, authenticators come in the form of passwords and keys.
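The Token Secret / Token Authenticator distinction can be made concrete with a one-time-password token: a long-term secret key (the token secret) and a moving counter yield a short-lived code (the token authenticator). A minimal sketch of the RFC 4226 HOTP algorithm in Python:

```python
import hashlib
import hmac
import struct

def hotp(token_secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: derive a short-lived token authenticator
    from a long-term token secret and a moving counter."""
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(token_secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: ASCII secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Note that the six-digit code is disposable, but the key passed in is exactly the kind of long-term secret that authenticator management is about.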

Biometric Secrets are not Authenticators.

Authenticators are distinct from biometric secrets, which are also long term data that pertain to an individual, but unlike authenticators, biometric secrets are bound to a person’s physical characteristics and are not subject to the same sort of management functions. 

Physical Tokens are not Authenticators, but they can contain Authenticators

The term Token is overloaded in the security world, carrying the meaning of a container for keys or other secrets, deriving from the use of the term in the PKCS#11 Cryptographic Token Interface standard.  With the development of Federal Information Processing Standard (FIPS) 140-2, the term Cryptographic Module is now a more precise expression of what PKCS#11 tokens embody.  Whether implemented in hardware or software, a token or cryptographic module is a container that stores, and may or may not provide management functions for, authenticators.  For example, FIPS 201 Personal Identity Verification (PIV) cards are physical tokens that bind an identity to an authentication key, so the actual authenticator on a PIV card is the PIV authentication key.

Why Manage Authenticators?

Authenticators are encountered at many levels – operating system, database, application, communication protocols – and everywhere they are implemented there is a potential point of security weakness.  Even if a product's password functions are carefully implemented by its developers, users still have a responsibility on their end: to participate in a system that minimizes risk by managing passwords and keys according to industry best standards and practices.

In brief, authenticator management is important because passwords are a terrible source of security risk, especially when they are poorly managed.  There are many efforts underway to replace passwords, including the National Strategy for Trusted Identities in Cyberspace, whose spokespeople always include a pitch for the destruction of passwords.  The FIDO (Fast IDentity Online) Alliance has made progress with the Universal Second Factor (U2F) and the Universal Authentication Framework (UAF), publishing version 1.0 specifications of each on December 9, 2014.  However, until passwords can be eliminated, there are ways to minimize the risks.
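One such risk-minimizing practice is to store only a salted, deliberately slow hash of each password, so that a stolen credential file cannot simply be read back and bulk cracking becomes expensive. A sketch using Python's standard library (the iteration count and function names here are illustrative, not prescribed by IA-5):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune for your hardware

def hash_password(password: str, salt: bytes = None):
    """Derive a salted PBKDF2-HMAC-SHA256 digest.
    Store (salt, digest); never store the password itself."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # → True
print(verify_password("password1", salt, digest))                     # → False
```

The per-password salt defeats precomputed rainbow tables, and the high iteration count is precisely what makes the week-long cracking runs described below far less productive.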

For example, according to Krebs on Security's recent article about the Target breach of 2013, auditors and penetration testers found that Target's management of passwords had a number of flaws, including weak and default passwords and valid passwords being shared in files on the network; overall password strength was weak enough that the penetration test team could crack 86% of the passwords within a week.  These practices have left Target facing a class action lawsuit from banks whose customers were affected by the breach, so there are very serious consequences for an organization that does not follow sound authenticator management practices.

Stay tuned for the next article on the U.S. Federal Government’s Identity and Access Management Security Program.

Contributed by: Scott Shorter

Thursday
Aug 06, 2015

Healthcare in the Cloud

Technologies in the healthcare IT industry are converging and far outpacing the legacy systems used by hospitals and healthcare providers. Cloud technology offers a replacement for these legacy systems, with easier and faster access to healthcare data depending on how it is stored (i.e., in a public, private or hybrid cloud). Cloud computing brings significant benefits to the healthcare sector.

Cloud computing will “enable healthcare related businesses to adapt their business models; develop new capabilities quickly and cost-effectively; and connect, collaborate and share information more flexibly.” [1]

With a need for mobile healthcare expansion, cloud computing helps ensure seamless, personalized healthcare anywhere in the world, through ubiquitous and secure data sharing. Cloud computing supports the worldwide move towards Electronic Health Records (EHRs) by opening up the prospect of patients’ digitized health information (medical histories, scan images, blood types, allergies,  etc.) flowing freely across the world, accessible via secure authentication to people authorized by the patient.

Cloud computing caters to the following key technology requirements of the healthcare industry:

  • Enables on-demand access to computing and large storage facilities which are not provided in traditional IT environments.
  • Supports big data sets for EHRs, radiology images and genomic data, offloading a burdensome task from hospital IT departments.
  • Facilitates the sharing of EHRs among authorized physicians and hospitals in various geographic areas, providing more timely access to life-saving information and reducing the need for duplicate testing.
  • Improves the ability to analyze and track information (with the proper information governance) so that data on treatments, costs, performance, and effectiveness studies can be analyzed and acted upon. [2]

Even with all of the great advantages, there are still security and privacy challenges to the adoption of cloud computing within the healthcare industry. Data maintained in a cloud may contain personal, private or confidential information, such as healthcare-related information that requires the proper safeguards to prevent disclosure, compromise or misuse. Globally, concerns related to data jurisdiction, security, privacy and compliance are impacting adoption by healthcare organizations. [3]

Despite these concerns, the healthcare industry’s migration to the cloud is inevitable—driven by an irresistible blend of competitive realities and patient demand.

Contributed by: Alicia Brown

References:

[1] Accenture: http://www.accenture.com/sitecollectiondocuments/pdf/accenture-new-era-healthcare-industry-cloud-computing-changes-game.pdf

[2] Cloud Standards Consumer Council: http://www.cloud-council.org/cscchealthcare110512.pdf

[3] http://searchhealthit.techtarget.com/tip/Must-knows-about-cloud-computing-in-healthcar