Tuesday, May 10, 2016

Secure Boot Simplified

In the world of bootable devices (PCs, mobile phones, automotive IVI systems, etc.), secure boot is a mechanism used by device OEMs, primarily, to ensure that users cannot load and run software that was not provided by the OEMs themselves. This helps the OEMs (and operators) lock the device to run only their own software. For example, US operators give devices to their subscribers at subsidized rates, and they do not want users to change the software and start using the device with some other operator.

Secure boot adds cryptographic checks to each stage of the secure-world boot process, in order to verify and ensure the integrity of all the boot-loaders running on the device. In general, several boot-loaders run on a device, and they are verified one after the other. The first boot-loader that runs on the device is called the Primary Boot Loader (PBL), and it is verified with the help of a cryptographic key (the OEM public key) which is stored in the device hardware itself. This key is commonly referred to as the base of the Hardware Root of Trust.

The secure boot mechanism is achieved by utilizing cryptographic support provided by both hardware and software. The cryptographic process of secure boot works on the principle of asymmetric-key algorithms like RSA and digital signatures: the OEM signs the boot-loader with its own private key, and during the boot process this signature is validated using the public key present on the device itself. If the signature validation succeeds, the boot process continues in a normal manner; if it fails, the boot process shows a warning/error to the user before proceeding further.

The complete life cycle of the secure boot process is as follows:

Bootloader Image Signing Process

  1. Generate the unsigned boot-loader image.
  2. Generate an RSA private/public key pair.
    • The private key of this pair is kept in highly confidential storage at the OEM's premises. As this is the OEM's private key, and it will be used to sign update packages as well (if the OEM updates the software later on), access to this key is strictly restricted to authorized people at the OEM's location.
    • The public key of the pair, which is used as the root of trust, needs to be stored in on-SoC ROM (on the chipset). The SoC ROM is the only component in the system that cannot be trivially modified or replaced by simple reprogramming attacks. However, on-SoC storage of the root-of-trust key can be problematic: embedding it in the on-SoC ROM implies that all devices use the same public key, because all the devices use the same chipset. This makes them vulnerable to class-break attacks if the corresponding private key is stolen or successfully reverse-engineered. On-SoC One-Time-Programmable (OTP) hardware, such as poly-silicon fuses, can be used to store unique values in each SoC during device manufacture. This enables a number of different key values to be stored in a single class of devices, reducing the risk of class-break attacks. As these fuses are one-time programmable, the OEM blows them during device manufacture and stores the hash* of its own public key there.
      • *Also, OTP memory can consume considerable silicon area, so the number of bits that are available is typically limited. An RSA public key is over 1024 bits long, which is typically too large to fit in the available OTP storage. However, as the public key is not confidential, it can be stored in off-SoC storage (or sent as a certificate within the image itself), provided that a cryptographic hash of the public key is stored on-SoC in the OTP. The hash is much smaller than the public key itself (256 bits for a SHA-256 hash), and can be used to authenticate the value of the public key at run-time.
      • Note: It also seems to me that sometimes, even though the hash of the public key is stored in the device OTP, the public key itself is not stored on the device chipset. As the public key is not confidential, it can also come as part of the image itself (as a certificate) and be verified at run-time using the hash in the OTP. I am still not sure about the mechanism followed by OEMs in general.
  3. Create the hash of the unsigned image using a hash algorithm, e.g. SHA-256.
  4. Encrypt the hash from step 3 using the RSA private key of step 2.
  5. Attach the encrypted hash (from step 4) to the end of the unsigned image – this is a signed image now.
  6. Create the SHA256 hash of the public key from step 2
  7. Store the public key hash on the OTP memory on the SoC by blowing the fuses.
  8. Store the public key of the pair from step #2 to the off-SoC storage (or attach it as certificate to the image itself) on the device itself.
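The signing steps above can be sketched in Python. Everything here is illustrative: the key is a tiny textbook-RSA pair (real OEM keys are 2048 bits or more and use a padded signature scheme such as RSASSA-PSS), and the image is just placeholder bytes.

```python
import hashlib

# Step 2: toy textbook-RSA key pair (illustrative only; real OEM keys are
# far larger and use proper signature padding).
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, kept secret by the OEM

# Step 1: the unsigned boot-loader image (placeholder bytes).
unsigned_image = b"primary-bootloader-image"

# Step 3: SHA-256 hash of the unsigned image.
image_hash = hashlib.sha256(unsigned_image).digest()

# Step 4: "encrypt" (sign) the hash with the private key. The toy modulus
# is far too small for a full 256-bit digest, so the digest is reduced
# first -- a shortcut for demonstration only.
h = int.from_bytes(image_hash, "big") % n
signature = pow(h, d, n)

# Step 5: attach the signature to the end of the image -> signed image.
signed_image = unsigned_image + signature.to_bytes(2, "big")

# Steps 6-8: hash the public key; the hash goes into on-SoC OTP fuses,
# while the public key itself lives in off-SoC storage (or a certificate).
public_key = f"{n}:{e}".encode()
otp_pubkey_hash = hashlib.sha256(public_key).hexdigest()
```

Note that only the 32-byte public-key hash needs the scarce OTP fuses; the larger public key and the signed image can live in ordinary off-SoC flash.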

Bootloader Image Verification Process

  1. During the boot process, read the primary boot-loader signed image.
  2. Now generate the SHA256 hash of the public key stored in the off-SoC storage (or using the certificate attached to the image itself) and compare it with the hash which is stored in the OTP area to validate the integrity of the stored OEM public key. 
  3. If the public key is validated successfully, it will be used to decrypt the encrypted hash attached to the image to verify the authenticity of the image sender (as this is the OEM public key, only a hash encrypted with the OEM private key will decrypt correctly, so we can claim the image was signed by the OEM itself).
  4. Detach the encrypted SHA256 hash which is attached at the end of the image, and decrypt it using the public key which we validated above. 
  5. After removing the encrypted hash, generate a new hash using the remaining boot-loader image from step 1.
  6. Compare this decrypted hash (from step 4) with the hash generated using the image (step 5).
  7. If the comparison passes, we can say that the bootloader image available on the device and the one originally shipped by the OEM are the same (as the hashes match). We can conclude that the Primary Boot Loader is in its original unmodified state, and we can move to the next step in the boot process. (Note that if the hashes do not match, it primarily means that either the bootloader image has been changed, or the private key used to sign the bootloader is not the OEM private key.)
The process mentioned above is used to verify the integrity of the first-level boot-loader, which subsequently, in a similar manner, verifies the integrity of subsequent levels like the application boot-loader and eventually the kernel. Once the kernel is verified, and if dm-verity is enabled in the kernel, it can go on to verify the integrity of the system image.
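The verification steps can be sketched the same way. The manufacture-time values are re-derived at the top so the snippet is self-contained; the parameters are the same toy, illustrative textbook-RSA ones (not a real key).

```python
import hashlib

# Manufacture-time values (toy textbook-RSA parameters; illustrative only).
n, e, d = 3233, 17, 2753                      # toy key pair (n = 61 * 53)
unsigned_image = b"primary-bootloader-image"
h = int.from_bytes(hashlib.sha256(unsigned_image).digest(), "big") % n
signed_image = unsigned_image + pow(h, d, n).to_bytes(2, "big")
otp_pubkey_hash = hashlib.sha256(f"{n}:{e}".encode()).hexdigest()  # in fuses

# Steps 1-2: read the signed image; validate the stored OEM public key
# against the hash burned into the OTP fuses.
public_key = f"{n}:{e}".encode()
assert hashlib.sha256(public_key).hexdigest() == otp_pubkey_hash

# Steps 3-4: detach the signature and decrypt it with the validated key.
image, sig = signed_image[:-2], signed_image[-2:]
recovered_hash = pow(int.from_bytes(sig, "big"), e, n)

# Steps 5-6: re-hash the remaining image and compare with the decrypted hash.
computed_hash = int.from_bytes(hashlib.sha256(image).digest(), "big") % n

# Step 7: matching hashes -> boot continues; otherwise warn the user.
boot_ok = (recovered_hash == computed_hash)
```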

Tuesday, May 3, 2016

Relation between FIPS, CC, CMVP, CAVP and CAVS   


When defence and government agencies in areas like healthcare, finance, and social security (which hold confidential but unclassified information about users) need to choose devices for official usage, they need to rely on one or more approved standards/criteria for keeping users' data secure. FIPS and CC are two such standards followed by these agencies at present.

Whereas FIPS (Federal Information Processing Standards) and CC (Common Criteria) are two security product certification programs run by government(s), CMVP (Cryptographic Module Validation Program), CAVP (Cryptographic Algorithm Validation Program) and CAVS (Cryptographic Algorithm Validation System) are programs that help meet some of the prerequisites for acquiring FIPS and CC certifications. Both FIPS and CC lay down a set of cryptographic requirements in the form of standards, and products seeking these certificates must fulfil those requirements to claim the certifications. Once a product is awarded these certificates, it becomes eligible to be bought by different government agencies for official usage.

A product can be certified for either CC or FIPS or both. Both FIPS and CC offer different levels of certification based on the requirements met by the product. While FIPS offers Level 1 to Level 4 certificates based on the level of security met by the product (security increases with level: Level 1 being the least secure and Level 4 the most), CC offers levels from EAL 1 to EAL 7 (EAL 1 is the least rigorously verified and EAL 7 the most) **.

The United States and Canada top the list in terms of FIPS usage right now. FIPS defines the requirements and standards for cryptographic modules, which include both hardware and software components. On the software side, FIPS defines various parameters like the way algorithms need to be designed, the complexities those algorithms need to handle, and the set of algorithms that should be supported by the security modules. Hardware requirements and standards may include features like tamper resistance, tamper-resistant coatings, and operating conditions.

CC, on the other hand, is an international standard recognized by roughly 19-20 countries right now. CC is a framework in which users can specify their security functional and assurance requirements through the use of Protection Profiles (various protection profiles exist, like MDFPP – Mobile Device Fundamentals Protection Profile, Firewall PP, Smartcard PP, etc.), private vendors/OEMs can then implement and/or make claims about the security attributes of their products, and authorized testing laboratories can evaluate the products to determine if they actually meet those claims. Unlike FIPS (140-2), CC primarily focuses on software security requirements (not hardware). Also, details of the cryptographic implementation (algorithms) within the device are outside the scope of CC; instead, it relies on specifications given by standards like FIPS 140 to specify the cryptographic module requirements and algorithms. Below is a snippet from MDFPP 2.0 which shows an MDFPP requirement specified in terms of the FIPS PUB 197 specification:

FCS_COP.1(1) Cryptographic operation
FCS_COP.1.1(1) The TSF shall perform [encryption/decryption] in accordance with a specified cryptographic algorithm AES-CBC (as defined in FIPS PUB 197, and NIST SP 800-38A) mode.

While FIPS and CC define the standards and requirements for certification, the CMVP is run jointly by the United States and Canadian governments to define the tests, test methodologies, and test structures that must be followed by any vendor who wants their devices (modules) certified for FIPS or CC. By design, all the tests under CMVP are run by third-party CMVP-accredited laboratories only.

Additionally, CAVP is a program which provides guidance for the testing and validation of FIPS-approved software algorithms. The CAVP provides assurance that cryptographic algorithm implementations adhere to the detailed algorithm specifications. A suite of validation tests – a test tool called CAVS – is designed for each cryptographic algorithm to test its specification and functionality. The validation of the cryptographic algorithm implementations within a cryptographic module is a prerequisite to the validation of the module itself, so in easy and simple words: CAVP is a prerequisite for CMVP, and CMVP is a prerequisite for FIPS and CC certification.

Okay now, once the CAVP and CMVP certificate numbers (e.g. Cert #470) are available to vendors, they can mention them in their supporting documents and apply for the FIPS and CC certificates from the government (NIST). And once the product/module is awarded a FIPS or CC certification, it is listed on the NIST website, which can be consulted by different agencies when choosing a product for official usage.

** While FIPS levels indicate increasing security, CC levels just specify the rigor of the verification done by the CC testing laboratories.