CCSP: 11 Answers to Domain 2 Questions

Updated: Sep 15, 2020

In Domain 2, the CBK focuses on the cloud customer's data hosted in the cloud, and discusses methods for securing that data, including specific tools and techniques.



Question #1 In which of these options does the encryption engine reside within the application accessing the database?

  • Transparent encryption

  • Symmetric-key encryption

  • Application-level encryption

  • Homomorphic encryption

Database encryption refers to the use of encryption techniques to transform a plaintext database into a (partially) encrypted database, making it unreadable to anyone except those who possess the encryption key(s). Database security encompasses three main properties:

  • Confidentiality - preventing unauthorised disclosure

  • Integrity - guarantees that data cannot be corrupted in an invisible way

  • Availability - ensures timely and reliable access to data

The prevailing method of preserving data confidentiality is to enforce access control policies defined on the database management system (DBMS). An access control policy can take different forms depending on the underlying data model (e.g. relational, XML) and the way authorisations are administered, following either a Discretionary Access Control (DAC), Role-Based Access Control (RBAC), or Mandatory Access Control (MAC) model.


Whatever the access control model, the authorisations enforced by the database server can be bypassed in a number of ways. For example,

  • An intruder can infiltrate the information system and try to mine the database footprint on disk.

  • Another source of threat comes from the fact that many databases are outsourced to Database Service Providers (DSPs). Data owners therefore have no choice but to trust the word of the DSP, who will argue that its systems are fully secured and its employees trustworthy.

  • A database administrator has enough privileges to tamper with the access control definitions and to spy on the DBMS's behaviour.


In a defence-in-depth approach, the use of cryptographic techniques to complement and reinforce the access control model has received much attention from the database community. The purpose of database encryption is to ensure database opacity by keeping the information hidden from any unauthorised person. Even if attackers get through the firewall and bypass access control policies, they still need the encryption keys to decrypt the data.


Encryption can provide strong security for data at rest, but developing a database encryption strategy must take many factors into consideration. For example,

  • where should you perform the encryption?

  • how much data should be encrypted to provide adequate security?

  • what should the encryption algorithm and mode of encryption be?

  • who should have access to the encryption keys?

  • how should you minimise the impact of database encryption on performance?

In this section, I will answer the question: "where should you perform the encryption?"


Encryption Level Strategies


Storage-level encryption

It refers to encrypting data in the storage subsystem, thus protecting data at rest.

  • It is well-suited for encrypting files or entire directories in an operating system context.

From a database perspective

  • Storage-level encryption has the advantage of being transparent, thus avoiding any changes to existing applications.

  • On the other hand, since the storage subsystem has no knowledge of database objects and structure, the encryption cannot be related to user privileges (e.g., using distinct encryption keys for distinct users) or to data sensitivity.

Selective encryption is therefore limited to file granularity. Moreover, selectively encrypting files is risky, since one must ensure that no replica of sensitive data remains unencrypted (e.g., in log files, temporary files, etc.).


Database-level encryption

It allows securing data as it is inserted into, or retrieved from, the database.

  • The encryption strategy can be part of the database design and can be related to data sensitivity and/or user privileges.

  • Selective encryption is possible and can be done at various granularities, such as tables, columns, rows.

Depending on the level of integration between the encryption algorithm and the DBMS, the encryption process may require some changes to applications. It may also degrade DBMS performance, since indexes cannot be built over encrypted data.


For both strategies (storage-level and database-level), data is decrypted on the database server at runtime. This means the encryption keys must be transmitted to, or kept with, the encrypted data on the server side, providing limited protection against the server administrator or any intruder usurping the administrator's identity (privilege escalation).


Application-level encryption

It moves the encryption/decryption process to the applications that generate the data.

  • Encryption is performed within the application that introduces the data into the system.

  • The data is sent encrypted, is naturally stored and retrieved encrypted, and is finally decrypted within the application.

This approach has the benefit of separating encryption keys from the encrypted data stored in the database, since the keys never have to leave the application side.

However,

  • Applications need to be modified to adopt this solution.

  • Depending on the encryption granularity, the application may have to retrieve a larger set of data than the one granted to the actual user, thus opening a security breach.

  • It introduces performance overhead: indexing encrypted data is not possible, and advanced database functionality on the encrypted data, such as stored procedures and triggers, is unavailable.

In terms of granularity and key management, application-level encryption offers the highest flexibility since the encryption granularity and the encryption keys can be chosen depending on application logic.
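To make the placement concrete, here is a minimal sketch of application-level encryption in Python, assuming the `cryptography` package is installed; the table, column, and sample value are hypothetical. The key never leaves the application side, and the database only ever sees ciphertext.

```python
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, load from a KMS/HSM --
engine = Fernet(key)             # never store the key next to the data

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, ssn BLOB)")

# Encrypt within the application, before the data enters the database
db.execute("INSERT INTO patients (ssn) VALUES (?)",
           (engine.encrypt(b"078-05-1120"),))

# The database stores and returns only ciphertext; decrypt after retrieval
row = db.execute("SELECT ssn FROM patients WHERE id = 1").fetchone()
print(engine.decrypt(row[0]))    # b'078-05-1120'
```

Note that the database cannot index or run stored procedures over the `ssn` column, which is exactly the trade-off described above.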


Answer

  • Application-level encryption

With application-level encryption, the encryption engine resides within the application that introduces the data into the system, as described above.


Question #2 You are the security team leader for an organization that has an infrastructure as a service (IaaS) production environment hosted by a cloud provider. You want to implement an event monitoring (security information and event management [SIEM]/security information management [SIM]/security event management [SEM]) solution in your production environment in order to acquire better data for security defences and decisions. Which of the following is probably your most significant concern about implementing this solution in the cloud?

  • The solution should give you better analysis capability by automating a great deal of the associated tasks.

  • Dashboards produced by the tool are a flawless management benefit.

  • You will have to coordinate with the cloud provider to ensure that the tool is acceptable and functioning properly.

  • Senior management will be required to approve the acquisition and implementation of the tool.

Contracted cloud computing is unlike other operational modes and also unlike other managed IT service arrangements. In the case of cloud computing, the data owner ostensibly owns the information being stored and processed but does not control how it is stored and processed or who specifically handles the information (in terms of administration).

The data owner/cloud customer does not actually have physical access to the places and devices where the information is. The customer has the responsibility and liability for protecting the information according to legal standards and regulation but often cannot mandate the actual protections and security measures in order to accomplish this.


Nonetheless, an area where providers and customers may find common ground in sharing responsibilities is security monitoring and testing.

  • The provider may allow the customer access to data streams or administrative capabilities on devices so the customer can perform its own monitoring and testing activities, in conjunction with, or more likely in addition to, the provider's own efforts.


Security Information and Event Management

We use monitoring tools to

  • know how well the systems and security controls in our IT environment are functioning,

  • detect anomalous activity, and

  • enforce policy.

A large part of the monitoring effort comes in the form of logs: recording activity as it happens, sometimes from specialised devices that only conduct monitoring, and other times from the operational systems themselves (with their integrated logging functions).


To better collect, manage, analyse, and display log data, a set of tools built specifically for that purpose has become popular. These tools are collectively known as

  • Security Information Management (SIM)

  • Security Event Management (SEM); or

  • Security Information and Event Management (SIEM)


Differentiating between these terms has little practical value; we will refer to them inclusively as "SIEM". The goals of SIEM implementation include the following:

  • Centralised collection of log data

  • Enhanced analysis capabilities

  • Dashboarding

  • Automated response

Because the tool will require at least some installation and reporting capability within the cloud environment, it is essential to coordinate with the cloud provider to ensure that the solution you choose will function properly and is allowed by the provider.
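As a small illustration of the "centralised collection of log data" goal, the sketch below ships application events to a syslog-speaking collector, a common SIEM ingestion path. It uses only the Python standard library; the collector hostname and port are placeholders.

```python
import logging
import logging.handlers

# Forward events to the central collector (the SIEM's ingestion endpoint);
# "siem.example.internal" and port 514 are assumptions for illustration.
handler = logging.handlers.SysLogHandler(address=("siem.example.internal", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))

log = logging.getLogger("webapp.auth")
log.setLevel(logging.INFO)
log.addHandler(handler)

# Events from every host funnel into one place for analysis and dashboarding
log.warning("failed login for user=%s from ip=%s", "alice", "203.0.113.7")
```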


Answer

  • You will have to coordinate with the cloud provider to ensure that the tool is acceptable and functioning properly.

The first two options are benefits rather than concerns, and senior-management approval applies to any acquisition; coordinating with the provider is the concern specific to the cloud.


Which of the following is not a step in the crypto-shredding process?

  • Encrypt data with a particular encryption engine.

  • Encrypt first resulting keys with another encryption engine.

  • Save backup of second resulting keys.

  • Destroy original second resulting keys.

As a security architect, you will be expected to use security concepts to address business needs; the CISSP and CCSP exams are no exception. During the exam, you can expect to be presented with scenarios related to pressing business needs.


We have written extensively about crypto-shredding and how it helps alleviate the challenge of complying with the EU GDPR's "right to be forgotten" principle.


Answer

  • Save backup of second resulting keys.

In crypto-shredding, the purpose is to make the data unrecoverable; saving a backup of the keys would attenuate that outcome because the keys would still exist for the purpose of recovering data. All other steps outline the crypto-shredding process.
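The three legitimate steps can be sketched in a few lines of Python: a minimal illustration, assuming the `cryptography` package, with Fernet standing in for both encryption engines.

```python
from cryptography.fernet import Fernet

# 1. Encrypt the data with a particular encryption engine
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"personal data subject to erasure")

# 2. Encrypt the first resulting key with another encryption engine
outer_key = Fernet.generate_key()
wrapped_data_key = Fernet(outer_key).encrypt(data_key)

# 3. Destroy the original keys and keep NO backups of them. (In a real
# system this means erasing them from the key store, not just `del`.)
del data_key, outer_key

# Only ciphertext and a wrapped key remain; neither can be decrypted.
```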


Question #3 Which of the following sanitisation methods is feasible for use in the cloud?

  • Crypto-shredding

  • Degaussing

  • Physical destruction

  • Overwriting

Cloud customers are allowed to encrypt their own data and manage their own keys; crypto-shredding is therefore possible. Degaussing is not likely in the cloud because it requires physical access to the storage devices and because most cloud providers are using solid-state drives (SSDs) for storage, which are not magnetic. Physical destruction is not feasible because the cloud customer doesn’t own the hardware and therefore won’t be allowed to destroy it. Overwriting probably won’t work because finding all data in all aspects of the cloud is difficult and the data is constantly being backed up and securely stored, so a thorough process would be very tricky.


Answer

  • Crypto-shredding


Question #4 Which of the following is not a method for enhancing data portability?

  • Crypto-shredding

  • Using standard data formats

  • Avoiding proprietary services

  • Favourable contract terms

Cloud portability is the ability to move applications and associated data between one cloud provider and another, or between legacy and cloud environments.


To avoid vendor lock-in, the organisation has to think in terms of portability when considering migration. We use the term "portability" to describe the general level of ease or difficulty when transferring data out of a provider's data center.


There are several things an organisation can do to enhance the portability of its data:

  • Ensure favourable contract terms for portability

  • Avoid proprietary formats

  • Ensure there are no physical limitations to moving - make sure the bandwidth leaving the old provider is sufficient for the purpose of moving your organisation's entire data set and that the new provider can handle that size of importation.

  • Check for regulatory constraints


Answer

  • Crypto-shredding

Crypto-shredding is a sanitisation technique that destroys data; it does nothing to make data portable. The other three options all enhance portability.


Question #5 When implementing a digital rights management (DRM) solution in a cloud environment, which of the following does not pose an additional challenge for the cloud customer?

  • Users might be required to install a DRM agent on their local devices.

  • DRM solutions might have difficulty interfacing with multiple different operating systems and services.

  • DRM solutions might have difficulty interacting with virtualised instances.

  • Ownership of intellectual property might be difficult to ascertain.


Technology solutions that protect intellectual property are referred to as digital rights management (DRM) tools. Employing DRM in the cloud poses some challenges. These include the following:


Replication restrictions

  • Because DRM often involves preventing unauthorised duplication, and the cloud necessitates creating, closing, and replicating virtualised host instances (including user-specific content stored locally on the virtual host), DRM might interfere with automatic resource allocation processes.

Jurisdictional conflicts

  • The cloud extends across boundaries and borders, often in a manner unknown or uncontrolled by the data owner, which can pose problems when intellectual property rights are restricted by locale.

Agent/enterprise conflicts

  • DRM solutions that require local installation of software agents for enforcement purposes might not always function properly in the cloud environment, with virtualised engines, or with the various platforms used in a bring your own device (BYOD) enterprise.

Mapping identity and access management (IAM) and DRM

  • Because of the extra layer of access control, the DRM IAM processes might conflict or not work properly with the enterprise/cloud IAM.

API conflicts

  • Because the DRM tool is often incorporated into the content, usage of the material might not offer the same level of performance across different applications, such as content readers or media players.

DRM should provide the following functions, regardless of the type of content or format:


Persistent protection

  • The DRM should follow the content it protects, regardless of where that content is located, whether it is a duplicate copy or the original file, or how it is being utilised.

Dynamic policy control

  • The DRM tool should allow content creators and data owners to modify ACLs and permissions for the protected data under their control.

Automatic expiration

  • The DRM protections should cease when the legal protections cease.

Continuous auditing

  • The tool should allow for comprehensive monitoring of the content's use and history.

Replication restrictions

  • Much of the purpose of DRM is to restrict illegal or unauthorised duplication of protected content.

Remote rights revocation

  • The owner of the rights to specific intellectual property should have the ability to revoke those rights at any time.
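To make a couple of these functions concrete, here is a hypothetical sketch of dynamic policy control and remote rights revocation: the content stays encrypted wherever it goes, and access is decided by a policy table the rights holder can edit at any time. This is an illustration only, not any vendor's actual DRM implementation; it assumes the `cryptography` package.

```python
from cryptography.fernet import Fernet

content_key = Fernet.generate_key()
protected = Fernet(content_key).encrypt(b"licensed e-book contents")

# Dynamic policy control: the rights holder edits this table at any time
policy = {"alice": {"read"}, "bob": {"read"}}

def open_content(user: str) -> bytes:
    """Decrypt only if the policy currently grants the user 'read'."""
    if "read" not in policy.get(user, set()):
        raise PermissionError(f"{user} has no rights to this content")
    return Fernet(content_key).decrypt(protected)

print(open_content("alice"))           # succeeds while the grant exists

policy.pop("bob")                      # remote rights revocation
try:
    open_content("bob")
except PermissionError as err:
    print(err)                         # bob has no rights to this content
```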

Answer

  • Ownership of intellectual property might be difficult to ascertain.

Ascertaining ownership of intellectual property is a general legal matter, not a challenge the cloud adds; the other three options are among the cloud-specific DRM challenges described above.


Question #6 When implementing cryptography in a cloud environment, where is the worst place to store the keys?

  • With the cloud provider

  • Off the cloud, with the data owner

  • With a third-party provider, in key escrow

  • Anywhere but with the cloud provider

Key Management

How and where encryption keys are stored can affect the overall risk of data in severe ways. Some things to remember and consider regarding key management for cloud computing:

  • Level of protection: Encryption keys must be secured at the same level of control, or higher, as the data they protect.

  • Key recovery: Make it possible to recover a key, say for a user who was just fired. Usually, this entails a procedure that involves multiple people, each with access to only a portion of the key (see the sketch after this list).

  • Key distribution: Keys should never be passed in the clear; a secure connection is required to initiate key creation and distribution.

  • Key revocation: You would need a process to revoke keys, especially after a user leaves an organisation.

  • Key escrow: Having copies of keys held by a trusted third party in a secure environment is highly desirable; this can aid in many of the other key management efforts listed in this section.

  • Outsourcing key management: Keys should never be stored with the data they are protecting, and we should not make physical access to keys readily available to anyone who does not have authorisation and need-to-know for that data. In cloud computing, it is preferable to have the keys stored somewhere other than the cloud provider's data center. An option is to use a cloud access security broker (CASB).
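As an illustration of multi-person key recovery, the sketch below splits a key into shares by XOR, so that no single custodian can recover it alone and all shares together rebuild it. This is a minimal example under stated assumptions; production systems typically use a scheme such as Shamir's Secret Sharing, which also tolerates lost shares.

```python
import secrets

def split_key(key: bytes, custodians: int) -> list[bytes]:
    """Split a key into one share per custodian; ALL shares are needed."""
    shares = [secrets.token_bytes(len(key)) for _ in range(custodians - 1)]
    final = key
    for share in shares:                       # final = key XOR all shares
        final = bytes(a ^ b for a, b in zip(final, share))
    return shares + [final]

def recover_key(shares: list[bytes]) -> bytes:
    """XOR every share back together to reconstruct the original key."""
    key = shares[0]
    for share in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key

master = secrets.token_bytes(32)
parts = split_key(master, 3)                   # e.g. three security officers
assert recover_key(parts) == master            # all three together succeed
```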

Answer

  • With the cloud provider

Keys should never be stored alongside the data they protect, so the cloud provider's data center is the worst place to keep them.


Question #7 Which of the following is not a security concern related to archiving data for long-term storage?

  • Long-term storage of the related cryptographic keys

  • Format of the data

  • Media the data resides on

  • Underground depth of the storage facility

While reviewing questions in Q&A CCSP Domain 1, we looked at the 6 phases of the Data Lifecycle. In this section, let's take a deeper look into the Archive phase.




Archive Phase of the Data Life Cycle

This is the phase for long-term storage, and we necessarily have to consider this longer timeframe when planning security controls for the data.


Cryptography will, as with most data-related controls, be an essential consideration. Key management is of utmost importance, because mismanaged keys can lead to additional exposure or to total loss of the data. If the keys are improperly stored (especially if they are stored alongside the data), there is an increased risk of loss; if the keys are stored away from the data but not managed properly and lost, there will be no efficient means to recover the data.


The physical security of the data in long-term storage is also important. We need to weigh risks and benefits for the following aspects of physical security:


Location

  • Where is the data being stored?

  • What environmental factors will pose risks in that location?

  • What jurisdictional aspects might bear consideration?

  • How distant is the archive location?

  • Will it be feasible to access the data during contingency operations?

  • Is it far enough to be safe from events that impact the production environment, but close enough to reach that data during those events?

Format

  • Is the data being stored on some physical medium such as tape backup or magnetic storage?

  • Is the media highly portable, and in need of additional controls for theft?

  • Will that medium be affected by environmental factors?

  • How long do we expect to retain this data?

  • Will it be in a format still accessible by production hardware when we need it?

Staff

  • Are the personnel at the storage location employed by your organisation?

  • If not, does the contractor implement a personnel control suite sufficient for our purposes?

Procedure

  • How is data recovered when needed?

  • How is it ported to the archive on a regular basis?

  • How often do we perform full backups (and how frequently do we take incremental or differential backups)?


Answer

  • Underground depth of the storage facility

Key storage, data format, and storage media are all long-term archiving concerns discussed above; how deep underground the facility sits is not.


Question #8 Data dispersion is a cloud data security technique that is most similar to which legacy implementation?

  • Business continuity and disaster recovery (BC/DR)

  • Redundant Array of Inexpensive Disks (RAID)

  • Software-defined networking (SDN)

  • Content delivery network (CDN)

Storage Operations

In addition to hosts used to run virtualised instances for customer operations, the cloud data center will also include devices used for near-term and long-term storage of both data and instance images.


Clustered Storage and Coupling


Most often, storage devices will be clustered in groups, providing increased performance, flexibility, and reliability. Clustered storage can take one of two forms:

  • Tightly coupled - all storage devices are directly connected to a shared physical backplane. Each component of the cluster is aware of the others and subscribes to the same policies. A tightly coupled architecture enhances performance as it scales: the performance of each element is added to the overall performance of the cluster.

  • Loosely coupled - each node of the cluster is independent of the others, which allows greater flexibility; new nodes can be added for any purpose or use as needed. The nodes are logically connected but do not share the same proximate physical framework. Performance does not necessarily scale, because the nodes do not build on one another.

Volume versus Object




Another way of viewing storage options is how the data is stored. Typically, one of two modes is used:

  • Volume (block) storage - disk space is apportioned to the customer and allocated to each of the guest instances the customer uses. The virtualised OS of the guest then imposes a filesystem on the volume as necessary.

  • Object storage - data is stored as discrete objects, each with a unique identifier and associated metadata, and customers are given access to the objects assigned to them through an API rather than a mounted filesystem (a brief sketch follows).
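A brief sketch of the object mode, assuming the boto3 package and an existing S3-compatible bucket (the bucket and key names are placeholders): note that there is no volume to mount; objects are written and read by key through an API.

```python
import boto3

s3 = boto3.client("s3")

# Store an object: the content plus metadata, under a unique key
s3.put_object(Bucket="example-customer-bucket",
              Key="reports/2020/q3.csv",
              Body=b"region,revenue\nemea,1200\n",
              Metadata={"classification": "internal"})

# Retrieve it by key -- an API call, not a filesystem operation
obj = s3.get_object(Bucket="example-customer-bucket", Key="reports/2020/q3.csv")
print(obj["Body"].read())
```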

Resiliency




There are two general ways for creating data protection in a cloud storage cluster:

  • RAID (redundant array of inexpensive disks) - in most configurations, all data is stored across the various disks in a method known as striping. This allows data to be recovered in a more efficient manner because if one of the drives fails, the missing data can be filled in by the other drives. In some RAID schemes, parity bits are added to the raw data to aid in recovery after a drive failure.

  • Data dispersion (bit splitting) - a similar technique in which data is sliced into "chunks" that are encrypted, along with parity bits, and then written across the various drives in the cloud cluster. Data dispersion can be seen as the cloud equivalent of a RAID array (see the sketch below).
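The sketch below illustrates the shared idea behind striping and chunking: slice the data, add an XOR parity chunk, and rebuild any one missing chunk from the survivors. It is a simplified model; real data dispersion also encrypts each chunk before writing it to a drive.

```python
from functools import reduce

def disperse(data: bytes, chunks: int) -> list[bytes]:
    """Slice data into equal-size chunks and append an XOR parity chunk."""
    size = -(-len(data) // chunks)                       # ceiling division
    parts = [data[i * size:(i + 1) * size].ljust(size, b"\x00")
             for i in range(chunks)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*parts))
    return parts + [parity]

def rebuild(parts: list[bytes], lost: int) -> bytes:
    """Recover the chunk at index `lost` by XOR-ing the surviving chunks."""
    survivors = [c for i, c in enumerate(parts) if i != lost]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

parts = disperse(b"customer records to protect", 4)      # 4 chunks + parity
assert rebuild(parts, 2) == parts[2]                     # drive 2 "failed"
```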


SAN versus NAS



A decision to be made in storage architecture is whether to use network attached storage (NAS) or a storage area network (SAN).

  • NAS - the user sees a NAS as a file server and can store and share files on it. NAS commonly uses TCP/IP.

  • SAN - Typically, the storage apportioned to the user is mounted to that user's machine, like an empty drive. The user can then format and implement a filesystem in that space according to their own preference. SANs usually use iSCSI or Fibre Channel protocols.

Answer

  • Redundant Array of Inexpensive Disks (RAID)


Question #9 Data dispersion uses ___________, where the traditional implementation is called “striping.”

  • Chunking

  • Vaulting

  • Lumping

  • Grouping

Answer

  • Chunking

Data dispersion slices data into encrypted "chunks" with parity bits, where a traditional RAID implementation writes "stripes".
