Amazon Web Services Cloud Information Security Risks and Compliance

Table of Contents

  • Introduction
  • Problem Statement
  • Relevance and Significance
  • The AWS Cloud Shared Responsibility Model
  • Customer Responsibilities
  • AWS Responsibilities
  • AWS Risk, Compliance & Controls
  • Account and Users Management
  • AWS EC2 Operating System Security
  • Encrypting and Securing Data
  • Network Security Options
  • Security Monitoring
  • Incident Response
  • Conclusion
  • References

Introduction

Cloud computing is not a new technology. It has existed since the 1960s in the form of timesharing computer systems. John McCarthy discussed the idea of computing as a utility, delivered much the way electricity is delivered to a household, as described by Krutz & Vines (2010). Cloud computing is a model for enabling on-demand network access to a shared pool of configurable, reliable virtual computers running on host machines. These systems are created with scripting languages that allow infrastructure to be provisioned almost instantly, so an application server and database system can be rolled out quickly, as Krutz & Vines (2010) point out.

In simple terms, the cloud model comprises six essential characteristics that define the computing services delivered to customers. The first is on-demand self-service, as Krutz & Vines (2010) point out. The second is ubiquitous network access. The third is access to a pooled set of computing resources, and the fourth is location independence. The fifth is rapid elasticity, the ability to scale out and in by starting and stopping servers as demand changes. The last characteristic is measured service.

Three cloud service models exist. The first, Software as a Service (SaaS), provides applications over a network or the Internet. The second, Platform as a Service (PaaS), lets customers build applications in the cloud. The last, Infrastructure as a Service (IaaS), rents network, processing, storage, and other resources from the cloud service provider, as described by Krutz & Vines (2010).

Four deployment models are available, implemented either internally or externally, as described by Khan, Khan & Farooqui (2016). In the first model, the private cloud, the infrastructure is owned or leased by a single enterprise or organization. The second is the community cloud, which shares infrastructure within a specific community. The third is the public cloud, typically built on very large-scale infrastructure. The last is the hybrid cloud, in which two or more of the models above are combined to provide computing resources, as Khan, Khan & Farooqui (2016) point out.

On March 13, 2006, Amazon Web Services (AWS) opened for business. The first offering was the Simple Storage Service (S3), which allowed files to be stored in the cloud, as described by Golden (2013). The launch of AWS marked the beginning of the cloud service boom. Shortly after S3, also in 2006, Elastic Compute Cloud (EC2) launched. EC2 is a computing service that offers on-demand capacity, immediately available and with no commitment to a length of use. AWS provides a scalable, highly available computing platform with tools that enable a customer to run a wide range of applications, as described by AWS (2017a). Protecting the confidentiality, integrity, and availability of customer systems and of the data stored in the AWS infrastructure is of the greatest importance to AWS, as is maintaining customer trust and confidence. AWS takes responsibility for providing services on an extremely secure and controlled platform, offering an extensive assortment of security features from which customers can benefit, as described by AWS (2017b). Economies of scale have reached the computer services industry, increasing revenue for cloud providers like AWS while lowering costs for cloud customers, as Rubóczki & Rajnai (2015) discuss. The on-demand service model allows providers to achieve better resource utilization through multiplexing, and enables customers to avoid the cost of over-allocated resources by taking advantage of dynamic scaling, as Rubóczki & Rajnai (2015) point out.

Problem Statement

Countless survey results show that the number one concern when migrating to the cloud is security, as Golden (2013) discusses. The average IT professional does not trust cloud provider security; as Golden (2013) points out, they believe that they, the IT professionals, are the only people who can implement a truly secure computing environment. Significant security risks do exist in the cloud environment, as described by Marinescu (2013): traditional security threats, threats to system availability, and the threat of third-party data control. AWS cloud computing is therefore perceived as high risk and insecure when migrating systems and applications to the cloud. This paper's purpose is to outline the information security, risk, and compliance considerations of running applications and storing data in the AWS cloud. Research into the AWS services model will detail what AWS and what the customer each own and implement to lower security risk when using AWS cloud services, and will identify and recommend the policies customers need to implement to run in each of the cloud service models defined.

Relevance and Significance

The purpose of the paper is to discuss information security risks within AWS cloud services. A security responsibility model will be defined to clarify the customer's and the AWS service provider's roles. The AWS global infrastructure security and services will be researched to identify which security risks and activities require action by the customer and which are under the protection of AWS. By covering the responsibilities of AWS and the customer from a security perspective, the paper aims to resolve the confusion among IT professionals and show that AWS is not an undue information security risk when its services are implemented with best practices in mind.

The AWS Cloud Shared Responsibility Model

Migrating data processing and information technology systems to cloud infrastructure changes how one manages security compared with an organization's own data center; as AWS (2017a) points out, security becomes a mutual concern. To understand cloud security, one should first know what a trust boundary is. A trust boundary, as described by Golden (2013), defines a clear divide between AWS and its customer. Moving to the cloud reduces operational liability, and many customers have experienced an improved security posture by migrating applications to the AWS cloud environment.

Customer Responsibilities

The security configuration handled by the customer depends on the services the customer selects. Security features used by all clients include the configuration of user accounts and credentials, as described by AWS (2017a). The security elements a customer is responsible for include the operating system, applications, security groups, operating system firewalls, network configuration, account management, and certifying that applications are secure, as described by AWS (2013).

The customer, as described by AWS (2013), is responsible for whatever one puts on or connects to AWS. The security steps needed depend on the services used and the complexity of the migrated system. In the case of EC2, the customer is required to handle patching of the guest operating system, the applications installed on the instance, firewalls, and security groups, as AWS (2013) points out. To secure S3, the customer is responsible for controlling external access to S3 and encrypting the data at rest. For the Relational Database Service (RDS), a managed service, AWS handles tasks like backups and patching, but the customer must configure network access and create a secure master password.
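
As a concrete illustration of the firewall portion of that responsibility, the following is a minimal boto3 sketch that creates a security group admitting only HTTPS from a single corporate address range; the VPC ID, group name, and CIDR block are placeholders, not values from this paper.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group that allows HTTPS only from a corporate range.
sg = ec2.create_security_group(
    GroupName="web-restricted",
    Description="Allow HTTPS from the corporate network only",
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
)

# Everything not explicitly allowed inbound is denied by default.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "corporate egress range"}],
    }],
)
```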

AWS provides different infrastructure and platform services, as AWS (2016) points out. To make the shared responsibility model clear, AWS (2016) breaks the services into three types: infrastructure, container, and abstract services.

Infrastructure services include computing services and those closely related to them, such as EC2 instances, as described by AWS (2016). EC2 instances are similar to virtual machines in a data center. Companion services such as Virtual Private Cloud (VPC), Elastic Block Store (EBS), and Auto Scaling are comparable to on-premises virtual machine technologies such as VMware. Just as with on-premises virtual machines, the customer controls the operating system and identity management access for the upper layers of the virtualized stack.

Container services run on AWS-managed EC2 instances or other AWS infrastructure. Examples of container services are Amazon RDS, Elastic MapReduce (EMR), and Elastic Beanstalk. In these cases, customers do not manage the platform layer or operating system, as AWS (2016) points out. AWS provides the container that runs the service, and the client is responsible for managing network firewalls, platform access, and identity management in AWS Identity and Access Management (IAM).

Abstract services are high-level services provided by AWS, such as S3, Glacier, DynamoDB, Simple Queue Service (SQS), and Simple Email Service (SES), as described by AWS (2016). Each of these platform services is managed by AWS, and customers interact with them only through exposed endpoints, APIs, or web browser access. These abstracted services are shared with other customers on what is known as a multi-tenant platform, which isolates customer data securely and is tightly integrated with IAM.

AWS Responsibilities

The AWS team is accountable for protecting the global infrastructure on which the AWS services run, as described by AWS (2017a). Examples of services that run on top of the infrastructure are EC2, EBS, and VPCs. AWS manages and is accountable for the computer system hardware, software, networks, data centers, and the facilities where the installed equipment resides. The infrastructure is AWS's primary responsibility, and AWS maintains reports from third-party auditors that confirm compliance with a plethora of computing regulations and standards, as AWS (2017a) points out. Managed services within the global infrastructure are also the responsibility of AWS. These managed services, as described by AWS (2017a), provide scalable and flexible capabilities; examples include Amazon RDS, a managed service for relational database management systems such as MySQL, PostgreSQL, or Oracle, and Amazon EMR, a scalable managed Hadoop framework.

AWS Risk, Compliance & Controls

The AWS information security team maintains a comprehensive risk, compliance, and control program that gives a customer the ability to incorporate AWS's controls easily into an organization's existing control and governance framework, as described by AWS (2017c). Robust controls are maintained and executed by AWS team members.

Risk

There are many ways to classify security risks in cloud computing. One way, described by Marinescu (2013), classifies risk into three broad categories: traditional security risks, risks related to system availability, and risks related to third-party control of data. Traditional security risks include protecting the infrastructure, authentication, and authorization. Traditional attacks such as distributed denial-of-service, phishing, SQL injection, and cross-site scripting also affect the cloud, as Marinescu (2013) points out.

Compliance

AWS customers are required to maintain adequate governance over the entire IT control environment, regardless of how their information technology is deployed, as described by AWS (2017c). Compliance practice starts with understanding the compliance objectives and requirements. Examples of compliance objectives are establishing a controlled environment that meets the goals and demands of the organization, understanding the level of verification necessary based on the organization's security risk tolerance, and confirming that the organization operates effectively within its controlled environment, as AWS (2017c) points out. Different types of controls apply, and many verification techniques are available, when deploying to the AWS cloud, as described by AWS (2017c).

Compliance and governance can follow simple steps, as outlined by AWS (2017c). They include reviewing the information available from AWS, along with other sources, to understand as much of the information technology environment as possible, and then documenting the compliance requirements. Control objectives come next and need to be designed and implemented to meet those requirements. Once completed, any controls owned by outside parties need to be identified and incorporated into the organization's compliance and governance plan. The last step is to verify that the control objectives and all key controls are designed as intended and operating efficiently and effectively, as AWS (2017c) points out. Approaching compliance and governance with these steps assists the organization in gaining a clear picture of its control environment and helps in verifying that activities are executed correctly, as AWS (2017c) points out.

Controls

AWS maintains internal controls, and internal and external auditors validate their design and operation, as AWS (2017c) points out. Auditors monitor controls by observing staff performing checks and reviewing the results of those checks. Direct confirmation by the customer's external auditor is also commonly performed to verify controls.

AWS shares, and customers can request and evaluate, third-party attestations and certifications that verify the control design and operating effectiveness, as described by AWS (2017c). Although many controls are under the management of AWS, the control environment remains unified, so that controls are accounted for and verified as operating effectively. In many cases, the third-party attestations and certifications maintained by AWS provide a high level of validation of the control environment and relieve AWS customers of the requirement to perform certain validation efforts themselves for the portion of their IT environment running in AWS.

Account and Users Management

AWS Accounts

To begin using AWS, one needs to set up an account, identified by an email address. The account represents the business relationship between AWS and the customer, as described by AWS (2016). The AWS account is referred to as the root account and has access to all services. The root account, as AWS recommends, is not meant to be used on a daily or regular basis, as AWS (2016) points out. Many organizations set up separate accounts for different environments, for example, production, test, and development accounts. Multiple accounts can be combined into a single bill with AWS consolidated billing. AWS best practice is to set up IAM users for regular access to the AWS environment, as AWS (2016) points out.

IAM Users

IAM users are the recommended access method for AWS, as described by AWS (2016). An IAM user account can access the AWS environment through the Management Console web page, the Command Line Interface (CLI), or directly through the Application Programming Interface (API). AWS recommends creating an IAM user for each individual or resource that requires access to an AWS account. Once IAM users have been set up, permissions to resources can be granted via groups that are created for specific needs and assigned to the IAM users.
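
A minimal boto3 sketch of creating such a user follows; the user name and temporary password are hypothetical, the password must satisfy the account's password policy, and the secret access key is returned only once at creation.

```python
import boto3

iam = boto3.client("iam")

# Create an IAM user for an individual, rather than sharing root credentials.
iam.create_user(UserName="alice")

# Console access requires a login profile; API/CLI access requires access keys.
iam.create_login_profile(
    UserName="alice",
    Password="TempPassw0rd!ChangeMe",  # placeholder; forced to change at first login
    PasswordResetRequired=True,
)
keys = iam.create_access_key(UserName="alice")
print(keys["AccessKey"]["AccessKeyId"])  # the secret key is shown only this once
```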

IAM Groups and Roles

IAM groups manage access to AWS resources, and roles let one create a set of permissions for accessing resources or services on a short-term basis, as described by AWS (2016). Examples of short-term access needs are temporary cross-account access, an application running on EC2 instances needing to access AWS resources, and temporary identity federation.

Any number of IAM users can be part of a group within an AWS account, as AWS (2016) points out. Groups can be organized by function, organization, geography, project, or any other need identified by the IAM administrator. An IAM group is granted rights to AWS resources by assigning it one or more policies, as described by AWS (2016). The policies granted to a group are then inherited by the users in that group.
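
The group-and-policy pattern might look like the hedged boto3 sketch below; the group name is hypothetical, though AmazonEC2FullAccess is a real AWS managed policy.

```python
import boto3

iam = boto3.client("iam")

# A functional group whose members administer EC2 only.
iam.create_group(GroupName="ec2-operators")

# Attach an AWS managed policy; every member inherits it automatically.
iam.attach_group_policy(
    GroupName="ec2-operators",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2FullAccess",
)

# Membership, not the user object, carries the permissions.
iam.add_user_to_group(GroupName="ec2-operators", UserName="alice")
```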

IAM roles provide temporary security credentials, as described by AWS (2016). A role defines a set of permissions to access the resources that users and services require, but the permissions are not attached to a specific IAM user or group. Instead, a user or service assumes the role, as described by AWS (2016), and receives temporary credentials that it uses to access AWS. Each set of temporary credentials is configured with an expiration and is rotated automatically. The use of roles and temporary credentials gives administrators flexibility because they do not have to manage long-term credentials and users for every entity that requires access to a resource, as described by AWS (2016).
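
A minimal sketch of obtaining and using temporary credentials through AWS STS follows; the role ARN and session name are assumptions for illustration.

```python
import boto3

sts = boto3.client("sts")

# Exchange the caller's long-term identity for short-lived credentials.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnlyAuditor",  # hypothetical role
    RoleSessionName="audit-session",
    DurationSeconds=3600,  # credentials expire automatically after an hour
)
creds = resp["Credentials"]

# Use the temporary credentials for subsequent service calls.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```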

AWS EC2 Operating System Security

AWS's infrastructure as a service provides elastic computing capacity using server hosts in Amazon data centers running a hypervisor that enables virtualization. AWS EC2 instances are virtual machines that run on an AWS-customized version of the Xen hypervisor, as described by AWS (2017a). The Amazon EC2 architecture was created to enable simple web-scale computing and let one configure capacity with minimal friction. EC2 provides multiple layers of security, as described by AWS (2017a), starting with the host operating system or hypervisor, then the virtual machine instance operating systems, firewalls, and signed API calls. Each security layer builds on the capabilities of the one beneath it.

To access the operating system on an EC2 instance, one needs a set of credentials, as described by AWS (2016). Asymmetric keys generated at the time an EC2 instance is created are used to access the administrator account. These keys are called Amazon EC2 key pairs and are RSA keys. Once an EC2 instance starts, one can set up operating system security in line with the organization's security policies. The keys and EC2 instances are accessible only by the AWS customer and not by AWS, as pointed out by AWS (2013); the private half of the RSA key pair is downloaded and owned by the customer.
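
Creating and safeguarding a key pair can be sketched with boto3 as below; the key name and file path are placeholders.

```python
import os
import boto3

ec2 = boto3.client("ec2")

# Generate an RSA key pair; AWS keeps the public half, the customer keeps the private half.
pair = ec2.create_key_pair(KeyName="prod-web-key")

# The private key material is returned exactly once, so save it securely.
with open("prod-web-key.pem", "w") as f:
    f.write(pair["KeyMaterial"])
os.chmod("prod-web-key.pem", 0o400)  # restrict file permissions, as SSH clients require
```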

Windows

Once AWS creates a Windows EC2 instance and it is online, the customer needs to decrypt the administrator password to log in, as described by AWS (2013). If the Windows machine is created from an Amazon Machine Image (AMI), the ec2config service creates a random administrator password for the new instance and encrypts it using the public key, as described by AWS (2016).
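
Assuming the private key saved earlier, a hedged sketch of retrieving and decrypting that password with boto3 and the cryptography library might look like this; the instance ID is hypothetical, and the PasswordData field stays empty until the instance has finished initializing.

```python
import base64
import boto3
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

ec2 = boto3.client("ec2")

# Fetch the encrypted administrator password (base64-encoded ciphertext).
blob = ec2.get_password_data(InstanceId="i-0123456789abcdef0")["PasswordData"]

# Decrypt with the private half of the EC2 key pair (PKCS#1 v1.5 padding).
with open("prod-web-key.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)
password = key.decrypt(base64.b64decode(blob), padding.PKCS1v15())
print(password.decode())
```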

Linux

A Linux EC2 instance executes the cloud-init service when a new instance is created from an AMI, and the public key is appended to the SSH authorized_keys file on creation. Linux machines are set up with a default administrative user account, as described by AWS (2016); the user name differs by Linux distribution, but one common name is ec2-user. One connects to the Linux machine over SSH as that default user, in this case ec2-user, supplying the private key to gain access and configure the instance for the organization's use.
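
For illustration, a hedged Python sketch of that SSH login using the third-party paramiko library follows; the host address and key file are assumptions.

```python
import paramiko

# Connect as the distribution's default user (ec2-user here) with the key pair.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a sketch; pin host keys in production
client.connect(
    hostname="203.0.113.10",          # hypothetical public IP of the instance
    username="ec2-user",
    key_filename="prod-web-key.pem",
)

# Run a harmless command to confirm access before hardening the instance.
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```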

Encrypting and Securing Data

Securing Data at Rest

Data at rest, that is, data sitting on disk in AWS, can be encrypted, and enabling that feature is the responsibility of the client, not AWS, as AWS (2016) points out. S3 objects can be encrypted with a customer key using the client-side encryption library, the Amazon S3 Encryption Client, before the data is transferred to S3, as described by AWS (2017a). S3 also provides the ability to encrypt data at rest with a feature called S3 Server-Side Encryption (SSE). SSE is the easiest way to encrypt files on S3, since the service manages the encryption when it is enabled during the copy of a file into an S3 bucket.
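
A minimal boto3 sketch of both approaches, bucket-level default encryption and per-object SSE, appears below; the bucket and object names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Turn on default server-side encryption for the bucket, so every new object
# is encrypted at rest without per-request flags.
s3.put_bucket_encryption(
    Bucket="example-secure-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Alternatively, request SSE explicitly for a single object at upload time.
s3.put_object(
    Bucket="example-secure-bucket",
    Key="reports/2017-q1.csv",
    Body=b"sample,data\n",
    ServerSideEncryption="AES256",
)
```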

Decommission Data and Media Securely

Data that is no longer needed or has been deleted in AWS is handled differently than in one's own data center, since the storage is a shared resource, as AWS (2016) points out. When a file is deleted within AWS, the storage is not decommissioned immediately; the blocks are marked as unallocated, and AWS uses secure mechanisms and procedures to reallocate the blocks elsewhere. The hypervisor keeps track of the blocks a customer's instance has written to and zeroes out blocks before reuse, as described by AWS (2016). When storage media is ready to be decommissioned, AWS follows the techniques identified in the Department of Defense National Industrial Security Program Operating Manual (5220.22-M) to decommission the device and properly destroy the data on it. In addition to decommissioning the device, best practice recommends deleting the keys that protected the data, as AWS (2016) points out.

Securing Data in Transit

Communicating with AWS is most often done over the public Internet, so it is important to protect data in transit while hosting applications in a cloud environment. Network traffic between servers, and between clients and servers, needs to be protected, as AWS (2016) discusses. Common concerns include accidental information disclosure, compromised data integrity, and compromised peer identity or identity spoofing. Accidental information disclosure is mitigated by encrypting data in transit using IPSec ESP and SSL/TLS connections, as described by AWS (2016). Compromised data integrity is countered by authenticating data integrity using IPSec ESP/AH and SSL/TLS. Identity spoofing or compromised peer identity is mitigated by using X.509 certificates, IPSec with IKE with pre-shared keys, or SSL/TLS with server certificate authentication.
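
One common way to force TLS for data in transit to S3 is a bucket policy that denies any request arriving without aws:SecureTransport; the hedged sketch below applies such a policy to a hypothetical bucket.

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny any S3 request that arrives over plain HTTP, forcing TLS in transit.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-secure-bucket",
            "arn:aws:s3:::example-secure-bucket/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket="example-secure-bucket", Policy=json.dumps(policy))
```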

Network Security Options

VPC Networks

An Amazon VPC normally contains EC2 instances launched with random public IP addresses. The VPC allows EC2 instances to be isolated and launched in a private address space within a standard private IP range (e.g., 10.0.0.0/16). Private subnets mean one can define multiple subnets within a single VPC, so similar instances can be grouped by IP address range. Once the VPC subnets are set up, routing and security controls govern the traffic flowing to and from the EC2 instances and subnets.
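
A minimal boto3 sketch of carving a VPC into public and private subnets follows; the CIDR blocks are examples only.

```python
import boto3

ec2 = boto3.client("ec2")

# Carve out a private address space and split it into two subnets.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
public = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")
private = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.2.0/24")

# Only the public subnet would get a route to an internet gateway;
# instances in 10.0.2.0/24 remain unreachable from the Internet.
```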

A customer uses a VPC to shield EC2 instances that do not need outside access to the Internet, as described by Golden (2013). Shielding instances with a VPC is a great way to increase application security, and since a VPC is created by default, customers need to learn how VPCs are used to divide public-facing and private EC2 instances.

The VPC provides the ability to connect the customer's current IT infrastructure with the AWS cloud, as Marinescu (2013) points out. The customer's existing infrastructure is linked via a virtual private network (VPN) to a set of isolated AWS compute resources. The VPC permits existing network security capabilities, such as firewalls, intrusion detection systems, and other security services, to operate seamlessly within the AWS cloud.

VPC Flow Logs

VPC Flow Logs capture the IP traffic traveling to and from the network interfaces, or virtual network cards, in a VPC, as described by AWS (2017a). The flow logs are written to CloudWatch, an AWS service that provides monitoring of services such as EC2. Its web interface gives a customer visibility into service performance and utilization and helps identify service usage metrics for tuning AWS services, as AWS (2017a) points out. Alarms are set in CloudWatch to alert customers when thresholds of interest are crossed; one example is using CloudWatch alerts to notify network staff of certain traffic passing in and out of the network interfaces.
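
Enabling flow logs for a VPC can be sketched with boto3 as below; the VPC ID, log group name, and IAM role ARN are placeholders for illustration.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture all accepted and rejected traffic for a hypothetical VPC into a
# CloudWatch Logs group; the IAM role ARN grants log delivery permission.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```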

AWS WAF

The AWS Web Application Firewall (WAF), when used, is incorporated into CloudFront, as described by AWS (2017a). WAF assists customers in protecting client-server web applications from web exploits that affect availability, consume resources, or breach the security of the application. WAF also helps block known attackers: the WAF network may have seen a particular attack on another site and can block an IP address when it attempts to breach an application currently in production in the same way, as AWS (2017a) points out.

Security Monitoring

The shared responsibility model calls for customers to monitor and manage the services in their account. Security monitoring and alerts, set up correctly, assist in answering questions about the environment, as described by AWS (2016). Laws require some organizations to audit and track actions on a system. Before logging and alerting can be set up, the requirements for what is urgent or necessary to the security team need to be identified. Many questions come up when planning monitoring: which parameters should be monitored, how each service should be monitored, what thresholds will be set, what the escalation process is when a monitored policy alarms, which audit logs to keep, and how long the retention period for those logs should be. What should an organization monitor as a best practice? AWS (2016) identifies six recommendations: log actions taken by any account with administrative privileges, log all access to audit trails, log invalid access attempts, log the use of authentication mechanisms, log the initialization of audit logs, and log the creation or deletion of any system-level objects.

One should understand how log files are collected, transported, and stored, and the logs should be categorized using the classifications identified by the security team, as described by AWS (2016). Log files provide security intelligence and can be analyzed in real time or on scheduled runs. Many log files are sensitive, so they need to be protected with identity management, encryption, and correct time stamping backed by proper time synchronization, so the time of each event is known.

Alerting

The alerting tool that monitors logs in AWS is CloudWatch, an AWS service that monitors AWS resources, as described by Golden (2013). Setting up a custom metric to be monitored by CloudWatch is quite an easy process: one makes a PUT API call with a metric, and CloudWatch monitoring begins. CloudWatch has a two-week data retention policy, so to keep data longer than two weeks, one needs to extract the CloudWatch data via the API and store it in S3 or another data store. CloudWatch is enabled by default when an AWS account is created. To get started, one simply selects or defines metrics to track and then uses the generated metrics to create an alert or extract the data for further analysis.
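
A hedged sketch of publishing a custom metric and alarming on it follows; the namespace, metric name, threshold, and SNS topic ARN are all assumptions.

```python
import boto3

cw = boto3.client("cloudwatch")

# Publish a custom metric; the first PUT implicitly creates it.
cw.put_metric_data(
    Namespace="MyApp/Security",
    MetricData=[{"MetricName": "FailedLogins", "Value": 3, "Unit": "Count"}],
)

# Alarm when failed logins exceed a threshold within a five-minute window.
cw.put_metric_alarm(
    AlarmName="failed-login-spike",
    Namespace="MyApp/Security",
    MetricName="FailedLogins",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],  # hypothetical topic
)
```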

Audit Trail

Audit trails are set up by the application, the operating system, or, in the case of AWS, by the services themselves. Services such as S3 and EMR have built-in audit trail features that can be enabled. Change management should also be monitored for application systems; it covers moves, adds, changes, and deletes. New software needs to go through the change management process and then be migrated from test to quality assurance to production.
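
As one example of a built-in audit feature, the hedged boto3 sketch below enables S3 server access logging for a hypothetical bucket; the target bucket must already grant the S3 log delivery service permission to write.

```python
import boto3

s3 = boto3.client("s3")

# Record an access log entry for every request made against the bucket,
# delivering the logs to a separate audit bucket under a common prefix.
s3.put_bucket_logging(
    Bucket="example-secure-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-audit-logs",
            "TargetPrefix": "s3-access/",
        }
    },
)
```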

Incident Response

Without the right incident response resources, processes, and software in place, one cannot act on issues found in the logs. Incident response and management, as described by Krutz & Vines (2010), are very important in the cloud computing arena, and the capability has to be present in the organizations of both the cloud user and the cloud provider. Incident response requires two primary components, as described by Krutz & Vines (2010). The first is the creation and maintenance of an intrusion detection system, with processes for instance and network monitoring and event notification in place. The second is an incident response team to analyze incoming events and respond to incidents as needed, along with a defined escalation process and, once a resolution is in place, post-incident follow-up and communication of the incident to the appropriate team members in the organization.

Conclusion

Many IT professionals see migrating applications to cloud computing as risky and unsafe. Often, the reason AWS is labeled risky or unsafe is that the professional has not had the training or understanding needed to reach that conclusion properly. This paper set out to lay those fears to rest by outlining the steps to build or migrate an application to the cloud safely and securely. When migrating to the cloud, as discussed, one of the most important first concepts is to understand which security risks the customer is required to address and which ones are overseen by AWS. Separating the responsibilities between the customer and AWS ensures each understands its role in keeping a client's data and applications safe in the cloud, and having the right risk, compliance, and control measures in place helps make those responsibilities clear. Another important part is understanding the architecture of the AWS cloud and the best practices for setting up users, virtual machines, networking, monitoring, and alerting. Professionals who do not do the background training to understand the concepts and architecture of the AWS operating environment before migrating will naturally create security risks for themselves and their organizations. The AWS cloud, or any other virtualized environment, will continue to leave the impression that its architecture is insecure and unsafe as long as professionals do not understand the ins and outs of the environment into which they have been asked to migrate their organization's applications and data.

References

AWS. (2013). Architecting on AWS 2.6 Student Guide. VitalSource Bookshelf version. Amazon Web Services training guide covering AWS architecture fundamentals and best practices.

AWS. (2016). AWS Security Best Practices. PDF whitepaper providing comprehensive guidance on implementing security controls and best practices for AWS cloud infrastructure.

AWS. (2017a). Amazon Web Services: Overview of Security Processes. PDF whitepaper detailing AWS security processes, infrastructure protection, and the shared responsibility model.

AWS. (2017b). AWS Risk and Compliance Overview. PDF document outlining AWS risk management framework, compliance programs, and regulatory standards.

AWS. (2017c). Amazon Web Services: Risk and Compliance. PDF whitepaper covering risk assessment methodologies, compliance frameworks, and control implementation in AWS environments.

Golden, B. (2013). Amazon Web Services For Dummies. Somerset: Wiley. Comprehensive guide introducing AWS services, cloud concepts, and practical implementation strategies for beginners.

Khan, S., Khan, S., & Farooqui, Z. (2016). A survey on cloud security and various attacks on cloud. International Journal of Computer Applications, 147(14), 17-20. Research paper examining cloud security threats, vulnerabilities, and attack vectors in cloud computing environments.

Krutz, R. L., & Vines, R. D. (2010). Cloud Security. Hoboken: John Wiley & Sons. Textbook covering cloud security fundamentals, risk management, and security controls for cloud deployments.

Marinescu, D. C. (2013). Cloud Computing. Saint Louis: Elsevier Science. Technical reference book discussing cloud computing architecture, security challenges, and distributed computing systems.

Rubóczki, E. S., & Rajnai, Z. (2015). Moving towards cloud security. Interdisciplinary Description of Complex Systems, 13(1), 9-14. Academic paper analyzing cloud security trends, economic benefits, and resource optimization in cloud environments.
