Philippe Beauchamp

Cloud Security: Sample Cloud Analysis

Introduction

You have recently been hired by another voting software company as the Chief Information Security Officer. This company does not have any cloud presence at all, mainly due to fear of the unknown on the part of the CEO. You have been hired to prepare a recommendation on the viability of AWS as a solution to their soaring data center costs. Your part in this report is based on the security services and practices of Amazon Web Services, and how those services and practices impact customer data.

Situation

Using what you have learned about AWS security, you are confident that the company can safely migrate their data and servers to it. While you are not responsible for determining the method of migration, or any aspect of the setup, you are responsible for briefing the CEO and executive committee on your findings related to security. When the migration is complete, you will be responsible for the company’s website (the primary source of revenue for the company), as well as many internal servers and internal data that will be stored in EC2/S3. You are to consider the security aspects of:

      1. The internal data

      2. The internal servers

      3. The external (Internet) website, including the database servers that support that site

      4. The company’s Microsoft Active Directory

      5. The separation of data based on classification (i.e. regular company data vs. confidential financial data)

      6. Disaster recovery/business continuity

      7. Company policies related to the cloud

Technical Write-Up

Prepare a report for the executive committee detailing your findings on AWS security related to the points above. Do not go into detail about the implementation of the services in AWS; rather, concentrate on how AWS should be secured by your company. Include the services you will use from AWS and how they enhance or create the security infrastructure that protects your data. Discuss the policies you will need to create; you do not have to write those policies in full, just outline what they need to cover. This should be a fairly high-level report, going into detail only where necessary to enhance the reader's understanding of the security aspects and services provided by Amazon.

Requirements

3-6 pages in length (not including references or title pages) with references in APA style:

—-

Overview

As part of the current evaluation of the viability of AWS as a potential IT infrastructure platform, an assessment highlighting the necessary security precautions at the technical and procedural level is needed. This includes protection of all critical technical assets impacted by a move to AWS, to ensure the continued operation of the company and the protection of its information and services. It covers the internal servers and services moving to the cloud as well as the external-facing Voting Application that will also move to the cloud.

Purpose

This document lists the recommended technologies and policies to secure a potential migration to the Cloud using AWS.

Recommended Security Solutions

The internal data

Internal data moving to the cloud needs to be secured in transit and in storage. Because the data will be managed by a third party, an evaluation of the data should be performed before it moves to the cloud. Sensitive data must be encrypted in storage by default, and this applies to all systems handling that data; accordingly, EC2 and S3 storage will need encryption enabled for all sensitive data. Unless otherwise stated, all internal data should be considered sensitive.
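As an illustrative sketch only (not a prescribed implementation), the following boto3 snippet shows one way default server-side encryption could be enforced on an S3 bucket holding internal data; the bucket name and KMS key alias are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")

# Enforce SSE-KMS as the default encryption for every new object in the bucket.
# "internal-data" and "alias/internal-data-key" are placeholder names.
s3.put_bucket_encryption(
    Bucket="internal-data",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/internal-data-key",
                }
            }
        ]
    },
)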

An encrypted channel should be put in place for all communication between the on-premises network and AWS. AWS Direct Connect is recommended; as an alternative, a site-to-site VPN may be put in place at a lower cost, but with higher latency.

Backup and archiving of internal data are also necessary to ensure the ongoing availability of services and data and to prevent business outages. As an interim measure, a cloud Virtual Tape Library can be used as part of the transition to the cloud. In the final target architecture, data should be moved onto S3 with replication across availability zones, and snapshots should be archived to Glacier for long-term storage.
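As a hedged example of the archiving step, the sketch below shows how an S3 lifecycle rule could transition older objects to Glacier; the bucket name and the 90-day threshold are assumptions, not requirements.

import boto3

s3 = boto3.client("s3")

# Transition objects to the Glacier storage class after 90 days (placeholder values).
s3.put_bucket_lifecycle_configuration(
    Bucket="internal-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)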

The internal servers

The Amazon EC2 VM Import Connector should be used to assist in moving internal VMs onto AWS as EC2 instances for IaaS. This will help ensure that the VM instances are configured identically to the on-premises servers and that they are transferred securely.

Servers will need to be patched on a regular basis by leveraging the AWS Systems Manager Patch Manager capabilities.
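A minimal sketch of how a patch scan could be triggered with Systems Manager, assuming instances are tagged with a hypothetical PatchGroup tag value of "internal-servers":

import boto3

ssm = boto3.client("ssm")

# Run the standard AWS-RunPatchBaseline document in Scan mode against tagged instances.
ssm.send_command(
    Targets=[{"Key": "tag:PatchGroup", "Values": ["internal-servers"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Scan"]},  # use "Install" to apply missing patches
)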

Internal servers are expected to reside within private subnets in AWS, and only the key servers that internal users need access to should be routable from on-premises computers. This can be defined within the subnet routing rules.
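Beyond route tables, security groups can also restrict which on-premises addresses reach an internal server. The snippet below is a sketch only; the security group ID and corporate CIDR range are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Allow HTTPS only from the (hypothetical) on-premises corporate network range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "10.20.0.0/16", "Description": "On-premises users"}],
        }
    ],
)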

A second, normally offline, site should be defined within AWS in a different availability zone with active data replication (see Internal Data above). In a disaster scenario, the Disaster Recovery Plan should be carried out to bring the secondary site online, continuing internal services and making internal data available again.

The external (Internet) website, including the database servers that support that site

Most servers should be placed within a private subnet, with only the major interface points (i.e., web server gateways) placed within a public subnet.

Amazon API Gateway should be used as the interface point for any exposed APIs to ensure that back-end servers are not directly accessible to the public. Web servers should be protected by a Web Application Firewall (AWS WAF) to block known web-based exploits.
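For illustration, a web ACL with an AWS managed rule group could be created roughly as follows; the ACL and metric names are hypothetical, and the Scope would be CLOUDFRONT if the ACL is attached to a CloudFront distribution instead of a regional resource.

import boto3

wafv2 = boto3.client("wafv2")

# Create a web ACL that applies the AWS common rule set to incoming requests.
wafv2.create_web_acl(
    Name="voting-app-web-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "aws-common-rules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "voting-app-web-acl",
    },
)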

Servers should be set up with Elastic Load Balancing and Auto Scaling groups so that capacity can grow dynamically under load, which helps absorb denial-of-service (DoS) attacks.

The servers for this service should reside in at least two different availability zones with data replication, much as described for internal data. Ideally, the service would operate across availability zones simultaneously, with Route 53 directing users to the appropriate availability zone instances based on user location. This helps provide an overall resilient service and limits the impact of an outage in any one availability zone.

Servers should be set up with load balancing and Auto Scaling groups able to grow within expected upper limits. Logging and monitoring should be put in place, with alerts on scaling events to detect abnormal growth rates. A process will need to be defined for handling abnormal growth; for example, whether to limit access from specific locations using CloudFront geo restrictions, driven by CloudWatch alerts.
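A sketch of this combination, with placeholder names and thresholds: a target-tracking scaling policy keeps average CPU near a target, while a CloudWatch alarm flags abnormal growth in the group's desired capacity (group metrics collection must be enabled for that metric to be published).

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Scale the (hypothetical) web tier to keep average CPU around 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="voting-web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)

# Alert when the group grows past an expected upper limit (placeholder threshold of 20).
cloudwatch.put_metric_alarm(
    AlarmName="voting-web-asg-abnormal-growth",
    Namespace="AWS/AutoScaling",
    MetricName="GroupDesiredCapacity",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "voting-web-asg"}],
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=20,
    ComparisonOperator="GreaterThanThreshold",
)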

The company’s Microsoft Active Directory

The company's Active Directory should be synchronized with AWS Directory Service to allow AWS to leverage corporate user information and provide unified authentication that is traceable for security events.

IAM roles and groups should be used in AWS, and should leverage Active Directory roles where possible to determine which actions each user can perform within AWS. A process needs to be defined for change control on Active Directory and IAM updates so that adequate approvals are recorded and verified.

Multi-factor authentication (MFA) must be enforced for all privileged users performing privileged actions in AWS, and all critical actions must be logged with verifiable user identification along with event and action details.
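One common way to enforce this is an IAM policy that denies actions when no MFA is present. The sketch below is illustrative rather than a complete policy set, and the policy name is a placeholder.

import json
import boto3

iam = boto3.client("iam")

# Deny all actions for requests made without MFA (illustrative policy only).
require_mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllWithoutMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }
    ],
}

iam.create_policy(
    PolicyName="RequireMFAForPrivilegedUsers",
    PolicyDocument=json.dumps(require_mfa_policy),
)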

The separation of data based on classification (i.e. regular company data vs. confidential financial data)

Data classification can be performed using Amazon Macie, which analyzes data sets against its compliance criteria. The results of Macie's analysis can be used to determine the appropriate controls to place on the data, and whether data should be reorganized to allow better protection.

Depending on which services use the data sets (internal, external, or other categories), different access control rules can be put in place, or access can be blocked altogether, through a combination of this analysis and restructuring together with some of the controls described in the Active Directory section above.
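A hedged sketch of how a one-time Macie classification job could be started against a bucket holding company data; the account ID and bucket name are placeholders.

import boto3

macie = boto3.client("macie2")

# Run a one-time sensitive-data discovery job over a (hypothetical) data bucket.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="classify-company-data",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["company-data-bucket"]}
        ]
    },
)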

Disaster recovery/business continuity

A set of Disaster Recovery Plans and Business Continuity Plans covering both internal and external services needs to be defined, kept complete, and tested on a regular basis.

Technically, the internal and external disaster recovery plans can differ based on the Recovery Time Objectives and Recovery Point Objectives each has. A warm or cold DR strategy can be employed for the internal applications, while a hot DR strategy may be needed for the external Voting Application service based on expected usage and demand. Strategies for each have been suggested in the sections above.

Policies

The following policies should be written or updated as part of this move to the AWS cloud:

  • Patching Policy

  • Change Management Policy

  • Password Construction Guidelines

  • Password Protection Policy

  • Security Response Plan Policy

  • End User Encryption Key Protection Policy

  • DR Plan

  • Business Continuity Plan

  • Acceptable Use Policy

  • Data Breach Response Policy

  • Remote Access Policy

  • Remote Access Tools Policy

  • Router and Switch Security Policy

  • Wireless Communication Policy

  • Wireless Communication Standard

  • Database Credentials Policy

  • Technology Equipment Disposal Policy

  • Information Logging Standard

  • Lab Security Policy

  • Server Security Policy

  • Software Installation Policy

  • Web Application Security Policy

References

AWS Systems Manager Patch Manager. (n.d.). Retrieved from https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html

How Patches Are Installed. (n.d.). Retrieved from https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-how-it-works-installation.html

IAM Best Practices. (n.d.). Retrieved from https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

Features: Amazon Macie: Amazon Web Services (AWS). (n.d.). Retrieved from https://aws.amazon.com/macie/details/

Restricting the Geographic Distribution of Your Content. (n.d.). Retrieved from https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html

Amazon RDS documentation. (n.d.). Retrieved from https://docs.aws.amazon.com/rds/?id=docs_gateway

Deekonda, A. (2019, April 17). Implementing a disaster recovery strategy with Amazon RDS: Amazon Web Services. Retrieved from https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/

Creating an SMB File Share. (n.d.). Retrieved from https://docs.aws.amazon.com/storagegateway/latest/userguide/CreatingAnSMBFileShare.html

Best Practices for Amazon EC2. (n.d.). Retrieved from https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-best-practices.html

CloudBerry Lab. (n.d.). A complete guide for backup and DR on AWS [White paper]. Retrieved from http://docs.media.bitpipe.com/io_13x/io_136616/item_1510269/CloudBerry_Lab_Whitepaper_A_Complete_Guide_for_Backup_and_DR_on_AWS.pdf

Philippe Beauchamp

Cloud Security: Sample DR from Hybrid Cloud Infrastructure

Introduction

Your company has an on-premises data center, as well as a few applications that run entirely in AWS. Recently, the CEO of your company heard of a company that was forced into bankruptcy by the failure of a data center. She wants to ensure that your company is protected in the event of a loss of data center services, either the on-premises or the cloud-based ones. Migration of the on-premises services to the cloud is not possible right now, so a multi-part solution will be needed. You have been tasked with producing a high-level design document on how to achieve this at the lowest possible cost.

Situation

Your company currently runs its website in AWS, as described in previous lab reports. In addition to the servers you have interacted with, assume there are several databases and other large storage servers in AWS to facilitate the voting application. In addition to these AWS services, your company has a number of VMs (running on VMware ESX) in a co-location data center near the company headquarters. These VMs run services such as file servers for Word/Excel documents, the email system, internal accounting systems, and other business-specific applications. While there are backups to a local tape drive, nothing else is done to protect the data.

Technical Write-Up

Prepare a report describing how you will protect both the on-premises and AWS data in the event of a disaster. You should attempt to minimize costs, as your company is not willing to spend a great deal of money on this initiative, since they feel the odds of needing it are small. Outline how you will protect the data, how a disaster would be handled, and how services will be restored in the event a disaster occurs. The audience of this paper will be the executive committee of the company, who have some, but not a great deal of, IT knowledge.

—-

Overview

The company currently hosts a number of services in a data center near its headquarters, as well as hosting the voting application in AWS. The company is in the process of moving more services to the cloud, but needs a DR solution for both the current and future architectures.

Disaster Recovery is defined here as the ability to recover critical IT data and services after any event that can impact those services. This includes complete data-center or city-wide disasters.

No Recovery Time Objective (RTO) or Recovery Point Objective (RPO) is defined at this point. However, given that the voting application is publicly accessible, it is assumed to need a quick recovery, while business data and services are assumed to have more flexibility, especially on the RTO.

Purpose

This report describes how both on-premises and AWS data and services will be protected in the event of a disaster while minimizing costs, and defines how services will be restored after the disaster is over.

Main areas of concern

On AWS

A set of Linux and Windows servers, databases and application servers are hosted on AWS to provide an internationally accessible voting application, including potentially sensitive personal data.

“On Prem” Data centre near headquarters

The on-premises data centre hosts accounting applications, file servers, an email system, and other internal business applications. The data centre is located near the corporate headquarters, which means that in some disaster scenarios both the headquarters and the data centre might be affected at the same time. Recovery of the headquarters is covered by the Business Continuity Plan.

Migrated Applications onto AWS

In the future the on-premises services may be placed onto AWS as well, at which point a cost-effective cloud DR strategy will be needed.

DR for Voting Application

It is recommended that at least two different availability zones be selected and that the EC2 instances be duplicated across them. At the data layer, the databases should replicate between the availability zones. Later, an alternative such as AWS RDS could, if possible, replace the Oracle instances and be used for the data-level DR plan. The existing AWS availability zone should be designated the primary location, and the new availability zone should be designated for DR. In the DR zone, all servers except the database server should be shut down to reduce cost, and the database server should be set up as a read replica. This would also be the initial configuration after moving to AWS RDS. In the event of a disaster, the servers will be turned on and the database server will be promoted to become the master database server. Route 53 can be used to ensure that, in the case of a disaster, traffic is routed to the DR availability zone.

On recovery, the process can be reversed. The original availability zone can be set up with the same machine instances and database, but with this database reconfigured as the read replica. When the data is properly synchronized, it can be promoted to master, Route 53 can be configured to route traffic to this availability zone, the servers in the DR zone can be turned off, and the database server(s) there reconfigured to act as read replicas.
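With Amazon RDS, the replica and promotion steps described above map to two API calls. The sketch below uses placeholder instance identifiers and an assumed availability zone name.

import boto3

rds = boto3.client("rds")

# Create a read replica of the production database in the DR availability zone.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="voting-db-replica",
    SourceDBInstanceIdentifier="voting-db-primary",
    AvailabilityZone="us-east-1b",
)

# During failover (or failback), promote the replica to a standalone master.
rds.promote_read_replica(DBInstanceIdentifier="voting-db-replica")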

If the RPO is large enough, a backup storage and recovery strategy can be used instead to reduce cost further. A scheduled backup can be made of the database servers on a daily basis and stored outside of this availability zone. The DR zone, with the servers preconfigured but turned off, would still be needed for this solution, but data would be restored from this backup rather than already being present in the database as a read replica.

(See Deekonda, 2019: https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/)
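For the backup-based alternative, a daily snapshot could be taken and copied out of the primary region; the identifiers and regions below are placeholders in this sketch.

import boto3

# Snapshot in the primary region, then copy the snapshot to the DR region.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_dr = boto3.client("rds", region_name="us-west-2")

rds_primary.create_db_snapshot(
    DBSnapshotIdentifier="voting-db-daily",
    DBInstanceIdentifier="voting-db-primary",
)

rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier="arn:aws:rds:us-east-1:123456789012:snapshot:voting-db-daily",
    TargetDBSnapshotIdentifier="voting-db-daily-dr",
    SourceRegion="us-east-1",
)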

DR strategy for on-prem

The current architecture has the following data and services hosted within the on-premises data center. It is expected that proper backup and offsite storage plans already exist for these services within the data center.

  • File Server

  • Email System

  • Accounting Applications

  • Internal Business Applications

However, in a data center disaster scenario, recovery of the data, even from offsite storage, would still be problematic, since the hardware needed to run the recovered data may not be available. Cloud Virtual Tape Libraries might therefore provide some assistance, but without the ability to run the original VMs they are not a complete solution.

For this reason, the Amazon EC2 VM Import Connector should be used to replicate the on-premises servers into EC2 as instances that are kept turned off until a disaster occurs and they need to be turned on. This will save costs over continually replicating the VMs into AWS. Cloud Virtual Tape Libraries can then be leveraged to store the backups in AWS and retrieve them to update the VMs in the case of a disaster.
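In a disaster, the pre-imported EC2 instances only need to be started. A minimal sketch, assuming the DR instances carry a hypothetical DR-Site=true tag:

import boto3

ec2 = boto3.client("ec2")

# Find the stopped DR instances by tag and start them.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:DR-Site", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["stopped"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if instance_ids:
    ec2.start_instances(InstanceIds=instance_ids)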

The rollback from DR in this scenario would involve ensuring that the on-premises hardware is reconfigured, retrieving the backup data to rebuild the servers, and then shutting down the DR site. However, it may be more efficient overall to treat this as a migration to the cloud: consider the DR instances the new production instances, and set up a new DR site in a different availability zone. See "Future (On AWS)" below.

Future (On AWS) - Migrated Applications

Once all of the EC2 instances are on AWS and working, this essentially represents a migration of the services to the cloud. At this point a decision can be made to use these instances, rather than the on-premises instances, as the main services and to decommission the on-premises data center.

At this point, strategies similar to those for the Voting Application can be applied. A new set of machine instances should be set up in a different availability zone and kept shut down, with any database servers set up as read replicas until a disaster occurs. Similarly, the recovery from DR can be handled in reverse: stand up the production servers in the production availability zone, but keep them shut down, with the database servers in read-replica mode. Once the data is replicated, these databases can be promoted to masters, the DR databases reconfigured as read replicas, the production servers turned on, and the DR servers shut down.

Route 53 can be configured to route traffic appropriately to the Production or DR site based on which is currently the master.
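Failover routing records in Route 53 are one way to express this; the hosted zone ID, domain name, IP addresses, and health check ID below are placeholders in this sketch.

import boto3

route53 = boto3.client("route53")

# Primary record with a health check; Route 53 fails over to the secondary when it is unhealthy.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.20"}],
                },
            },
        ]
    },
)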

Regarding the file server, as it is moved to the cloud it should be changed from a hosted file server to an S3-based solution to leverage S3's replication capabilities, with a Storage Gateway placed on-premises to access the file storage. This would relieve some of the file-level replication requirements needed for DR scenarios. Users would be able to access a file share just as they do currently, but files would be replicated to other locations transparently, and less infrastructure would be needed, even on virtual machines.

References

Amazon RDS documentation. (n.d.). Retrieved from https://docs.aws.amazon.com/rds/?id=docs_gateway

Deekonda, A. (2019, April 17). Implementing a disaster recovery strategy with Amazon RDS: Amazon Web Services. Retrieved from https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/

Creating an SMB File Share. (n.d.). Retrieved from https://docs.aws.amazon.com/storagegateway/latest/userguide/CreatingAnSMBFileShare.html

Best Practices for Amazon EC2. (n.d.). Retrieved from https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-best-practices.html

CloudBerry Lab. (n.d.). A complete guide for backup and DR on AWS [White paper]. Retrieved from http://docs.media.bitpipe.com/io_13x/io_136616/item_1510269/CloudBerry_Lab_Whitepaper_A_Complete_Guide_for_Backup_and_DR_on_AWS.pdf

Philippe Beauchamp

Cloud Security: Sample Patching Policy

Introduction

Recently, a Trojan attacked your company’s systems through an unpatched vulnerability within Windows. The patch for this was released over a year ago, but was not installed. The CEO was not happy when she learned that the situation could have been prevented by the installation of a small patch. As a result, she has tasked you to design a system, using AWS tools, to ensure that no security vulnerability will go unpatched. In addition, she has tasked you to prepare a report on any other security vulnerabilities you can find within the company.

Situation

Your organization currently runs its website on two Linux machines, but also has numerous Windows machines, database servers, and several application servers. You have never had any form of patching policy, so most of your servers are very far behind in patches. No one has ever considered any other form of security against external or internal threats.

You currently use an on-site Active Directory for every form of user access. Numerous people use shared accounts ("generic1", "generic2", and "generic3") to log in, a legacy situation from years ago. You know this is not secure, but you will need to articulate it in a way your CEO can understand. In addition, your AWS environment does not use the Active Directory; people in the directory have set up their own logins.

Finally, your environment holds a great deal of personally identifiable information, such as credit card information and sensitive voter information. Nobody at the company knows where all of this data is stored.

Technical Write-Up

Prepare a short (5 to 7 page) report describing at least three Amazon tools that will assist in these problems. Describe the tools you have selected as well as how they might help solve these problems. Give some recommendations as to policy changes that will help ensure the security of company data. Be sure to keep your report at a fairly high level, giving technical details where required, but always remembering that your audience is the CEO of the company.

—-

Overview

This document describes the setup as well as the plan to deal with major security-related threats to the listed systems hosted in AWS.

This includes:

  • Two Linux Web Servers

  • Numerous Windows machines

  • Numerous Database Servers

  • Several Application Servers

Patching is of particular concern. In addition, identifying and managing user information that is sensitive due to privacy implications, and implementing best practices for authentication and account management, are desired outcomes.

Purpose

This document provides a high-level view of the possible tools to help solve these problems, as well as recommendations on policy changes that will help ensure the security of company data.

Main areas of concern

Patching against vulnerabilities

Both Linux and Windows systems require patching for vulnerabilities, and this has never been configured before. This leaves systems that can be exploited through known vulnerabilities, compromising systems and digital assets. The database and application servers are known to reside on either Windows or Linux.

Account and Authentication Best Practices

Accounts are being created in an ad-hoc manner that does not leverage the identities in the corporate directory (Active Directory). Furthermore, accounts are being shared. This leads to a situation where it cannot really be known who is performing which actions on the systems, and where it may be easy for an unauthorized person to gain access to systems. Since activity is not attributable, it may not even be possible to determine who performed an action after the fact. The company becomes blindly vulnerable to attacks on these systems.

Privacy Implications

There is a considerable amount of personally identifiable information that can likely be considered private user information. It is not currently known where this data resides in the solutions.

Best Practices

AWS System Manager Patch Management

AWS Systems Manager has the ability to patch both Windows and Linux systems:

Minhas, S. (2018, November 12). Patching your Windows EC2 instances using AWS Systems Manager Patch Manager: Amazon Web Services. Retrieved from https://aws.amazon.com/blogs/mt/patching-your-windows-ec2-instances-using-aws-systems-manager-patch-manager/

“Patch Manager automates the process of patching Windows and Linux managed instances. Use this feature of AWS Systems Manager to scan your instances for missing patches or scan and install missing patches. You can install patches individually or to large groups of instances by using Amazon EC2 tags.”

AWS Systems Manager Patch Manager. (n.d.). Retrieved from https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html

“AWS Systems Manager Patch Manager automates the process of patching managed instances with both security related and other types of updates. You can use Patch Manager to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for Microsoft applications.) You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type. This includes supported versions of Windows Server, Ubuntu Server, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), CentOS, Amazon Linux, and Amazon Linux 2. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches.”

AWS Systems Manager works by monitoring individual systems through an agent on each system that it communicates with, and it can be configured with specific IAM credentials and roles. Maintenance Windows, Patch Baselines, and Patch Groups are defined to assist with the automation of patching. During the maintenance window, if patches are needed they can be deployed automatically.

How Patches Are Installed. (n.d.). Retrieved from https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-how-it-works-installation.html

“When a patching operation is performed on a Windows instance, the instance requests a snapshot of the appropriate patch baseline from Systems Manager. This snapshot contains the list of all updates available in the patch baseline that have been approved for deployment. This list of updates is sent to the Windows Update API, which determines which of the updates are applicable to the instance and installs them as needed. If any updates are installed, the instance is rebooted afterwards, as many times as necessary to complete all necessary patching.”

Linux patching follows a similar flow, which is detailed in the same link above. Approval Rules define which patches should be applied via "yum".
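As an illustration of an approval rule, the sketch below defines a baseline that auto-approves critical and important security patches after seven days; the baseline name, operating system, and delay are assumptions.

import boto3

ssm = boto3.client("ssm")

# Auto-approve Critical/Important security patches 7 days after release (placeholder values).
ssm.create_patch_baseline(
    Name="linux-security-baseline",
    OperatingSystem="AMAZON_LINUX_2",
    ApprovalRules={
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "CLASSIFICATION", "Values": ["Security"]},
                        {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
                    ]
                },
                "ApproveAfterDays": 7,
            }
        ]
    },
)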

MFA, AWS Directory Services, and IAM for Access Control

The following actions are defined by Amazon to ensure good access control:

IAM Best Practices. (n.d.). Retrieved from https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

  • Lock Away Your AWS Account Root User Access Keys

  • Create Individual IAM Users

  • Use Groups to Assign Permissions to IAM Users

  • Grant Least Privilege

  • Get Started Using Permissions with AWS Managed Policies

  • Use Customer Managed Policies Instead of Inline Policies

  • Use Access Levels to Review IAM Permissions

  • Configure a Strong Password Policy for Your Users

  • Enable MFA for Privileged Users

  • Use Roles for Applications That Run on Amazon EC2 Instances

  • Use Roles to Delegate Permissions

  • Do Not Share Access Keys

  • Rotate Credentials Regularly

  • Remove Unnecessary Credentials

  • Use Policy Conditions for Extra Security

  • Monitor Activity in Your AWS Account

From these, the most immediate practices and technologies to implement are those ensuring that users are uniquely identified and that their activities are traceable to approved individuals. Adopting multi-factor authentication (MFA) for privileged accounts helps assure that critical actions are traceable to approved individuals by requiring users to authenticate with at least a second factor that is uniquely held by that individual. Processes should be put in place to determine when accounts and privileges are assigned. Tying access control and authentication to the corporate directory allows the HR processes that grant corporate accounts to be leveraged for system accounts, and also helps with traceability, since users' authentication with corporate credentials can be leveraged as well.

AWS Directory Service allows the integration of the corporate Active Directory with AWS IAM accounts, roles, and privileges to enable this:

IAM Authentication and Access Control for AWS Directory Service. (n.d.). Retrieved from https://docs.aws.amazon.com/directoryservice/latest/admin-guide/iam_auth_access.html

Access to AWS Directory Service requires credentials that AWS can use to authenticate your requests. Those credentials must have permissions to access AWS resources, such as an AWS Directory Service directory. The following sections provide details on how you can use AWS Identity and Access Management (IAM) and AWS Directory Service to help secure your resources by controlling who can access them:

  • Authentication

  • Access Control

With this in place, focus can shift to authorization, policies, groups, and roles, as well as the account lifecycle for credential removal, to address any remaining issues.
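A small sketch of the "remove unnecessary credentials" practice: list each user's access keys and report when they were last used, so stale keys can be reviewed and removed. No names are assumed; the script simply iterates over existing users.

import boto3

iam = boto3.client("iam")

# Report the last-used date of every access key so stale credentials can be removed.
for user in iam.list_users()["Users"]:
    for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
        last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        date = last_used["AccessKeyLastUsed"].get("LastUsedDate", "never used")
        print(user["UserName"], key["AccessKeyId"], key["Status"], date)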

AWS Identity and Access Management (IAM) Best Practices. (2019, April 01). Retrieved from https://cloudcheckr.com/cloud-security/top-5-iam-best-practices/

  1. Enable multi-factor authentication (MFA) for privileged users

  2. Use Policy Conditions for Extra Security

  3. Remove Unnecessary Credentials

  4. Use AWS-Defined Policies to Assign Permissions Whenever Possible

  5. Use Groups to Assign Permissions to IAM Users

AWS Macie for Private Information Detection

AWS Macie is an AI service for analyzing stored data, which can be used to determine and trace sensitive and private information:

Features: Amazon Macie: Amazon Web Services (AWS). (n.d.). Retrieved from https://aws.amazon.com/macie/details/

“Amazon Macie uses machine learning to better understand where your sensitive information is located and how it’s typically accessed, including user authentication, locations, and times of access. Today, Amazon Macie is available to protect data stored in Amazon S3, with support for additional AWS data stores coming later this year. Amazon Macie first creates a baseline and then actively monitors for anomalies that indicate risks and/or suspicious behavior, such as large quantities of source code being downloaded, credentials being stored in an unsecured manner, or sensitive data that is configured to be externally accessible. With the Amazon Macie console, your most important information is front and center with detailed alerts and recommendations for how to resolve issues. Amazon Macie also gives you the ability to easily define and customize automated remediation actions, such as resetting access control lists or triggering password reset policies.”

The next steps should be to organize and protect information that has been determined to be private, once its locations are known.

Philippe Beauchamp

Cloud Security: Sample Web Firewall rule base

Sample scenario:

Your company's voting application is currently being sold exclusively through the website that you have set up. Currently, your system is approved in most countries, but not in China or India. The governments of those countries have threatened to sue your company if anyone uses your application.

In addition, due to false information released by your major competitor, your company has been targeted by many hacker groups, claiming your organization is in league with corrupt politicians and organized crime. As a result, your website is often a target for denial of service attacks as well as other hacking attempts.

Finally, your company is planning to separate the website you currently have set up to include a back-end database service.

Technical Write-Up

In 5 to 10 pages, describe your setup, as well as your plan to deal with each of the three problems outlined above. Describe how your security framework will work, what steps you will take to eliminate or mitigate the problems, and what the final architectural design will look like. Include specific examples where needed. You may (but are not required to) include third-party applications and solutions along with those provided by Amazon, but if you do, be sure to well document their use along with the Amazon solution.

—-

Overview

This document describes the setup as well as the plan to deal with three major security-related threats to the company's voting application hosted in AWS.

Main areas of concern

Three main areas of concern were highlighted to be eliminated or mitigated:

  1. Prohibition in China and India

    While most countries allow access to this application, access from China and India needs to be blocked to prevent litigation and the costs incurred from damages.

  2. Protection from DDoS attacks

    Coordinated hacker groups are known for employing not only Denial of Service attacks but also Distributed Denial of Service attacks, which can be more difficult to eliminate or mitigate. The solution needs to be prepared for this.

  3. Public APIs to Database

    Proposed plans include exposing a database API publicly. This creates a new attack surface and needs to be designed to avoid the same risks outlined in points 1 and 2 above, with special consideration given to protecting the internal database from direct access over the internet, which could open up further exploits.

Best Practices

Geo Restriction

Amazon provides a Geo Restriction feature as part of Amazon CloudFront to allow either whitelisting or blacklisting of access based on location:

“Restricting the Geographic Distribution of Your Content.” Amazon, Amazon, docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html.

“When a user requests your content, CloudFront typically serves the requested content regardless of where the user is located. If you need to prevent users in specific countries from accessing your content, you can use the CloudFront geo restriction feature to do one of the following:

  • Allow your users to access your content only if they're in one of the countries on a whitelist of approved countries.

  • Prevent your users from accessing your content if they're in one of the countries on a blacklist of banned countries.”

In this case, blacklisting China and India is required, as no other locations need restrictions. Leveraging CloudFront with geo restrictions will therefore address the concerns in problem 1 and (part of) problem 3 above.
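For illustration, the geo restriction is a small block inside the CloudFront distribution configuration. The sketch below retrieves the existing configuration, sets a blacklist of CN and IN, and writes it back; the distribution ID is a placeholder.

import boto3

cloudfront = boto3.client("cloudfront")

# Fetch the current distribution configuration (ETag is required for the update call).
response = cloudfront.get_distribution_config(Id="E1234567890ABC")
config = response["DistributionConfig"]

# Blacklist China and India; all other countries remain allowed.
config["Restrictions"] = {
    "GeoRestriction": {"RestrictionType": "blacklist", "Quantity": 2, "Items": ["CN", "IN"]}
}

cloudfront.update_distribution(
    Id="E1234567890ABC",
    IfMatch=response["ETag"],
    DistributionConfig=config,
)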

Protection from Distributed Denial of Service

Amazon provides a whitepaper specifically on Distributed Denial of Service (DDoS), which is relevant to point 2 and (part of) point 3 above.

DDoS attacks can come at multiple levels.

"AWS Best Practices for DDoS Resiliency." Amazon Web Services, d1.awsstatic.com/whitepapers/Security/DDoS_White_Paper.pdf.

While some mitigations exist automatically within AWS, further architectural changes are typically recommended, especially for a large-scale international application:

“Some forms of DDoS mitigation are included automatically with AWS services. You can further improve your DDoS resilience by using an AWS architecture with specific services and by implementing additional best practices.”

The main services which provide the most direct assistance are AWS Shield Standard, CloudFront and Route53:

“You can use AWS Shield Standard as part of a DDoS resilient architecture to protect both web and non-web applications.”

“Additionally, you can leverage AWS services that operate from edge locations, like Amazon CloudFront and Amazon Route 53, to build comprehensive availability protection against all known infrastructure layer attacks. Using these services – part of the AWS Global Edge Network – can improve the DDoS resilience of your application when you serve web application traffic from edge locations distributed around the world”

This configuration provides a number of DoS-related benefits:

“Several specific benefits of using Amazon CloudFront and Amazon Route 53 include the following:

• AWS Shield DDoS mitigation systems that are integrated with AWS edge services, reducing time-to-mitigate from minutes to sub-second.

• Stateless SYN Flood mitigation techniques that proxy and verify incoming connections before passing them to the protected service.

• Automatic traffic engineering systems that can disperse or isolate the impact of large volumetric DDoS attacks.

• Application layer defense when combined with AWS WAF that does not require changing your current application architecture (for example, in an AWS Region or on-premises datacenter).”

Since DDoS can come from multiple network levels including application, the WAF integration (as mentioned above) is also highly recommended to prevent known application level attacks.

Further mitigation can be achieved by allowing elasticity in the architecture based on expected growth, and by restricting the impact to specific sets of instances (for example, by region).

Amazon recommends (see whitepaper for details) selecting appropriate instance sizes, enabling autoscaling, setting up load balancing, scaling load balancing automatically, and configuring delivery and resolution at the edge to localize access. This will allow, for example, one region to be unavailable, while others remain available so that a DoS may impact one region without impacting another.

Protecting API Service internal resources

Finally, APIs can expose internal data-level resources externally. Ideally, only the expected application calls should be exposed, without allowing any direct access to the internal servers or services. Amazon recommends Amazon API Gateway for this purpose:

“By using Amazon API Gateway, you don’t need your own servers for the API frontend and you can obfuscate other components of your application. By making it harder to detect your application’s components, you can help prevent those AWS resources from being targeted by a DDoS attack.”

Proposal

The proposal here is to leverage Amazon’s DDoS-Resilient Reference Architecture listed in the AWS Best practices reference provided previously.

This involves setting up the EC2 server instances to auto scale (along with the relevant data stores) in a private subnet; placing ELB load balancers in front of the servers requiring access (also configured to auto scale) in a public subnet; using Amazon CloudFront with geo restrictions blacklisting China and India to restrict access geographically and prevent a number of known DDoS attacks; using Amazon API Gateway to expose only the acceptable set of API calls to the database, in front of CloudFront; using AWS WAF in front of CloudFront to prevent known web-based attacks; and using Amazon Route 53 to localize web access by region in order to restrict the impact of an attack. (See figure 5 in the AWS Best Practices reference previously mentioned for a diagram.)

Philippe Beauchamp

Cloud Computing: Typical Benefits

Cloud computing is typically defined as a utility-based model for computing resources with the ability to scale dynamically. AWS and Azure (the two leading providers) define the term as follows (respectively):

“… the on-demand delivery of compute power, database, storage, applications, and other IT resources via the internet with pay-as-you-go pricing.”

What is Cloud Computing. (n.d.). Retrieved from https://aws.amazon.com/what-is-cloud-computing/

“… the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.”

What Is Cloud Computing? A Beginner's Guide: Microsoft Azure. (n.d.). Retrieved from https://azure.microsoft.com/en-ca/overview/what-is-cloud-computing/

What this model provides in general is a greatly reduced upfront cost with a relatively smaller ongoing cost based on usage. This frees budget allocations to be applied to technology in a more agile manner, and makes it easier to adapt, given the flexibility to change the organization's technology landscape in near real time. Cloud providers typically have the resources to invest in security and a broad array of technologies that individual corporations would have difficulty running independently, especially if these are not the corporation's main areas of business. This translates into improved security at the infrastructure and physical layers, and improved scalability in most cases.

Business.com lists these as the main benefits of cloud computing, along with improvements in data availability due to cloud providers' greater capabilities in backup, recovery, and local and geographic redundancy; again, this stems from cloud providers' greater ability to make the infrastructure- and physical-level investments needed to achieve these benefits (e.g., having multiple data centers in geographic areas globally).

Main benefits listed by Business.com:

  • easier updates,

  • greater security,

  • data availability,

  • on-demand scalability

How to Increase Business Agility with Cloud Technology. (n.d.). Retrieved from https://www.business.com/articles/how-to-increase-business-agility-with-cloud-technology/

Philippe Beauchamp

Cloud Computing: Common Adoption Challenges

Cloud adoption can require a shift in thinking, and essentially represents a move from implementing solutions with in-house IT to offloading a proportion of that work to a strategic partner. Like many strategic partnerships, it works best when the partner has skillsets and services that match closely with the organization's requirements.

There are cases where this is more difficult for some organizations than others. Security is typically a multi-layered objective. Cloud computing has multiple levels of offerings, and so the degree to which cloud providers take on security can differ. Organizations need to be aware of the level of security provided and understand that they still bear overall responsibility for their organization's IT security, especially regarding data and the access control granted. Furthermore, some organizations may have security requirements higher than those generally provided by the cloud provider. These cases need to be evaluated carefully. For example, government Top Secret requirements might easily exceed what a cloud provider offers, and the confidentiality requirements might even prohibit having a third party handle the data or data flows at all, even if encrypted.

In general, legal and mandatory compliance regulations can be extremely prohibitive for cloud adoption where cloud providers have implemented controls according to general criteria.

How can a cloud computing solution be a challenge for certain companies? What are some of the new issues they must now address with the move to the cloud, from a security, privacy, legal, and compliance standpoint?

Like all aspects of the cloud, compliance is a moving target, as competition in the cloud provider market is driving providers to adopt more standard compliance criteria. Ongoing effort is therefore needed, by working with providers on compliance requirements in these areas. A strategic, phased approach is generally the best way to transition to a cloud model: assess which services and assets can be moved safely and which cannot, at least for the moment.

“Cloud providers and vendors already are stepping forward to address the language of this regulatory requirement for standards. New security and compliance products, as well as detailed hardening guidelines, address the need for industry-accepted control requirements or recommendations.”

Cloud Compliance: Tackling Compliance in the Cloud. (n.d.). Retrieved from https://searchsecurity.techtarget.com/feature/Cloud-Compliance-Tackling-Compliance-in-the-Cloud

Philippe Beauchamp

Cloud Computing: Common Cost Differences from On-Premise

Compare some of the common costs that are different between cloud model and on-premise IT model.

In general, the costs of physical servers, networking, operating systems, and many levels of security are reduced (relative to an on-premises model) in the cloud model, as these responsibilities are offloaded to the cloud provider; whereas (depending on IaaS, PaaS, or SaaS) higher-level logic, application installations, configurations, etc. remain the responsibility of the organization.

What are some budget line items that would cost more in the cloud cost model than in the traditional on-premise IT model?

Subscriptions will increase overall, as the cloud model relies on a subscription-based model for operational expenses on hardware, networking, and possibly software. The need for cloud engineers also increases, whereas other types of engineers may decrease due to the technology shift.

What other considerations should be included when looking at the economic impact of a cloud adoption?

Expected adaptability and usage projections, acceptable outage times for services and business continuity in the case of cloud provider outages, compliance requirements for services, staff training, and network stability.

How is the cost of the software license usually handled in a cloud computing solution?

In SaaS and PaaS, software costs are often embedded into the subscription price. In IaaS this can depend on the configuration. Some vendors can provide subscription-based licensing to accommodate a scalable cloud-based model (such as Processor Licenses (PL) and Subscriber Access Licenses (SAL)), but in other cases the organization is responsible for tracking licenses much as it would on-premises.

Trappler, T. J. (2013, April 18). Software licensing in the cloud. Retrieved from https://www.computerworld.com/article/2496855/software-licensing-in-the-cloud.html

Insider, W. (2015, August 07). Software Licensing in the Cloud - Now With More Flexibility. Retrieved from https://www.wired.com/insights/2012/03/licensing-cloud/

Philippe Beauchamp

Cloud Computing: Total Cost Ownership

Why is TCO dependent on the type of the organization?

Organizations have different requirements on how their services need to run and on what is acceptable for non-functional requirements, including scalability, security, compliance, etc. All of these factors have direct implications for the architectures of the organization's solutions, and therefore for the total costs associated with them.

What are some of the challenges in determining the TCO for a certain cloud adoption project?

One of the major benefits of cloud models is the flexibility of the solution. Since solutions can be dynamically defined based on changing business criteria, cost prediction becomes more difficult, depending on how flexible the organization wishes to be in its IT infrastructure. This has carry-through cost implications for R&D, training, staff retention, investment recovery, and more. The greatest advantage of the cloud is its adaptability, but this can also be the greatest challenge for the business for any technology the business is highly dependent on.

What factors inside and outside the company would complicate the calculations?

Examples of inside factors include organizational changes, training and education, and changes in technology direction due to business needs. Outside factors include peaks and troughs in usage demands, changes in consumer service expectations needed to remain competitive, and disaster events requiring changes to architectures. Any of these may result in unplanned alterations to the cloud-based infrastructure, and therefore unplanned costs that ideally would have been accounted for in the TCO.

Philippe Beauchamp

Cloud Computing: Cost Calculators

https://azure.microsoft.com/en-ca/pricing/calculator/#media-servicesd9334c86-66a8-4e24-854e-85fe4ecf365b

The Azure Pricing Calculator provides a point-and-click, wizard-style approach to calculating your Azure cloud costs. Beyond including typical costs for VM sizing, storage, and other services, it provides a comprehensive pricing catalog for PaaS offerings and services available from the Azure storefront, making it highly extensible. While this covers all of the necessary pricing calculation needs, the downside of this approach is a move away from the single-view worksheet seen in the AWS calculators, which benefits more advanced users by reducing clicks and page refreshes. However, for the relative newcomer to cloud platform pricing, Microsoft's approach provides a reasonable level of intuitive hand-holding in determining costs for cloud initiatives. This approach would be a welcome alternative addition to the AWS calculators reviewed in this course to assist those newer to the platform, while not removing the existing interface for more advanced users.

Philippe Beauchamp

Cloud Computing: IT Budget Impacts

Moving from an on-premises IT infrastructure to a cloud-based infrastructure can greatly influence how IT budgets are allocated. The greatest shift revolves around two main areas: first, the offloading of responsibilities to a selected third party (the cloud provider); and second, the move from purchasing hardware and software to a utility model where the hardware and software are owned by the cloud provider and a flexible subscription fee is paid based on usage.

These two points collectively move budgeting from capital-expense (CAPEX) budgeting, based on the purchase of assets to perform work, to operational-expense (OPEX) budgeting, where technology is not purchased but the expenses support the day-to-day work of maintaining the business.

As a result, the more an organization moves to a cloud model, the more large upfront IT procurement costs are eliminated and replaced with ongoing, recurring operational costs. Furthermore, some operational costs are also eliminated, as they are no longer the responsibility of the organization but become the responsibility of the cloud provider. These typically include operation and security of the physical computers, networking, storage capacity, backup, archiving, and operating systems, along with management of the data center buildings themselves.

In moving to the cloud, costs are in general reduced and based on consumption, and the result is a more dynamic IT infrastructure that can respond to current needs, helping ensure the appropriate allocation of resources at any given time.
