Additional Topics

  1. Bastion host: An AWS bastion host provides a secure entry point, a ‘jump’ server, for reaching your private instances from the internet.
    1. A bastion host is simply an EC2 instance running in your public subnet
    2. It allows SSH and RDP only from specific IP ranges
    3. The bastion host runs in a security group that has SSH/RDP access to the EC2 instances in your private subnets
    4. You can SSH to the bastion using your private key, then connect onward to EC2 instances in the private subnet using
      1. Remote Desktop Gateway for windows
      2. Agent forwarding for Linux SSH
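The Linux agent-forwarding flow above can be captured in an SSH client config using ProxyJump (a sketch; all hostnames, IPs, and key paths below are hypothetical):

```
# ~/.ssh/config
Host bastion
    HostName 203.0.113.10         # public IP of the bastion in the public subnet
    User ec2-user
    IdentityFile ~/.ssh/mykey.pem

Host private-app
    HostName 10.0.1.25            # private IP of the instance in the private subnet
    User ec2-user
    ProxyJump bastion             # hop through the bastion (or use ssh -A for agent forwarding)
```

With this in place, `ssh private-app` tunnels through the bastion in one step.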
  2. AWS Systems Manager is a management tool that enables you to gain operational insight and take action on AWS resources safely and at scale. Using Run Command, one of the automation features of Systems Manager, you can simplify management tasks by eliminating the need for bastion hosts, SSH, or remote PowerShell.
    1. If you have a running EC2 instance, find the role it is using and attach the AWS managed policy “AmazonEC2RoleforSSM” to that role. Remember you can attach multiple policies to a single role.
    2. If you are launching a new EC2 instance, create a new role with the “AmazonEC2RoleforSSM” policy attached and assign it to the instance
  3. NAT Gateways: To create a NAT gateway, you must provide a VPC, a public subnet, and an Elastic IP (EIP)
  4. Elastic MapReduce (EMR): a managed cluster platform for big data frameworks such as Hadoop and Spark
  5. Simple Email Service (SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. It is a reliable, cost-effective service for businesses of all sizes that use email to keep in contact with their customers. You can use the SES SMTP interface or one of the AWS SDKs to integrate Amazon SES directly into your existing applications. You can also integrate the email sending capabilities of Amazon SES into the software you already use, such as ticketing systems and email clients.
  6. QuickSight:
    1. AWS service that will aggregate your data from multiple data sources (S3, DynamoDB, RDS, etc.) and provide business intelligence based on this data.
  7. NAS (Network Attached Storage) EFS (Elastic File System):
    1. An Amazon EFS file system is accessed by EC2 instances running inside one of your VPCs.
    2. Instances connect to a file system by using a network interface called a mount target.
    3. Each mount target has an IP address and a DNS name; AWS assigns the IP automatically, or you can specify one.
    4. Use linux mount command to mount this to a folder such as /home/mysharedfolder
    5. Cost is billed in US$ per GB-month (around 30 cents per GB per month)
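The mount-target step above typically ends with a standard NFSv4 mount on the instance (a sketch; the file-system ID and region in the DNS name are hypothetical):

```
# Mount an EFS file system to a shared folder via its DNS name
sudo mkdir -p /home/mysharedfolder
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /home/mysharedfolder
```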
  8. Status Checks
    1. System status check checks the physical host
      1. Examples: power failure, network failure, system software issues, hardware failure. When this happens, stop and start the instance, which will launch it on different host hardware
    2. Instance status check checks the VM/OS
      1. Corrupt memory
      2. Exhausted memory
      3. Misconfigured network
      4. Kernel issues
      5. A reboot will usually fix these
  9. EBS volume types (max size 16 TiB for all; gp2 bursts up to 3,000 IOPS)
    1. General Purpose SSD: gp2 Can be root/boot volume
      1. General VMs, web servers. Min 1 GB
      2. 3 IOPS/GB, max 10,000 IOPS
    2. Provisioned IOPS SSD: io1 Can be root/boot volume
      1. High volume db server. Min 4 GB
    3. Standard Magnetic HDD: Can be root/boot volume. Magnetic volumes are backed by magnetic drives and are suited for workloads where data is accessed infrequently, and scenarios where low-cost storage for small volume sizes is important. These volumes deliver approximately 100 IOPS on average, with burst capability of up to hundreds of IOPS, and they can range in size from 1 GiB to 1 TiB.
    4. Throughput Optimized HDD:  st1
      1. Can’t be root/boot volume. Min 500 GB
      2. Big data, Data warehousing, Log processing
    5. Cold HDD: sc1 Can’t be root/boot volume
      1. Min 500 GB
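The gp2 sizing rule above (3 IOPS per GiB, a floor of 100, burstable to 3,000 IOPS, and a 10,000 IOPS cap as stated in these notes) can be sketched as:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline IOPS for a gp2 volume: 3 IOPS per GiB,
    with a floor of 100 and (per these notes) a cap of 10,000."""
    return min(max(3 * size_gib, 100), 10_000)

def gp2_burst_iops(size_gib: int) -> int:
    """Small gp2 volumes can burst up to 3,000 IOPS; once the
    baseline exceeds 3,000, the baseline itself is the ceiling."""
    return max(gp2_baseline_iops(size_gib), 3_000)
```

For example, a 100 GiB volume has a 300 IOPS baseline but can still burst to 3,000.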
  10. WorkSpaces
    1. Using the AWS WorkSpaces client, you can connect to a virtual desktop (Windows only)
    2. WorkSpaces are persistent
    3. Data on the D: drive is backed up every 12 hours
    4. End users do not need an AWS account
  11. Elasticity vs Scalability and difference between scaling up and scaling out
    1. Elasticity is being able to scale out and scale back (horizontal scaling) within a short period such as hours, days, or weeks. You achieve this by launching additional instances of the same type and terminating them after demand subsides
    2. Scalability is to scale up your systems as the business grows and demand increases over long term (think months and years). You can achieve scale up (vertical scaling) by increasing the memory/CPU by upgrading your instances to a new type (m1 to m2 etc)
    3. Scaling up may not be instantaneous. May need some downtime unlike scaling out which can happen instantaneously.
    4. DynamoDB is inherently scalable; however, you can increase the provisioned IOPS and decrease them later to achieve elasticity
    5. RDS is not elastic. You can scale it up by moving to a larger instance type (small to medium, etc.)
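The scale-out vs scale-up contrast above can be sketched as a toy model (instance types and capacities are illustrative, not real AWS figures):

```python
def scale_out(fleet: list, demand: int, capacity_per_instance: int) -> list:
    """Horizontal scaling (elasticity): add identical instances
    until fleet capacity meets demand; remove them later the same way."""
    fleet = list(fleet)
    while len(fleet) * capacity_per_instance < demand:
        fleet.append(fleet[0])  # launch another instance of the same type
    return fleet

def scale_up(instance_type: str) -> str:
    """Vertical scaling (scalability): move to a bigger instance type.
    Unlike scale-out, this may require downtime. Upgrade path is hypothetical."""
    upgrade_path = {"m1.small": "m1.medium", "m1.medium": "m1.large"}
    return upgrade_path.get(instance_type, instance_type)
```

Scaling out is near-instantaneous (just launch more of the same), while scaling up swaps the instance type itself.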
  12. Snowball imports/exports your data to S3. Replaces Import/Export service where you send your hard disk to AWS by courier.
    1. Snowball: 80 TB data can be transferred to AWS using a physical device
    2. Snowball Edge: 100 TB storage plus compute for running Lambda functions, all in one box. Use case: on board an aircraft
    3. Snowmobile: Extremely large amounts of data. Mounted on a truck. Capacity 100 PB
  13. Advantage of Direct Connect over VPN
    1. Better bandwidth as DC uses dedicated VLAN connection from your data center to AWS
    2. VPN uses the IPsec protocol over the internet and can drop mid-use if the internet connection has problems.
    3. VPN connections can be setup in minutes whereas direct connect takes weeks to setup
  14. Amazon Resource Names (ARN)
    1. Uniquely address any resource
    2. arn:PARTITION:SERVICE:REGION:ACCOUNTID:resourcetype/resource
    3. arn:PARTITION:SERVICE:REGION:ACCOUNTID:resourcetype:resource
    4. PARTITION=aws, SERVICE=s3 or iam etc., REGION=us-west-2 etc., ACCOUNTID is your account ID
    5. For globally unique resources such as S3, there is no need for REGION or ACCOUNTID, so simply use ::: e.g. arn:aws:s3:::mybucket/myfile.txt
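The colon-delimited ARN format above can be split mechanically; a minimal parser sketch:

```python
def parse_arn(arn: str) -> dict:
    """Split an ARN of the form arn:PARTITION:SERVICE:REGION:ACCOUNTID:resource
    into its parts. REGION and ACCOUNTID may be empty strings for
    globally unique services such as S3 (the ::: case)."""
    parts = arn.split(":", 5)  # at most 6 fields; the resource part may itself contain colons
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError(f"not a valid ARN: {arn!r}")
    keys = ("prefix", "partition", "service", "region", "account_id", "resource")
    return dict(zip(keys, parts))
```

For `arn:aws:s3:::mybucket/myfile.txt`, the region and account fields come back empty, matching the ::: shorthand described above.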
  15. Data transfer cost optimization
    1. Always use the private IP to transfer data between two instances within a single AZ to get local transfer rates; otherwise regional data transfer rates apply.
    2. If the instances are in different AZs, the regional data transfer rate applies regardless of whether private or public IPs are used.
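The two billing rules above reduce to a single condition; a sketch:

```python
def transfer_rate(same_az: bool, private_ip: bool) -> str:
    """Per the notes above: the local rate applies only when both
    instances are in the same AZ AND communicate over private IPs;
    in every other combination the regional rate applies."""
    return "local" if same_az and private_ip else "regional"
```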
  16. Nitro vs Xen Hypervisor
    1. Nitro reduces the software components at the host level, giving EC2 more bare-metal access and wasting less memory and CPU
    2. Eventually all Xen hosts will be phased out in favor of Nitro
    3. Nitro allows up to 27 PCI devices to be attached to your EC2 instance, including all your EBS volumes and ENIs
  17. AWS Data Pipeline
    1. Web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals.
    2. With AWS Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR.
    3. AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available.
    4. You don’t have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system.
    5. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos.
  18. Only M3 and C3 support paravirtualization (PV); the rest use HVM (Hardware Virtual Machine)
  19. To delegate permission to access a resource, you create an IAM role that has two policies attached. The permissions policy grants the user of the role the permissions needed to carry out the desired tasks on the resource. The trust policy specifies which trusted accounts are allowed to grant their users permission to assume the role. The trust policy on the role in the trusting account is one half of the permissions; the other half is a permissions policy attached to the user in the trusted account that allows that user to switch to (assume) the role.
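A minimal trust policy of the kind described above might look like the following (the account ID 111122223333 is a placeholder; substitute the trusted account):

```
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
    "Action": "sts:AssumeRole"
  }]
}
```

This half says who may assume the role; the permissions policy on the role itself says what the role can do.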
  20. Trusted Advisor provides information about four pillars (all except operational excellence) plus service limits
    1. Cost optimization
    2. Security issues/improvement advice
    3. Performance issues/improvement advice
    4. Fault Tolerance (Eg. If availability zones are used properly etc)
    5. Service Limits (Eg. How many EIPs are used and left)
  21. Launch Configs
    1. You can only specify one launch configuration for an Auto Scaling group at a time, and you can’t modify a launch configuration after you’ve created it.
    2. Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a new launch configuration and then update your Auto Scaling group with the new launch configuration.
    3. When you change the launch configuration for your Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected.
  22. Difference between bucket policies, IAM policies, and ACLs for use with S3, and examples of when you would use each.
    1. With IAM policies, companies can grant IAM users fine-grained control to their Amazon S3 bucket or objects while also retaining full control over everything the users do.
    2. With bucket policies, companies can define rules which apply broadly across all requests to their Amazon S3 resources, such as
      1. Granting write privileges to a subset of Amazon S3 resources.
      2. Customers can also restrict access based on an aspect of the request, such as HTTP referrer and IP address.
    3. With ACLs, customers can grant specific permissions (i.e. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object
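A bucket policy restricting access by source IP, as described above, might look like this (the bucket name and CIDR range are hypothetical):

```
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowReadFromOfficeRange",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::mybucket/*",
    "Condition": { "IpAddress": { "aws:SourceIp": "203.0.113.0/24" } }
  }]
}
```

Because it lives on the bucket, this rule applies broadly to all requests, in contrast to per-user IAM policies or per-object ACLs.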
  23. Import/Export service lets you import or export data into AWS (S3, EBS) by mailing your devices to Amazon. Replaced by snowball.
  24. Amazon EC2 Auto Scaling provides you with an option to enable automatic scaling for one or more EC2 instances by attaching them to your existing Auto Scaling group. After the instances are attached, they become a part of the Auto Scaling group. The instance that you want to attach must meet the following criteria:
    1. The instance is in the running state.
    2. The AMI used to launch the instance must still exist.
    3. The instance is not a member of another Auto Scaling group.
    4. The instance is in the same Availability Zone as the Auto Scaling group.
    5. If the Auto Scaling group has an attached load balancer, the instance and the load balancer must both be in EC2-Classic or the same VPC.
    6. If the Auto Scaling group has an attached target group, the instance and the load balancer must both be in the same VPC.
    7. When you attach instances, the desired capacity of the group increases by the number of instances being attached.
    8. If the number of instances being attached plus the desired capacity exceeds the maximum size of the group, the request fails.
    9. If you attach an instance to an Auto Scaling group that has an attached load balancer, the instance is registered with the load balancer.
    10. If you attach an instance to an Auto Scaling group that has an attached target group, the instance is registered with the target group.
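The attach criteria in items 1-6 above can be sketched as a single validation check (the dict shapes stand in for API responses and are illustrative):

```python
def can_attach(instance: dict, group: dict) -> bool:
    """Return True only if the instance meets the Auto Scaling
    attach criteria listed in the notes above."""
    return (
        instance["state"] == "running"         # 1. instance is running
        and instance["ami_exists"]             # 2. its AMI still exists
        and instance.get("asg") is None        # 3. not in another ASG
        and instance["az"] in group["azs"]     # 4. in one of the group's AZs
        and instance["vpc"] == group["vpc"]    # 5/6. same VPC as any LB/target group
    )
```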
  25. AWS Database Migration Service (DMS)
    1. helps you migrate databases to AWS quickly and securely.
    2. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
    3. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases. The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL.
    4. It also allows you to stream data to Amazon Redshift, Amazon DynamoDB, and Amazon S3 from any of the supported sources including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE, SQL Server and MongoDB, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse.
    5. AWS Database Migration Service can also be used for continuous data replication with high-availability.
  26. You can now turn on a CloudTrail across all regions for your AWS account. CloudTrail will deliver log files from all regions to the Amazon S3 bucket and an optional CloudWatch Logs log group you specified. Additionally, when AWS launches a new region, CloudTrail will create the same trail in the new region. As a result, you will receive log files containing API activity for the new region without taking any action.
  27. With cross-zone load balancing, each load balancer node for your Classic Load Balancer distributes requests evenly across the registered instances in all enabled Availability Zones. If cross-zone load balancing is disabled, each load balancer node distributes requests evenly across the registered instances in its Availability Zone only.
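The effect of toggling cross-zone load balancing can be sketched numerically, assuming one equally weighted load balancer node per enabled AZ:

```python
def traffic_share(instances_by_az: dict, cross_zone: bool) -> dict:
    """Fraction of total traffic that each individual instance in a
    given AZ receives, under the distribution rules described above."""
    total = sum(instances_by_az.values())
    node_share = 1 / len(instances_by_az)  # each LB node gets an equal slice of requests
    if cross_zone:
        # every node spreads its slice across ALL registered instances
        return {az: 1 / total for az in instances_by_az}
    # each node spreads its slice only over its own AZ's instances
    return {az: node_share / n for az, n in instances_by_az.items()}
```

With 1 instance in one AZ and 4 in another, disabling cross-zone leaves the lone instance handling half of all traffic; enabling it evens every instance out to one fifth.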
  28. The AWS cloud supports many popular disaster recovery (DR) architectures from “pilot light” environments that may be suitable for small customer workload data center failures to “hot standby” environments that enable rapid failover at scale.
  29. AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console. You simply point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (e.g. table definition and schema) in the AWS Glue Data Catalog. Once cataloged, your data is immediately searchable, queryable, and available for ETL. AWS Glue generates the code to execute your data transformations and data loading processes.
  30. AWS SDK supports: iOS, Android, Browser (JavaScript), Java, .NET, Node.js, PHP, Python, Ruby, Go, C++
  31. Import Export
    1. Import / Export Disk
      1. Import to S3, EBS, Glacier
      2. export from S3
    2. Import / Export Snowball
      1. Import to S3
      2. Export to S3
  32. AWS WAF: Used to control how Amazon CloudFront or an Application Load Balancer responds to web requests. Define your conditions, combine conditions into rules, and combine rules into a web ACL.
    1. Conditions define the basic characteristics that you want AWS WAF to watch for in web requests:
      1. Scripts that are likely to be malicious, such as cross-site scripting (XSS) or SQL injection
      2. IP addresses, address ranges, or the country/geographic location that requests originate from.
      3. Length of specified parts of the request, such as the query string.
      4. Strings that appear in the request, such as in a header (e.g. User-Agent), the body, or the query string.
    2. Rules
      1. Regular rules use only conditions to target specific requests. Example: all requests that come from a specified IP address AND contain the value “BadBot” in the User-Agent header.
      2. Rate-based rule
        1. Rate-based rules are similar to regular rules, but with a rate limit
        2. They count the requests that arrive from a specified IP address every five minutes
        3. The rule triggers an action if the number of requests exceeds the rate limit
    3. Web ACLs: You combine the rules into a web ACL. You define an action for each rule—allow (to be forwarded to CloudFront or an Application Load Balancer), block, or count
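The rate-based rule behavior above can be modeled as a per-IP counter over five-minute windows (a toy sketch; real WAF uses a rolling window, while this uses fixed buckets for simplicity):

```python
from collections import defaultdict

class RateBasedRule:
    """Toy model of a WAF rate-based rule: count requests per source IP
    within a five-minute window and block an IP once it exceeds the limit."""
    WINDOW = 300  # seconds

    def __init__(self, rate_limit: int):
        self.rate_limit = rate_limit
        self.counts = defaultdict(int)  # (ip, window-bucket) -> request count

    def check(self, source_ip: str, timestamp: float) -> str:
        bucket = int(timestamp // self.WINDOW)
        self.counts[(source_ip, bucket)] += 1
        if self.counts[(source_ip, bucket)] > self.rate_limit:
            return "BLOCK"  # the rule's configured action fires
        return "ALLOW"
```

Each IP is counted independently, so one noisy client is blocked without affecting others.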
Copyright 2005-2016 KnowledgeHills.