AWS Developer - Mon, Jun 14, 2021
And for this week we will do the Developer cert
ElastiCache
Select Memcached over Redis if you have the following requirements:
- You need the simplest model possible
- You need to run large nodes with multiple cores or threads
- You need the ability to scale out and in, adding and removing nodes as demand on your system increases and decreases
- You need to cache objects, such as a database
Lambda
- Traffic Shifting with Lambda Aliases
- An alias can point to at most two function versions; its routing config sends a % of traffic to the second version (sketch below)
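Rough boto3 sketch of weighted alias routing – function name, alias, and version numbers are made up:

```python
import boto3

lam = boto3.client("lambda")

# Shift 10% of the "live" alias traffic to version 2, keep 90% on version 1.
lam.update_alias(
    FunctionName="my-function",   # hypothetical function
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},
)
```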
- Invocation types (example call below)
- RequestResponse (default) – Invoke the function synchronously. Keep the connection open until the function returns a response or times out. The API response includes the function response and additional data
- Event – Invoke the function asynchronously. Send events that fail multiple times to the function’s dead-letter queue (if it’s configured). The API response only includes a status code
- DryRun – Validate parameter values and verify that the user or role has permission to invoke the function
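Quick boto3 sketch of picking the invocation type – function name and payload are hypothetical:

```python
import json
import boto3

lam = boto3.client("lambda")

# Synchronous call; RequestResponse is the default InvocationType.
resp = lam.invoke(
    FunctionName="my-function",            # hypothetical
    InvocationType="RequestResponse",      # or "Event" / "DryRun"
    Payload=json.dumps({"hello": "world"}),
)
print(resp["StatusCode"], resp["Payload"].read())
```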
- Reserved Concurrency
- AWS Lambda keeps the unreserved concurrency pool at a minimum of 100 concurrent executions out of the 1,000-per-account cap, so that functions without specific limits set can still process requests. In practice, if your total account limit is 1,000, you can allocate at most 900 to individual functions
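One-liner boto3 sketch of reserving concurrency for a function – name and number are made up:

```python
import boto3

lam = boto3.client("lambda")

# Reserve 50 concurrent executions for this function; everything else
# (minus the 100 kept unreserved) stays in the shared account pool.
lam.put_function_concurrency(
    FunctionName="my-function",            # hypothetical
    ReservedConcurrentExecutions=50,
)
```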
In Lambda an increase in memory size triggers a proportional increase in CPU.
Invocation types: invoke on demand from the CLI or an application (SDK)
- 3 types
- RequestResponse – default; keeps the connection open until done
- Event – invoke async; events that keep failing go to the DLQ
- DryRun – validate only
AWS Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby
DynamoDB
- Streams: valid values for StreamViewType are KEYS_ONLY, NEW_IMAGE, OLD_IMAGE, and NEW_AND_OLD_IMAGES (example below)
- KEYS_ONLY – only the key attributes of the modified item are written to the stream
- NEW_IMAGE – the entire item as it appears after it was modified is written to the stream
- OLD_IMAGE – the entire item as it appeared before it was modified is written to the stream; use this when you need the pre-modified values
- NEW_AND_OLD_IMAGES – both the new and the old images of the item are written to the stream
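boto3 sketch of enabling a stream with one of these view types – table name is hypothetical:

```python
import boto3

ddb = boto3.client("dynamodb")

# Capture the item as it looked *before* each change.
ddb.update_table(
    TableName="Orders",                    # hypothetical table
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "OLD_IMAGE",     # KEYS_ONLY | NEW_IMAGE | OLD_IMAGE | NEW_AND_OLD_IMAGES
    },
)
```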
To read data from a table, you use operations such as GetItem, Query, or Scan. DynamoDB returns all of the item attributes by default. To get just some, rather than all of the attributes, use a projection expression.
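Quick boto3 sketch of a projection expression – table and attribute names are made up:

```python
import boto3

table = boto3.resource("dynamodb").Table("Orders")   # hypothetical table

# Return only two attributes instead of the whole item.
resp = table.get_item(
    Key={"OrderId": "1234"},
    ProjectionExpression="OrderId, OrderStatus",
)
print(resp.get("Item"))
```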
When you create a global secondary index on a provisioned-mode table, you must specify read and write capacity for the index.
– Global secondary index — an index with a partition key and a sort key that can be different from those on the base table. A global secondary index is considered “global” because queries on the index can span all of the data in the base table, across all partitions.
– Local secondary index — an index that has the same partition key as the base table, but a different sort key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a base table partition that has the same partition key value.
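Hedged boto3 sketch of that point: adding a GSI to a provisioned-mode table means giving the index its own RCU/WCU. Table, attribute, and index names are hypothetical.

```python
import boto3

ddb = boto3.client("dynamodb")

# Add a GSI on CustomerId with its own provisioned throughput.
ddb.update_table(
    TableName="Orders",                                # hypothetical
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "CustomerId-index",
                "KeySchema": [{"AttributeName": "CustomerId", "KeyType": "HASH"}],
                "Projection": {"ProjectionType": "ALL"},
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5,
                },
            }
        }
    ],
)
```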
- Locking
- Optimistic Locking – ensures the client-side item you are updating is the same as the item in the DB so you don’t blindly overwrite it; each item has an attribute that acts as a version number, and if your number doesn’t match, someone else changed the item and you try again (sketch below)
- Pessimistic Locking – lock the item up front so nobody else can change it while you work
- Overly Optimistic Locking – assume nothing changed; single-user operations
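Hand-rolled optimistic-locking sketch using a conditional write – table, attributes, and the expected version number are made up:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Orders")    # hypothetical table

# Only write if the version we read earlier is still the current one.
try:
    table.update_item(
        Key={"OrderId": "1234"},
        UpdateExpression="SET OrderStatus = :s, Version = Version + :one",
        ConditionExpression="Version = :expected",
        ExpressionAttributeValues={":s": "SHIPPED", ":one": 1, ":expected": 7},
    )
except ClientError as e:
    if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("Someone else changed the item – re-read and retry")
    else:
        raise
```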
dynamodb:LeadingKeys is the IAM condition key to use to limit an identity to only the items whose partition key matches its own ID.
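Hedged example of that condition key, written as a Python dict – the account, table, and Cognito variable are placeholders:

```python
import json

# Each Cognito identity may only touch items whose partition key equals its own id.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```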
Parameter Store is better than Lambda encrypted environment variables, but not as capable as Secrets Manager (which can rotate RDS credentials).
- Use a secure, scalable, hosted secrets management service (no servers to manage)
- Improve your security posture by separating your data from your code
- Store configuration data and secure strings in hierarchies and track versions
- Control and audit access at granular levels
- Configure change notifications and trigger automated actions
- Tag parameters individually, and then secure access from different levels, including operational, parameter, Amazon EC2 tag, or path levels
- Reference AWS Secrets Manager secrets by using Parameter Store parameters
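Minimal boto3 sketch of storing and reading a SecureString parameter – the parameter path and value are made up:

```python
import boto3

ssm = boto3.client("ssm")

# Write an encrypted parameter, then read it back decrypted.
ssm.put_parameter(
    Name="/myapp/prod/db-password",        # hypothetical hierarchy
    Value="s3cr3t",
    Type="SecureString",
    Overwrite=True,
)
value = ssm.get_parameter(Name="/myapp/prod/db-password", WithDecryption=True)
print(value["Parameter"]["Value"])
```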
ECS
- placement strategies:
- binpack – Place tasks based on the least available amount of CPU or memory. This minimizes the number of instances in use
- random – Place tasks randomly
- spread – Place tasks evenly based on the specified value. Accepted values are attribute key-value pairs, instanceId, or host
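boto3 sketch combining two of the strategies above – cluster and task definition names are made up:

```python
import boto3

ecs = boto3.client("ecs")

# Spread tasks across AZs first, then binpack on memory within each AZ.
ecs.run_task(
    cluster="my-cluster",                  # hypothetical
    taskDefinition="my-task:1",            # hypothetical
    count=4,
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
)
```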
X-Ray
A segment document conveys information about a segment to X-Ray. Segments are for the work in your app; subsegments are for calls to external systems. This is how you tell X-Ray what to capture for your application. Below are the optional subsegment fields:
- namespace – aws for AWS SDK calls; remote for other downstream calls
- http – http object with information about an outgoing HTTP call
- aws – aws object with information about the downstream AWS resource that your application called
- error, throttle, fault, and cause – error fields that indicate an error occurred and that include information about the exception that caused the error
- annotations – annotations object with key-value pairs that you want X-Ray to index for search
- metadata – metadata object with any additional data that you want to store in the segment
- subsegments – array of subsegment objects
- precursor_ids – array of subsegment IDs that identifies subsegments with the same parent that completed prior to this subsegment
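Small sketch with the X-Ray SDK for Python (aws_xray_sdk) showing an annotation (indexed) versus metadata (not indexed) on a subsegment – names are hypothetical, and inside Lambda the parent segment is created for you:

```python
from aws_xray_sdk.core import xray_recorder

# Wrap a downstream call in a subsegment.
xray_recorder.begin_segment("checkout-service")
subsegment = xray_recorder.begin_subsegment("inventory-lookup")
subsegment.put_annotation("order_id", "1234")          # indexed, searchable
subsegment.put_metadata("debug", {"cache": "miss"})    # stored, not indexed
xray_recorder.end_subsegment()
xray_recorder.end_segment()
```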
The X-Ray daemon listens on UDP port 2000, gathers raw segment data, and relays it to the X-Ray API.
Sampling controls how much you collect and can reduce the number of traces for high-volume and unimportant requests.
Beanstalk
If you are doing a deployment you will do Blue/Green, not Canary – that’s not in Beanstalk… This is important stuff to test people on.
KMS – it’s all about the headers (it seems)
SSE-C (customer-provided keys) needs the following headers:
- x-amz-server-side-encryption-customer-algorithm – specifies the encryption algorithm; the value must be “AES256”
- x-amz-server-side-encryption-customer-key – the 256-bit, base64-encoded encryption key for Amazon S3 to use to encrypt or decrypt your data
- x-amz-server-side-encryption-customer-key-MD5 – the base64-encoded 128-bit MD5 digest of the encryption key (per RFC 1321); Amazon S3 uses it as a message integrity check to ensure the key was transmitted without error
Plain SSE (SSE-S3 / SSE-KMS) only needs the single x-amz-server-side-encryption header
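Hedged boto3 sketch of SSE-C: you hand over the key and algorithm and boto3 sets the three customer-* headers (including the MD5) for you – bucket and object key are made up:

```python
import os
import boto3

s3 = boto3.client("s3")
key = os.urandom(32)   # your own 256-bit key (SSE-C); you must keep it safe

# boto3 base64-encodes the key and adds the MD5 header automatically.
s3.put_object(
    Bucket="my-bucket",                 # hypothetical bucket
    Key="secret.txt",
    Body=b"hello",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,
)
# The same key must be supplied again to read the object back.
s3.get_object(Bucket="my-bucket", Key="secret.txt",
              SSECustomerAlgorithm="AES256", SSECustomerKey=key)
```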
Code deploy
AppSpec hooks for a Lambda deployment
- BeforeAllowTraffic – Use to run tasks before traffic is shifted to the deployed Lambda function version.
- AfterAllowTraffic – Use to run tasks after all traffic is shifted to the deployed Lambda function version.
Deployment types:
- Canary – traffic is shifted in two increments
- All at once – all traffic is shifted at the same time
- Linear – traffic is shifted in equal increments, with an equal number of minutes between each increment
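Hedged boto3 sketch of kicking off a Lambda canary deployment with one of the built-in configs – the application, deployment group, and appspec content are made up:

```python
import boto3

cd = boto3.client("codedeploy")

# 10% of traffic for 5 minutes, then the remaining 90%.
cd.create_deployment(
    applicationName="my-lambda-app",             # hypothetical
    deploymentGroupName="my-deployment-group",   # hypothetical
    deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
    revision={
        "revisionType": "AppSpecContent",
        "appSpecContent": {"content": "..."},    # the appspec.yml body goes here
    },
)
```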
Lambda cannot use an in-place deployment type – new code is always published as a new version and traffic is shifted to it via an alias, so there is nothing to update “in place”.
The CodeDeploy agent is only needed for EC2/on-premises deployments (not Lambda or ECS).
Cloudwatch
is basically a metrics repository. A namespace is a container for CloudWatch metrics. There is no default namespace.
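boto3 sketch of publishing a custom metric into a namespace you choose – names are hypothetical:

```python
import boto3

cw = boto3.client("cloudwatch")

# Custom metrics always need an explicit namespace.
cw.put_metric_data(
    Namespace="MyApp/Checkout",                # hypothetical namespace
    MetricData=[{
        "MetricName": "OrdersPlaced",
        "Value": 1,
        "Unit": "Count",
        "Dimensions": [{"Name": "Environment", "Value": "prod"}],
    }],
)
```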
AWS Step Functions (States)
States can perform a variety of functions in your state machine:
- Task State – do some work in your state machine
- Choice State – make a choice between branches of execution
- Fail or Succeed State – stop execution with failure or success
- Pass State – simply pass its input to its output or inject some fixed data, without performing work
- Wait State – provide a delay for a certain amount of time or until a specified time/date
- Parallel State – begin parallel branches of execution
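Small Amazon States Language sketch wiring a few of these state types together, then creating the state machine with boto3 – all ARNs and names are made up:

```python
import json
import boto3

# A Task, then a Choice that ends in Succeed or Fail.
definition = {
    "StartAt": "DoWork",
    "States": {
        "DoWork": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:do-work",
            "Next": "CheckResult",
        },
        "CheckResult": {
            "Type": "Choice",
            "Choices": [{"Variable": "$.ok", "BooleanEquals": True, "Next": "Done"}],
            "Default": "Failed",
        },
        "Done": {"Type": "Succeed"},
        "Failed": {"Type": "Fail", "Error": "WorkFailed"},
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="demo-machine",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsRole",
)
```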
API GW
Usage plans :
- Define who can access an API stage and method, and how much and how fast they can access them. API keys identify clients, and usage is metered for each stage they access
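Hedged boto3 sketch of a usage plan with a throttle and quota, plus an API key attached to it – all IDs and names are made up:

```python
import boto3

apigw = boto3.client("apigateway")

# Usage plan tied to one API stage, with rate/burst throttling and a monthly quota.
plan = apigw.create_usage_plan(
    name="bronze",
    throttle={"rateLimit": 10, "burstLimit": 20},
    quota={"limit": 10000, "period": "MONTH"},
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],   # hypothetical
)

# API key identifies the client; attaching it to the plan meters their usage.
key = apigw.create_api_key(name="customer-1", enabled=True)
apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")
```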
AWS Cloud Development Kit (AWS CDK)
An open-source software development framework to model and provision your cloud application resources using familiar programming languages. Think of the CDK as a cloud infrastructure “compiler”: CDK tools make it easy to define your application infrastructure stack, while the CloudFormation service takes care of the safe and dependable provisioning of your stack.
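Minimal CDK app sketch in Python (assuming CDK v2) – the stack and bucket names are made up. `cdk synth` compiles it into a CloudFormation template, `cdk deploy` provisions it:

```python
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3


class NotesStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # One construct line becomes a full CloudFormation S3 bucket resource.
        s3.Bucket(self, "NotesBucket", versioned=True)


app = cdk.App()
NotesStack(app, "NotesStack")
app.synth()
```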