How to Get a Consistent Endpoint URL When Scaling Up Amazon MQ?

Sometimes you want the application baked into an EC2 AMI to connect to the same Amazon MQ broker endpoint, instead of updating the AMI every time a new Amazon MQ broker instance is created.

In that case you can use the DNS name of an Elastic Load Balancer (ELB) as the address the software on EC2 uses to reach the MQ brokers behind the ELB. More specifically, as you may have noticed, an MQ broker’s IP address does not change when the broker restarts: brokers are assigned Elastic IPs, so they retain their IPs across restarts. With that in mind, you can create an Elastic Load Balancer that forwards all incoming traffic to the brokers (targets) registered in the ELB’s target group by their Elastic IPs. Please take a look at this doc for more details on load balancer target groups.
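As a rough sketch of the wiring, the target group and listener could be created with boto3 along these lines (the names, IDs, ports, and IP addresses below are placeholders, not values from this post):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Target group that points at the brokers by IP address.
    tg = elbv2.create_target_group(
        Name="mq-brokers",                      # placeholder name
        Protocol="TCP",                         # broker traffic is plain TCP
        Port=61617,                             # e.g. the OpenWire + TLS port
        VpcId="vpc-0123456789abcdef0",          # placeholder VPC ID
        TargetType="ip",
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    # Register each broker's IP address as a target.
    elbv2.register_targets(
        TargetGroupArn=tg_arn,
        Targets=[{"Id": "10.0.1.10"}, {"Id": "10.0.2.10"}],   # placeholder IPs
    )

    # Forward everything arriving on the listener to the broker target group.
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/mq-nlb/0123456789abcdef",  # placeholder
        Protocol="TCP",
        Port=61617,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )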

This way, the load balancer’s DNS name serves as the static connection point for your EC2 instances. The architecture diagram below shows the big picture:

architecture diagram

To make this concrete, let me demonstrate it with the following steps:

Continue reading How to Get a Consistent Endpoint URL When Scaling Up Amazon MQ?

Have you compared global variables and /tmp when a Lambda function is reused?

When developing your own Lambda function, have you ever wondered whether a newly created Lambda container inherits global variables and files in /tmp from a running or terminated instance of the same function? Let me clarify how global variables and /tmp behave when a Lambda function is reused.

As long as an execution doesn’t fail and not too much time passes between invocations, the container that runs the Lambda function can be reused for other invocations. This means the runtime that runs the code stays active in memory across a number of executions. Therefore, if your code keeps information in memory and never cleans it up between invocations, memory usage can grow until it hits the limit. Normally this shouldn’t happen, but such a continuously growing memory footprint does occur when some global variable or library keeps accumulating data across executions. You can refer to this AWS documentation on container reuse.

If there is a global object that keeps growing in size, I would suggest moving such global variables, excluding SDK clients, into the handler. In general, defining expensive objects in the shared scope is a good idea for performance reasons, but it is not advisable to define anything there that could keep growing.
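As a minimal illustration in Python (my own sketch; the post’s walk-through below uses a Node.js example, but the idea is the same), compare what lives in the shared scope with what lives in the handler:

    import boto3

    # Shared scope: executed once per container and kept for its lifetime.
    # Good for expensive, fixed-size objects such as SDK clients.
    s3_client = boto3.client("s3")

    # Bad idea in the shared scope: this list survives between invocations,
    # so it keeps growing for as long as the container is reused.
    leaky_cache = []

    def lambda_handler(event, context):
        # Anything created here is discarded after the invocation ends.
        per_invocation_buffer = [0] * 1000

        leaky_cache.append(event)   # grows across invocations: a slow leak
        return {
            "cached_events": len(leaky_cache),
            "buffer_size": len(per_invocation_buffer),
        }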

Let me explain this with a Node.js Lambda function example:

Everything inside the exports.handler = () => {} block is the handler; everything outside that block is global. The first time the function runs, the whole script is executed, and the objects defined outside the handler are created and persisted for the lifetime of the container. On subsequent runs, only the handler is executed.

When the above function is tested multiple times, the following happens:

If we invoke this code continuously in Lambda, it looks like:

Continue reading Have you compared global variables and /tmp when a Lambda function is reused?

Where is the Deployment Package Located in the Lambda Container?

When developing your Lambda function, you might wonder how to access static configuration files in the deployment package from your Lambda function code. Let me clarify this for you as follows:

1. What is the best practice for accessing files in Lambda code?

When you upload a deployment package to Lambda, as you may have noticed, Lambda generally extracts it into the folder ‘/var/task/’ in the Lambda container. All of your dependencies and static configuration files also sit in this ‘root’ folder ‘/var/task/’. That being said, the static config files always live right next to your code under /var/task/, which is also the working directory. Therefore, taking Python as an example, reading the file via ‘config.json’ or ‘./config.json’ works and stays flexible.

Here is a simple Python code snippet used for checking this:
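A minimal sketch along those lines (my reconstruction, using only the standard library) just prints the working directory and its contents:

    import os

    def lambda_handler(event, context):
        # Show where the deployment package was extracted and what it contains.
        print("pwd:", os.getcwd())        # expected: /var/task
        print("ls .:", os.listdir("."))   # your code, config files, dependencies
        return {"cwd": os.getcwd(), "files": os.listdir(".")}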

Here is a screenshot showing its invocation result:

pwd and ls .

Continue reading Where is the Deployment Package Located in the Lambda Container?

How to Utilize Intrinsic Functions and the Outputs Section in a CloudFormation Stack?

When developing your CloudFormation templates, for the sake of conciseness and easy maintenance, you may wonder how to build a value from parameters and functions and reuse it in different parts of the stack, or even in another stack’s code. One motivation for referencing values from another stack could be, for example, that stack A launches all the VPCs and stack B launches all the EC2 instances within each VPC.

One way to do that is to use intrinsic functions together with the Outputs section of the CloudFormation template. Before creating the stack, you can compose a value partly from parameters that you pass at launch time, with the rest retrieved by intrinsic functions. Then, for example, you could declare that value in the optional Outputs section of the template and import it into other stacks (to create cross-stack references), or use it when updating the template.

Please note: Parameters are passed to the stack during creation, and the intrinsic functions can retrieve different values depending on the resource being created.

Let me explain in more depth. AWS CloudFormation provides several built-in functions that help you manage your stacks. You can use intrinsic functions in your templates to assign values to properties that are not available until runtime, but only in specific parts of a template: currently, you can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to conditionally create stack resources. For example, the intrinsic function Fn::Base64 returns the Base64 representation of an input string, and the Fn::GetAtt intrinsic function returns the value of an attribute from a resource in the template.

Here is the declaration of Fn::GetAtt:
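In JSON, and in the full and short YAML forms, it looks like this:

    JSON:         { "Fn::GetAtt" : [ "logicalNameOfResource", "attributeName" ] }

    YAML (full):  Fn::GetAtt: [ logicalNameOfResource, attributeName ]
    YAML (short): !GetAtt logicalNameOfResource.attributeName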

where:

the “logicalNameOfResource” indicates the logical name (also called logical ID) of the resource that contains the attribute that you want;

the “attributeName” indicates the name of the resource-specific attribute whose value you want. See the resource’s reference page for details about the attributes available for that resource type.
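Putting intrinsic functions and the Outputs section together, here is a rough sketch of the cross-stack pattern described above (the logical names and the export name are placeholders of mine, not values from this post):

    # Stack A: export the VPC ID it created.
    Outputs:
      VpcId:
        Description: ID of the VPC launched by this stack
        Value: !Ref MyVPC                      # or !GetAtt for other attributes
        Export:
          Name: network-stack-VpcId

    # Stack B: import the exported value where it is needed.
    Resources:
      AppSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Security group for EC2 instances in the shared VPC
          VpcId: !ImportValue network-stack-VpcId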

Continue reading How to Utilize Intrinsic Functions and the Outputs Section in a CloudFormation Stack?

How to Utilize CloudFormation Lambda-backed Custom Resources?

AWS CloudFormation is a service that takes care of provisioning and configuring AWS resources for you. You don’t need to individually create and configure AWS resources and figure out what depends on what; AWS CloudFormation handles all of that. And as you may have realized, CloudFormation also allows you to provision and configure stack resources through custom resources in a CloudFormation template (JSON or YAML format).

Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you changed the custom resource), or delete stacks. So in this blog I’ll explain how it works and how to use it, with a practical example like the one below:

Suppose you need to create an SNS subscription filter policy while provisioning SNS topics and subscriptions with CloudFormation. How do you get CloudFormation to handle this task?

When CloudFormation does not support a particular property of a resource even though it is available via the AWS API, we can use a custom resource to work around it. In our task, since the CloudFormation resource ‘AWS::SNS::Subscription’ does not currently support the filter policy, we can make use of a Lambda-backed custom resource in this scenario.
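As a rough sketch of that idea (the ‘SubscriptionArn’ and ‘FilterPolicy’ property names are my assumptions about how the custom resource would be declared in the template, and error handling is kept minimal), the Lambda function behind the custom resource could look like this:

    import json
    import boto3
    import cfnresponse   # helper module available to inline (ZipFile) Lambda code

    sns = boto3.client("sns")

    def lambda_handler(event, context):
        try:
            props = event["ResourceProperties"]
            if event["RequestType"] in ("Create", "Update"):
                # Apply the filter policy that the plain CloudFormation
                # resource does not expose.
                sns.set_subscription_attributes(
                    SubscriptionArn=props["SubscriptionArn"],
                    AttributeName="FilterPolicy",
                    AttributeValue=json.dumps(props["FilterPolicy"]),
                )
            # Nothing to clean up on Delete in this simple sketch.
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
        except Exception as err:
            cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(err)})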

Continue reading How to Utilize CloudFormation Lambda-backed Custom Resources?

How to Filter S3 Event Notifications Sent to SNS?

Say you have several S3 bucket folders named ‘a’, ‘b’, and ‘c’. When objects are created in the S3 bucket, event notifications get pushed to an SNS topic, and the messages finally get delivered to the SQS queues that subscribe to this topic.

As you may have noticed, SNS supports a subscription filter policy, in JSON format, among the properties of a subscription. You might be looking for a policy that matches the S3 folder prefixes, so that each queue gets only a subset of the messages (only those whose objects were created under the prefixes mentioned above); if objects are created under a different prefix, the queue shouldn’t receive those messages. However, without some deep diving, can you really implement this intuitive solution? Let’s figure out whether it’s possible.

When a message is published to the topic, Amazon SNS attempts to match the incoming message’s attributes against the subscription attribute that defines a filter policy. If they match, Amazon SNS delivers the message to the corresponding subscriber. However, unfortunately, S3 event notifications currently do not carry any ‘Message Attribute’ fields.

Restriction of Subscription policy

With that being said, when S3 sends event notifications to SNS after an object is created, the SNS topic can’t apply its subscription filter policy to the incoming S3 event notifications, as they don’t have any ‘Message Attribute’. Please click here for more details on SNS ‘Message Attributes’.
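For reference, this is how a filter policy is normally attached to a subscription (a minimal sketch; the subscription ARN and the ‘prefix’ attribute name are placeholders of mine). The mechanism matches only on message attributes, which is exactly what S3 notifications lack:

    import json
    import boto3

    sns = boto3.client("sns")

    # This policy only matches messages that carry a message attribute
    # named "prefix"; S3 event notifications set no message attributes,
    # so a policy like this would never match them.
    filter_policy = {"prefix": ["a/", "b/", "c/"]}

    sns.set_subscription_attributes(
        SubscriptionArn="arn:aws:sns:us-east-1:123456789012:my-topic:1234abcd-example",  # placeholder
        AttributeName="FilterPolicy",
        AttributeValue=json.dumps(filter_policy),
    )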

Continue reading How to Filter S3 Event Notifications Sent to SNS?

How does the AWS SQS visibility timer work?

As you may have realized from the doc here, when a consumer receives and processes a message from a queue, the message remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers (including the initial consumer) from receiving and processing the message. The default visibility timeout for a message is 30 seconds, the minimum is 0 seconds, and the maximum is 12 hours. The mechanism is illustrated in the picture below:

Visibility Timeout
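As a minimal sketch of the mechanism (the queue URL is a placeholder), this is roughly how a consumer interacts with the visibility timeout using boto3:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

    # Receiving a message makes it "in flight": it stays in the queue but is
    # hidden from consumers for the visibility timeout (overridden here to 60s).
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        VisibilityTimeout=60,
    )

    for msg in resp.get("Messages", []):
        # ... process the message ...

        # Need more time? Extend the visibility timeout for this message.
        sqs.change_message_visibility(
            QueueUrl=queue_url,
            ReceiptHandle=msg["ReceiptHandle"],
            VisibilityTimeout=300,
        )

        # Done: delete it, otherwise it becomes visible again once the
        # visibility timeout expires.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])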

But there is a follow-up question you may ask: does the visibility timer (and the in-flight status) apply to the consumer that initially read the message?

Continue reading How does the AWS SQS visibility timer work?

How to Receive a Monitoring Alert When the Customized Threshold in SNS-SMS is Reached?

Issue:

When you are using SNS to send SMS messages, you may find you have just approached your monthly spend limit, and this can potentially impact your business. What’s more, although the AWS documentation claims that “Typically, AWS Support processes your case within 2 business days. Depending on the spend limit you request and the complexity of your case, AWS Support might require an additional 3 – 5 days to ensure that your request can be processed.“, this limit increase process can still take longer than expected, during which your production system is impaired and can’t send messages.

Therefore, to give yourself some buffer, you can set up a monitoring alert, e.g. one that fires when 75% of the monthly limit is reached, so that you have enough time to open a ticket and wait for AWS to handle it.

 

Solution:

You can set up an alarm in AWS CloudWatch that sends an alert message once the threshold is reached. To demonstrate this, I’d like to provide a step-by-step solution:

1. Sign in to the AWS Management Console and open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.

2. Choose ‘Metrics’.

Cloudwatch ‘Metrics’

3. Select ‘SNS’.

4. Select ‘Metrics with no dimensions’.

Metrics with no dimensions

5. Check ‘SMSMonthToDateSpentUSD’ and click the ‘Graphed metrics’ tab.

Graphed metrics

6. Click the bell symbol on the right side of the row to create the alarm for this metric.

7. Give an alarm name and description.

Create new alarm

8. At ‘Whenever: SMSMonthToDateSpentUSD’, choose ‘>=’ and type ‘0.75’ as the threshold, since the default spend limit is $1.

9. Treat missing data as ‘missing’.

10. At ‘Whenever this alarm:’ choose ‘State is Alarm’.

11. At ‘Send notification to:’ choose an SNS topic whose subscribed endpoint, such as an email address, can receive this alarm.

Alarm parameters
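If you prefer to create the same alarm programmatically, here is a minimal boto3 sketch (the alarm name and topic ARN are placeholders, and the 0.75 threshold assumes the default $1 spend limit):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="sms-spend-75-percent",                 # placeholder name
        AlarmDescription="Fires at 75% of the monthly SMS spend limit",
        Namespace="AWS/SNS",
        MetricName="SMSMonthToDateSpentUSD",
        Statistic="Maximum",
        Period=300,                                       # evaluate every 5 minutes
        EvaluationPeriods=1,
        Threshold=0.75,                                   # 75% of the default $1 limit
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        TreatMissingData="missing",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:my-alert-topic"],  # placeholder
    )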

During the test, I arbitrarily typed ‘0.02’ as the threshold value, and after I had sent 14 SMS text messages I received the email alert shown below. Note: after receiving this alert, I could still successfully publish text messages. Please also feel free to check the docs [1]/[2] for more details on the ‘SMSMonthToDateSpentUSD’ metric and CloudWatch alarms.

You are receiving this email below because your Amazon CloudWatch Alarm “test” in the US East (N. Virginia) region has entered the ALARM state, because “Threshold Crossed: 1 datapoint [0.0258 (29/11/18 15:23:00)] was greater than or equal to the threshold (0.02).” at “Thursday 29 November, 2018 15:28:07 UTC”.

Alert email

I hope this addresses your concern. Please also feel free to check out the other blog posts I wrote for your reference.

How to Track and Analyze Amazon SES Sending Activity?

What is Amazon SES sending activity?

Sometimes it’s worth tracking your Simple Email Service (SES) email sending activity by monitoring event publishing. Amazon SES provides methods to monitor your sending activity. You can implement these methods to keep track of important measures, such as your account’s bounce, complaint, and reject rates. Excessively high bounce and complaint rates may jeopardize your ability to send emails using Amazon SES.

Additionally, you can also use these methods to measure the rates at which your customers engage with the emails you send. For example, these sending metrics can help you identify your overall open and clickthrough rates.

The metrics that you can measure using Amazon SES are referred to as email sending events. The email sending events that you can monitor are:

  • Sends – The call to Amazon SES was successful and Amazon SES will attempt to deliver the email.
  • Rejects – Amazon SES accepted the email, determined that it contained a virus, and rejected it. Amazon SES didn’t attempt to deliver the email to the recipient’s mail server.
  • Bounces – The recipient’s mail server permanently rejected the email. This event corresponds to hard bounces. Soft bounces are only included when Amazon SES fails to deliver the email after retrying for a period of time.
  • Complaints – The email was successfully delivered to the recipient. The recipient marked the email as spam.
  • Deliveries – Amazon SES successfully delivered the email to the recipient’s mail server.
  • Opens – The recipient received the message and opened it in his or her email client.
  • Clicks – The recipient clicked one or more links contained in the email.
  • Rendering Failures – The email was not sent because of a template rendering issue. This event type only occurs when you send email using the SendTemplatedEmail or SendBulkTemplatedEmail API operations. This event type can occur when template data is missing, or when there is a mismatch between template parameters and data.

How to track SES sending activity?

To track and process these sending events, you can configure SES (via a configuration set) to publish them to three types of destinations: Kinesis Data Firehose, CloudWatch, and an SNS topic.

3 types of destination of Configuration set in SES
Different sending activities and SNS destination topic
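As a rough sketch of wiring up the SNS destination programmatically (the configuration set name and topic ARN are placeholders, and the configuration set itself is assumed to exist already):

    import boto3

    ses = boto3.client("ses")

    ses.create_configuration_set_event_destination(
        ConfigurationSetName="email-tracking",            # placeholder name
        EventDestination={
            "Name": "sns-destination",
            "Enabled": True,
            "MatchingEventTypes": [
                "send", "reject", "bounce", "complaint",
                "delivery", "open", "click", "renderingFailure",
            ],
            "SNSDestination": {
                "TopicARN": "arn:aws:sns:us-east-1:123456789012:ses-events",  # placeholder
            },
        },
    )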

Since we need to store these events in a relational database, and SES can’t write them into one directly, we should set up an SNS topic as the destination and relay the events into SQS, which acts as a scalable buffer; the SQS queue then triggers a Lambda function that consumes, processes, and stores the events in the relational database (RDS).

Note: as you may have noticed, the configuration set in SES can send email sending events to the SNS topic, and from SNS you can fan out to endpoints of different protocols, such as SQS, Lambda, etc. Since your business may need to send emails to many recipients (thousands upon thousands of users), you should take advantage of an SQS queue as a buffer rather than publishing the event messages directly to a Lambda function.

Because SQS can actively trigger a Lambda function once messages arrive in the queue, you should have SQS trigger the Lambda function; once invoked, the function can insert the event messages into RDS MySQL. The final workflow looks like this diagram:

Workflow of buffering and storing sending activity events
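Here is a minimal sketch of the consuming side (the database write is only stubbed out, and the parsing assumes the SNS subscription does not use raw message delivery, so each SQS body is an SNS envelope):

    import json

    def lambda_handler(event, context):
        for record in event["Records"]:                      # one entry per SQS message
            sns_envelope = json.loads(record["body"])        # SNS notification wrapper
            ses_event = json.loads(sns_envelope["Message"])  # the actual SES event

            event_type = ses_event.get("eventType")          # e.g. "Delivery", "Bounce"
            message_id = ses_event["mail"]["messageId"]

            # Here you would INSERT (event_type, message_id, ...) into RDS MySQL,
            # e.g. with a MySQL client library packaged with the function.
            print(f"Storing SES event {event_type} for message {message_id}")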

Continue reading How to Track and Analyze Amazon SES Sending Activity?

How to Prevent Your Users from Using Emoji as Usernames in AWS Cognito?

Issue:

How do you stop your app users from creating usernames with emoji in AWS Cognito? Given that you do not control the sign-in/sign-up webpage, is it possible to use the serverless service Lambda as a trigger function to check new usernames for emoji?

Cognito Emoji Username

Analysis:

Cognito usernames use the UTF-8 character set, and one of the features of that character set is emoji. As a result, it appears as though emoji are allowed in Cognito usernames. If you like, you can follow the document here to replicate the issue I’m talking about.

Although using emoji in usernames does appear to be supported by AWS Cognito for this reason, you should definitely limit usernames to normal characters, as it could cause an issue for your users if they no longer have access to a specific emoji or to a keyboard that can type it.

Solution:
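A minimal sketch of the idea (my own illustration of a pre sign-up Lambda trigger; the allowed character set is an assumption, so adjust it to your own username policy):

    import re

    # Whitelist of characters a username may contain (an assumption for this sketch).
    ALLOWED_USERNAME = re.compile(r"^[A-Za-z0-9._@-]+$")

    def lambda_handler(event, context):
        username = event["userName"]
        if not ALLOWED_USERNAME.match(username):
            # Raising an error from a pre sign-up trigger makes Cognito
            # reject the sign-up request with this message.
            raise Exception("Username contains unsupported characters.")
        return event   # the trigger must return the event object back to Cognito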

Continue reading How to Prevent Your Users from Using Emoji as Usernames in AWS Cognito?