Google Guava Cache Summary

Caches are heavily used in many common scenarios, such as multi-threaded, high-concurrency applications. Different cache implementations fit different use cases: Redis and Memcached for distributed caching, Ehcache and Guava Cache for local caching. You may have learned about Guava Cache before, so let’s just summarize its most important characteristics in this blog.

Guava Cache is similar to ConcurrentMap, but it is not exactly the same. The most basic difference is that a ConcurrentMap always keeps all added elements until they are explicitly removed. In contrast, in order to limit memory usage, Guava Cache is usually configured to evict entries “automatically”.

Use Cases:

In general, Guava Cache is suitable when:

1. You are willing to consume some memory space to increase speed.

2. You expect that certain keys will be queried more than once.

3. The total amount of data stored in the cache does not exceed the memory capacity.

Note: Guava Cache is a local cache for a single application runtime. It does not persist data to a file or an external server. If this does not meet your needs, try a tool like Memcached. If your scenario matches all of the above, Guava Cache is for you.

Build Cache:

Cache Interface

The interface Cache represents a block of cache, which has the following methods:
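In simplified form (abridged from com.google.common.cache.Cache; a few methods are omitted), the interface looks roughly like this:

```java
public interface Cache<K, V> {
    V getIfPresent(Object key);                         // lookup only, never loads
    V get(K key, Callable<? extends V> loader)
            throws ExecutionException;                  // lookup, loading on a miss
    ImmutableMap<K, V> getAllPresent(Iterable<?> keys); // batch lookup
    void put(K key, V value);
    void putAll(Map<? extends K, ? extends V> m);
    void invalidate(Object key);
    void invalidateAll(Iterable<?> keys);
    void invalidateAll();
    long size();
    CacheStats stats();
    ConcurrentMap<K, V> asMap();
    void cleanUp();
}
```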

You can build a cache object through the CacheBuilder class. The CacheBuilder class uses the builder design pattern: each method returns the CacheBuilder itself until the build method is called. Code for building a cache object follows.
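Here is a minimal sketch of that code, using the key and value described below:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class BuildCacheDemo {
    public static void main(String[] args) {
        // Build a cache with default settings and store one record.
        Cache<String, String> cache = CacheBuilder.newBuilder().build();
        cache.put("14LP", "Do you like them?");
        System.out.println(cache.getIfPresent("14LP")); // prints: Do you like them?
    }
}
```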

The above code creates a Cache object through CacheBuilder.newBuilder().build(), and stores a record with “14LP” as the key and “Do you like them?” as the value. You can see that Cache is very similar to Map in the JDK, but Guava Cache provides far more powerful features than Map.

Size-based Eviction Mechanism

Guava Cache lets you specify the maximum number of records the cache can store when building the cache object. When the number of records in the Cache has reached the maximum and the put method is called to add another object, Guava first selects one of the currently cached records to delete, freeing space for the new object to be stored in the Cache.
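A minimal sketch of this behavior (the keys and values here are placeholders of my own):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class MaximumSizeDemo {
    public static void main(String[] args) {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(2)      // at most two records may live in the cache
                .build();
        cache.put("key1", "value1");
        cache.put("key2", "value2");
        cache.put("key3", "value3"); // exceeds the maximum, so one record is evicted
        System.out.println(cache.getIfPresent("key1")); // null: key1 was evicted
        System.out.println(cache.getIfPresent("key3")); // value3
    }
}
```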

In the above code, the CacheBuilder class’s maximumSize method specifies that the Cache can store at most two records, and then the Cache’s put method is called to add three records. Inserting the third record causes the first record to be evicted.

Time-based Eviction Mechanism

1. expireAfterAccess( )

2. expireAfterWrite( )

When building a Cache object, you can specify an expiration time for the objects in the cache through the expireAfterAccess( ) and expireAfterWrite( ) methods of the CacheBuilder class. Expired objects are “automatically” deleted by the cache. The expireAfterWrite( ) method specifies how long after being written to the cache an object expires, and expireAfterAccess( ) specifies how long after its last access an object expires.
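A minimal sketch with expireAfterWrite( ) (the keys and timings are placeholders of my own):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class ExpireAfterWriteDemo {
    public static void main(String[] args) throws InterruptedException {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(3, TimeUnit.SECONDS) // expire 3s after being written
                .build();
        cache.put("key1", "value1");
        for (int i = 0; i < 5; i++) {
            System.out.println(cache.getIfPresent("key1")); // value1 for ~3s, then null
            Thread.sleep(1000);
        }
    }
}
```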

The above code uses the expireAfterWrite( ) method of the CacheBuilder to specify that records expire 3 seconds after being written. After storing a record in the Cache object, the record is read every second. The record can be obtained from the Cache in the first three seconds; after more than three seconds, it is “automatically” deleted from the Cache.

The code below demonstrates expireAfterAccess( ):
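(A minimal sketch of that demo; the exact keys and timings are placeholders of my own.)

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class ExpireAfterAccessDemo {
    public static void main(String[] args) throws InterruptedException {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .expireAfterAccess(3, TimeUnit.SECONDS) // expire 3s after last access
                .build();
        cache.put("key1", "value1");
        int sleepSeconds = 1;
        while (cache.getIfPresent("key1") != null) {    // each read resets the timer
            System.out.println("key1 still cached; sleeping " + sleepSeconds + "s");
            TimeUnit.SECONDS.sleep(sleepSeconds);
            sleepSeconds++;                             // next gap is one second longer
        }
        System.out.println("key1 has expired");
    }
}
```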

The CacheBuilder’s expireAfterAccess( ) method specifies that a record stored in the Cache expires if it is not accessed for more than 3 seconds. The while loop accesses key1 after each sleep, and each sleep is one second longer than the previous one. Once key1 goes unread for more than 3 seconds, the record is “automatically” deleted by the Cache.

You can also use the expireAfterAccess( ) and expireAfterWrite( ) methods at the same time to specify the expiration time. In this case, as long as the object meets one of the conditions, it will be “automatically” expired and deleted.

Refreshing is not quite the same as eviction. For more details about refreshAfterWrite( ), please check out this official doc.

Reference-based Eviction Mechanism

Guava Cache can set the cache to allow garbage collection by using:

1. weakly referenced keys

2. weakly referenced values

3. soft referenced values

CacheBuilder.weakKeys(): Stores keys using weak references. When a key has no other (strong or soft) references, the cache entry can be garbage collected. Because garbage collection relies only on identity (==), caches that use weak reference keys use == instead of equals to compare keys.

CacheBuilder.weakValues(): Stores values using weak references. When there are no other (strong or soft) references to a value, the cache entry can be garbage collected. Because garbage collection relies only on identity (==), caches that use weak reference values use == instead of equals to compare values.

CacheBuilder.softValues(): Stores values using soft references. Soft references are reclaimed in globally least-recently-used order in response to memory demand. Given the performance impact of using soft references, it is recommended to use a predictable maximum cache size instead. Caches that use soft reference values also use == instead of equals to compare values.

For example, you can specify that the Cache holds only weak references to the cached values through the weakValues( ) method, as below. This way, when no strong references point to a value any longer, the value object can be reclaimed by the garbage collector.
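A minimal sketch (note that System.gc( ) is only a hint to the JVM, so the timing of collection is not guaranteed):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class WeakValuesDemo {
    public static void main(String[] args) {
        Cache<String, Object> cache = CacheBuilder.newBuilder()
                .weakValues()           // hold only weak references to values
                .build();
        Object value = new Object();
        cache.put("key1", value);
        value = new Object();           // drop the only strong reference to the old value
        System.gc();                    // suggest a garbage collection
        System.out.println(cache.getIfPresent("key1")); // prints null once collected
    }
}
```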

The print result of the above code is null. When building the Cache, the weakValues( ) method specifies that the Cache holds only a weak reference to the record’s value. When a new object is assigned to the value reference, there is no longer a strong reference to the original object. After System.gc() triggers garbage collection, the original object is cleared.

Explicit Eviction

At any time, you can explicitly clear the cache entry instead of waiting for it to be recycled:

1. Individual Clear: void invalidate(Object key);

2. Batch Clear: void invalidateAll(Iterable<?> keys);

3. All Clear: void invalidateAll();

You can call the Cache’s invalidate( ) or invalidateAll( ) methods to explicitly delete records from the Cache. The invalidate( ) method deletes one record at a time, taking the key of the record to delete as its parameter. The invalidateAll( ) method deletes records in batches: called with no arguments, it clears all records in the Cache; it can also receive an Iterable parameter containing the keys of all the records to be deleted. The code below gives an example of this.
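A minimal sketch of batch invalidation (keys and values are placeholders of my own):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.ArrayList;
import java.util.List;

public class InvalidateDemo {
    public static void main(String[] args) {
        Cache<String, String> cache = CacheBuilder.newBuilder().build();
        cache.put("key1", "value1");
        cache.put("key2", "value2");
        cache.put("key3", "value3");

        List<String> keysToDelete = new ArrayList<>();
        keysToDelete.add("key1");
        keysToDelete.add("key2");
        cache.invalidateAll(keysToDelete);              // batch-delete key1 and key2

        System.out.println(cache.getIfPresent("key1")); // null
        System.out.println(cache.getIfPresent("key2")); // null
        System.out.println(cache.getIfPresent("key3")); // value3
    }
}
```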

The code constructs a list holding the keys of the records to be deleted, then calls the invalidateAll method to delete the records for key1 and key2 in a batch, leaving only the record for key3.

RemovalListener

You can add a removal listener to the Cache object so that you are notified when a record is deleted.
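A minimal sketch of registering a removal listener:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class RemovalListenerDemo {
    public static void main(String[] args) {
        RemovalListener<String, String> listener = new RemovalListener<String, String>() {
            @Override
            public void onRemoval(RemovalNotification<String, String> notification) {
                System.out.println(notification.getKey() + " was removed, cause: "
                        + notification.getCause());
            }
        };
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .removalListener(listener)
                .build();
        cache.put("key1", "value1");
        cache.invalidate("key1"); // the listener reports cause EXPLICIT
    }
}
```

Note that removal listeners are invoked synchronously during cache maintenance by default, so keep them lightweight.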

The removalListener method registers a removal listener on the Cache, so that when a record is deleted from the Cache, the listener receives the event.

Load Cache

The Cache get( ) method takes two parameters: the first is the key of the record to fetch from the Cache, and the second is a Callable object. When a record for the key already exists in the cache, the get( ) method returns it directly. If the cache contains no record for the key, Guava executes the call method of the Callable object; the return value of the call method is stored in the cache as the value for the key and returned by the get( ) method. The following is a multithreaded example:
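Below is a minimal sketch of such a test (the thread structure and sleep times are placeholders of my own):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.ExecutionException;

public class ConcurrentLoadDemo {
    public static void main(String[] args) {
        Cache<String, String> cache = CacheBuilder.newBuilder().build();

        Runnable task1 = () -> {
            try {
                String value = cache.get("key", () -> {
                    System.out.println("load1");   // simulate loading from external storage
                    Thread.sleep(1000);
                    return "value loaded by thread1";
                });
                System.out.println("thread1 got: " + value);
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        };

        Runnable task2 = () -> {
            try {
                String value = cache.get("key", () -> {
                    System.out.println("load2");   // not printed when thread1's loader wins:
                    Thread.sleep(1000);            // only one loader runs per key
                    return "value loaded by thread2";
                });
                System.out.println("thread2 got: " + value);
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        };

        new Thread(task1).start();
        new Thread(task2).start();
    }
}
```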

In this code, two threads share the same Cache object and simultaneously call the get method for the same key. Since no record exists for the key yet, both threads block at the get method. Thread.sleep(1000) is called inside the call method to simulate loading data from external storage.

Although the two threads call the get method at the same time, only one Callable is executed (load2 is never printed). Guava guarantees that when multiple threads access the same key in the Cache simultaneously and no record exists for that key, only one thread executes the loading task defined by the Callable parameter and saves the result to the cache. Once the data is loaded, the get method in every thread returns the value for the key.

Cache Statistics

Statistics such as the Cache’s hit rate and data loading time can be collected. When building a Cache object, you can enable statistics with the CacheBuilder’s recordStats method. Once this switch is enabled, the Cache automatically collects statistics on cache operations, and the Cache’s stats method returns them. Note that recording statistics adds overhead, so it is not recommended for production environments.
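A minimal sketch of enabling and reading statistics (keys are placeholders of my own):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheStats;

public class StatsDemo {
    public static void main(String[] args) {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .recordStats()                  // turn statistics collection on
                .build();
        cache.put("key1", "value1");
        cache.getIfPresent("key1");             // one hit
        cache.getIfPresent("key2");             // one miss
        CacheStats stats = cache.stats();
        System.out.println("hitCount: " + stats.hitCount());   // 1
        System.out.println("missCount: " + stats.missCount()); // 1
        System.out.println("hitRate: " + stats.hitRate());     // 0.5
    }
}
```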


These statistics are critical for tuning cache settings and should be closely monitored in performance-critical applications.

LoadingCache Interface

LoadingCache is a sub-interface of Cache. When a record with the specified key is requested from a LoadingCache and the record does not exist, the LoadingCache automatically loads the data into the cache. The definition of the LoadingCache interface is as follows:
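In simplified form (abridged from com.google.common.cache.LoadingCache; a few methods are omitted):

```java
public interface LoadingCache<K, V> extends Cache<K, V> {
    V get(K key) throws ExecutionException;  // loads via the CacheLoader on a miss
    V getUnchecked(K key);                   // like get, without checked exceptions
    ImmutableMap<K, V> getAll(Iterable<? extends K> keys) throws ExecutionException;
    void refresh(K key);                     // reloads the value for the given key
    ConcurrentMap<K, V> asMap();
}
```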

Similar to constructing a Cache object, a LoadingCache object is also built by the CacheBuilder. The difference is that when calling the CacheBuilder build method, you must pass a CacheLoader parameter and implement its load method. When the get method of the LoadingCache is called and there is no record for the key in the cache, the load method of the CacheLoader is automatically called to load data from external storage; its return value is stored in the LoadingCache as the value for the key and returned from the get method.
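A minimal sketch of building and using a LoadingCache (keys are placeholders of my own):

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.ExecutionException;

public class LoadingCacheDemo {
    public static void main(String[] args) throws ExecutionException {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        System.out.println("loading " + key + " from external storage");
                        return "value for " + key;
                    }
                });
        System.out.println(cache.get("key1")); // first access triggers load( )
        System.out.println(cache.get("key1")); // served from cache, no load( )
    }
}
```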


Get/Put Comparison

get( ): either returns the value that is already cached, or uses the CacheLoader to atomically load a new value into the cache;

getUnchecked( ): like get( ), but without a checked exception to handle; use it only when your CacheLoader does not declare any checked exceptions, and use get( ) when it does;

getAllPresent( ): performs a batch lookup, returning only the keys that are already cached;

put( ): explicitly inserts a value into the cache; the Cache.asMap() view can also modify values, but such modifications are not atomic with respect to cache loading;

getIfPresent( ): treats the Guava Cache simply as a Map replacement and never executes the load method.
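To make the differences concrete, here is a small sketch contrasting these calls (key names are placeholders of my own):

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.Arrays;
import java.util.concurrent.ExecutionException;

public class GetPutComparisonDemo {
    public static void main(String[] args) throws ExecutionException {
        CacheLoader<String, String> loader = CacheLoader.from(key -> "loaded-" + key);
        LoadingCache<String, String> cache = CacheBuilder.newBuilder().build(loader);

        System.out.println(cache.getIfPresent("a")); // null: never triggers the loader
        System.out.println(cache.get("a"));          // "loaded-a": loader runs on the miss
        System.out.println(cache.getUnchecked("b")); // no checked exception to handle
        cache.put("c", "manual");                    // explicit insert, bypasses the loader
        System.out.println(cache.getAllPresent(Arrays.asList("a", "b", "c")));
    }
}
```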

Summary

After all, Guava Cache is a local, lightweight cache, suitable for caching a small amount of data. If you need to cache tens of millions of records, set a different lifetime for each key, and achieve very high performance, Guava Cache is not a suitable choice.

FAQ

Q1: Does Google Guava Cache evict records in cache automatically?

A: Not on its own. The code below demonstrates that Guava Cache does not expire records exactly on time; expiration depends on read/write accesses. Guava does not launch extra threads to monitor expired entries; cache maintenance is performed during reads and updates.
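Below is a minimal sketch demonstrating this lazy cleanup (the timings are placeholders of my own):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class LazyExpirationDemo {
    public static void main(String[] args) throws InterruptedException {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(1, TimeUnit.SECONDS)
                .build();
        cache.put("key1", "value1");
        Thread.sleep(2000);
        // The entry is logically expired, but no background thread has removed it,
        // so size() may still report 1 here.
        System.out.println("size before access: " + cache.size());
        // This read observes the expiration and triggers cleanup of the entry.
        System.out.println("getIfPresent: " + cache.getIfPresent("key1")); // null
        System.out.println("size after access: " + cache.size());          // 0
    }
}
```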

 

For other blogs, please click here.

How to Get a Consistent Endpoint URL When Scaling Up Amazon MQ ?

Sometimes you want an application running on an EC2 AMI to connect to the same Amazon MQ broker endpoint, instead of updating the EC2 instance AMI each time a new Amazon MQ broker instance is created.

In that use case, you can use the DNS name of an Elastic Load Balancer (ELB) to let software on EC2 access MQ brokers that sit behind the ELB. More specifically, as you may have noticed, an MQ broker’s IP address does not change when the broker restarts: brokers are assigned Elastic IPs, so they retain their IPs on restart. With that being said, you should create an Elastic Load Balancer (ELB) that forwards all incoming traffic to the brokers (targets) registered in the ELB’s target group by their Elastic IPs. Please take a look at this doc for more details on load balancer target groups.

In this way, you can use the DNS name of the Load Balancer as the static connection point from your EC2 instances. The architecture diagram below shows the big picture:

architecture diagram

To better assist you, I’d like to demonstrate it for you by the following steps:

Continue reading How to Get a Consistent Endpoint URL When Scaling Up Amazon MQ ?

Have you compared global variables and /tmp when Lambda function reused?

When developing your own Lambda function, have you ever wondered whether a newborn Lambda container inherits global variables and files located in /tmp from a running or dead instance of the same function? Let me help clarify the global variable and /tmp behavior when a Lambda function is reused.

As long as the execution doesn’t fail and not too much time passes between executions, the container that runs the Lambda function can be reused for other invocations. This means the runtime that runs the code stays active in memory across a number of executions. Therefore, if your code saves information in memory and does not clean up on each invocation, it can keep allocating memory until it reaches the limit. Normally this shouldn’t happen, but such a continuously increasing memory problem occurs when some global variable or library keeps accumulating information in memory across executions. You can refer to this AWS documentation for container reuse.

If some global object keeps growing in size, I would suggest moving the global variables, excluding clients, into the handler. In general, defining expensive variables in the shared scope is a good idea for performance reasons, but it is not advisable to define anything there that could continue to grow.

Let me explain this with a Node.js Lambda function example.

Everything inside the exports.handler = () => {} block is the handler; everything outside that block is global. The first time the function runs, the whole script is executed: the objects outside the handler are created and persist for the lifetime of the container. On subsequent runs, only the handler is executed.

When the above function is tested multiple times, the following happens:

If we invoke this code continuously in Lambda, it looks like:

Continue reading Have you compared global variables and /tmp when Lambda function reused?

Where is the Deployment Package Located in Lambda Container?

When developing your Lambda function, you might wonder how to access static configuration files in the deployment package from your Lambda function code. Let me clarify this for you as follows:

1. What is best practice for accessing files in Lambda code?

When you upload the deployment package to Lambda, as you may have noticed, Lambda generally places the deployment package in the folder ‘/var/task/’ inside the Lambda container. All dependencies and static configuration files also live in this ‘root’ folder ‘/var/task/’. That being said, the static config files always sit beside your code under /var/task/. Therefore, taking Python as an example, read(“config.json”) and read(“./config.json”) both work.

Here is the simple Python code snippet being used for checking:

Here is a screenshot showing the invocation result:

pwd and ls .

Continue reading Where is the Deployment Package Located in Lambda Container?

How to Utilize Intrinsic Functions And Outputs Section in CloudFormation Stack?

When developing your CloudFormation templates, for the sake of conciseness and easy maintenance, you may wonder how to build a variable out of parameters and functions so that the same value can be used in different parts of the stack, or in another stack’s code. One motivation for referencing values from another stack: for example, you want stack A to launch all VPCs and stack B to launch all EC2 instances within each VPC.

One way to do that is to use intrinsic functions and the Outputs section of the CloudFormation template. Before creating the stack, you can compose a variable from parameters that you pass at launch time, with the rest of the variable retrieved by intrinsic functions. Then, for example, you can declare the output variable in the optional Outputs section of the CloudFormation template, and import this value into other stacks (to create cross-stack references) or use it when updating the template.

Please note: Parameters are passed to the stack during creation, and the intrinsic functions can retrieve different values depending on the resource being created.

Let me explain in more depth. AWS CloudFormation provides several built-in functions that help you manage your stacks. You can use intrinsic functions in your templates to assign values to properties that are not available until runtime, but only in specific parts of a template: currently, resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to conditionally create stack resources. For example, the intrinsic function Fn::Base64 returns the Base64 representation of an input string, and the Fn::GetAtt intrinsic function returns the value of an attribute from a resource in the template.

Here is the declaration of Fn::GetAtt:
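In JSON form, the documented declaration is:

```json
{ "Fn::GetAtt" : [ "logicalNameOfResource", "attributeName" ] }
```

In YAML you can also use the short form !GetAtt logicalNameOfResource.attributeName.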

where:

the “logicalNameOfResource” indicates the logical name (also called logical ID) of the resource that contains the attribute that you want;

the “attributeName” indicates the name of the resource-specific attribute whose value you want. See the resource’s reference page for details about the attributes available for that resource type.

Continue reading How to Utilize Intrinsic Functions And Outputs Section in CloudFormation Stack?

How to Utilize CloudFormation Lambda backed Custom Resources ?

AWS CloudFormation is a service that takes care of provisioning and configuring AWS resources for you. You don’t need to individually create and configure AWS resources and figure out what’s dependent on what; AWS CloudFormation handles all of that. And you may have realized, CloudFormation allows you to provision and configure your stack resources by utilizing custom resources in a CloudFormation template (JSON or YAML format).

Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you changed the custom resource), or delete stacks. So in this blog, I’ll explain how it works and how to use it with a practical example like below:

Suppose you need to create an SNS subscription filter policy while provisioning SNS topics and subscriptions with CloudFormation. How can CloudFormation handle this task?

When CloudFormation does not support a particular property of a resource even though it is available via the AWS API, we can use custom resources to work around it. In our task, since the CloudFormation resource ‘AWS::SNS::’ does not currently support the Filter Policy, we can make use of a Lambda-backed custom resource in this scenario.

Continue reading How to Utilize CloudFormation Lambda backed Custom Resources ?

How to Filter S3 Event Notifications Sent to SNS ?

Suppose you have several S3 bucket folders named ‘a’, ‘b’, and ‘c’; when objects are created in the S3 bucket, event notifications are pushed to an SNS topic, and the messages are finally delivered to the SQS queues that subscribe to this topic.

As you may have noticed, SNS supports a subscription filter policy, in JSON format, among the properties of a subscription. You might be looking for a policy that matches the S3 folder prefixes, filtering the notifications so that each queue gets only a subset of messages (only those with the S3 folder prefixes mentioned above). If objects are created under a different prefix, the queue shouldn’t get those messages. However, without some deep dive, can you really implement this intuitive solution? Let’s figure out whether it’s possible.

When a message is published to the topic, Amazon SNS will attempt to match the incoming message attributes to the subscription attribute that defines a filter policy. If they match, Amazon SNS will then deliver the message to the corresponding subscriber. However, unfortunately, S3 event notification currently does not support ‘Message Attribute’ fields.

Restriction of Subscription policy

With that being said, when S3 sends event notifications to SNS after an object is created, the SNS topic can’t apply its subscription filter policy to the incoming S3 event notifications because they don’t carry ‘Message Attribute’ fields. Please click here for more details on SNS ‘Message Attributes’.

Continue reading How to Filter S3 Event Notifications Sent to SNS ?

How does AWS SQS visibility timer work ?

As you may have realized from the doc here, when a consumer receives and processes a message from a queue, the message remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers (including the initial consumer) from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. The mechanism can be displayed as the picture below:

Visibility Timeout

But the follow-up question you may ask is: does the visibility timer (and the in-flight status) apply to the consumer that initially read the message?

Continue reading How does AWS SQS visibility timer work ?

How to Receive a Monitoring Alert When the Customized Threshold in SNS-SMS is Reached?

Issue:

When you are using SNS to send SMS messages, you may find you have nearly reached your monthly spend limit, which can potentially impact your business. What’s more, although the AWS doc claims that “Typically, AWS Support processes your case within 2 business days. Depending on the spend limit you request and the complexity of your case, AWS Support might require an additional 3 – 5 days to ensure that your request can be processed.”, this limit-increase process can still take longer than expected, during which your production system is impaired and can’t send messages.

Therefore, to buy yourself time, you can set up a monitoring alert, e.g. for when 75% of the monthly limit is reached, so that you have enough time to file a ticket and wait for AWS to handle it.

 

Solution:

You can set up an alarm with AWS CloudWatch, and it will send an alert message once the threshold is reached. To demonstrate, here is a step-by-step solution:

1. Sign in to the AWS Management Console and open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.

2. Choose ‘Metrics’.

Cloudwatch ‘Metrics’

3. Select ‘SNS’.

4. Select ‘Metrics with no dimensions’.

Metrics with no dimensions

5. Check ‘SMSMonthToDateSpentUSD’ and click the ‘Graphed metrics’ tab.

Graphed metrics

6. Click the bell symbol on the right side of the row to create the alarm for this metric.

7. Give an alarm name and description.

Create new alarm

8. At ‘Whenever: SMSMonthToDateSpentUSD’ choose ‘>=’ and type the value ‘0.75’ as the threshold as your default spend limit is $1.

9. Treat missing data as ‘missing’.

10. At ‘Whenever this alarm:’ choose ‘State is Alarm’.

11. At ‘Send notification to:’ choose an SNS topic through which your subscribed endpoint, such as email, can receive this alarm.

Alarm parameters

During the test, I typed ‘0.02’ as the threshold value, and after I sent 14 SMS text messages, I received the email alert below. Note: after receiving this alert, I could still successfully publish text messages. Please also feel free to check the docs [1]/[2] for more details on the ‘SMSMonthToDateSpentUSD’ metric and CloudWatch alarms.

You are receiving this email below because your Amazon CloudWatch Alarm “test” in the US East (N. Virginia) region has entered the ALARM state, because “Threshold Crossed: 1 datapoint [0.0258 (29/11/18 15:23:00)] was greater than or equal to the threshold (0.02).” at “Thursday 29 November, 2018 15:28:07 UTC”.

Alert email

I hope this addresses your concern. Please also feel free to check the other blogs I wrote for your reference.

How to Track and Analyze Amazon SES Sending Activity ?

What is Amazon SES sending activity ?

Sometimes it’s worth tracking Simple Email Service (SES) email sending activity by monitoring event publishing. Amazon SES provides methods to monitor your sending activity. You can implement these methods to keep track of important measures, such as your account’s bounce, complaint, and reject rates. Excessively high bounce and complaint rates may jeopardize your ability to send emails using Amazon SES.

Additionally, you can also use these methods to measure the rates at which your customers engage with the emails you send. For example, these sending metrics can help you identify your overall open and clickthrough rates.

The metrics that you can measure using Amazon SES are referred to as email sending events. The email sending events that you can monitor are:

  • Sends – The call to Amazon SES was successful and Amazon SES will attempt to deliver the email.
  • Rejects – Amazon SES accepted the email, determined that it contained a virus, and rejected it. Amazon SES didn’t attempt to deliver the email to the recipient’s mail server.
  • Bounces – The recipient’s mail server permanently rejected the email. This event corresponds to hard bounces. Soft bounces are only included when Amazon SES fails to deliver the email after retrying for a period of time.
  • Complaints – The email was successfully delivered to the recipient. The recipient marked the email as spam.
  • Deliveries – Amazon SES successfully delivered the email to the recipient’s mail server.
  • Opens – The recipient received the message and opened it in his or her email client.
  • Clicks – The recipient clicked one or more links contained in the email.
  • Rendering Failures – The email was not sent because of a template rendering issue. This event type only occurs when you send email using the SendTemplatedEmail or SendBulkTemplatedEmail API operations. This event type can occur when template data is missing, or when there is a mismatch between template parameters and data.

How to track SES sending activity ?

To track and process these sending activities, you can configure SES to send them to three types of destinations: Kinesis Data Firehose, CloudWatch, and an SNS topic.

3 types of destination of Configuration set in SES
Different sending activities and SNS destination topic

As we need to store these events in a relational database and SES can’t store them there directly, we should set up an SNS topic as the destination, relay the sending activities into SQS (which acts as a scalable buffer), and then use the SQS queue to trigger a Lambda function that consumes, processes, and stores the sending activities in the relational database (RDS).

Note: As you may have noticed, the configuration set in SES can send email sending events to the SNS topic. From SNS, you can attach endpoints with different protocols, such as SQS, Lambda, etc. As your business may need to send emails to a great many recipients (many thousands of users), you should take advantage of an SQS queue as a buffer rather than publishing event messages directly to the Lambda function.

Since SQS can trigger a Lambda function as soon as messages arrive in the queue, you should have SQS trigger the Lambda function. Once invoked, the Lambda function inserts the event messages into RDS MySQL. The final workflow looks like this diagram:

Workflow of buffering and storing sending activity events

Continue reading How to Track and Analyze Amazon SES Sending Activity ?