AWS SQS

Vivekkumarrullay
5 min read · Feb 28, 2021

Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. It provides a generic web services API that you can access using any programming language that the AWS SDK supports.

Amazon SQS supports both standard and FIFO queues.
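For illustration, here is a minimal boto3 sketch that creates one queue of each type. The queue names and region are made up for this example; the only hard requirement shown is that a FIFO queue's name must end in ".fifo".

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Standard queue: nearly unlimited throughput, at-least-once delivery,
# best-effort ordering.
standard = sqs.create_queue(QueueName="demo-standard-queue")

# FIFO queue: name must end in ".fifo"; strict ordering and
# exactly-once processing.
fifo = sqs.create_queue(
    QueueName="demo-queue.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",  # deduplicate by message body hash
    },
)

print(standard["QueueUrl"])
print(fifo["QueueUrl"])
```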

Benefits of using Amazon SQS:

Security: You control who can send messages to and receive messages from an Amazon SQS queue.

Server-side encryption (SSE) lets you transmit sensitive data by protecting the contents of messages in queues using keys managed in AWS Key Management Service (AWS KMS).
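As a hedged example, assuming you already have a queue URL (the one below is a placeholder), you could turn on SSE by pointing the queue at a KMS key; the alias shown is the AWS-managed key for SQS.

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL for illustration only.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

# Enable server-side encryption with a KMS key.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        "KmsMasterKeyId": "alias/aws/sqs",       # AWS-managed key for SQS
        "KmsDataKeyReusePeriodSeconds": "300",   # reuse data keys for 5 minutes
    },
)
```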

Durability: For the safety of your messages, Amazon SQS stores them on multiple servers. Standard queues support at-least-once message delivery, and FIFO queues support exactly-once message processing.

Availability: Amazon SQS uses redundant infrastructure to provide highly concurrent access to messages and high availability for producing and consuming messages.

Scalability: Amazon SQS can process each buffered request independently, scaling transparently to handle any load increases or spikes without any provisioning instructions.

Reliability: Amazon SQS locks your messages during processing, so that multiple producers can send and multiple consumers can receive messages at the same time.

What is SQS used for?

These are the most common ways to use SQS (and messaging systems in general) in cloud applications.

Decoupling microservices: In a microservice architecture, messages are one of the easiest ways to set up communication between different parts of the system. If your microservices run in AWS, and especially if they are serverless services, SQS is a great choice for that part of the communication.

Sending tasks between different parts of your system: You don’t have to be running a microservices-oriented application to take advantage of SQS. You can also use it in any kind of application that needs to communicate tasks to other systems.

Basic Amazon SQS architecture

Distributed queues: There are three main parts in a distributed messaging system: the components of your distributed system, your queue (distributed on Amazon SQS servers), and the messages in the queue.

Message lifecycle: The following scenario describes the lifecycle of an Amazon SQS message in a queue, from creation to deletion (a boto3 sketch of the same steps follows the list).

  1. A producer (component 1) sends message A to a queue, and the message is distributed across the Amazon SQS servers redundantly.
  2. When a consumer (component 2) is ready to process messages, it consumes messages from the queue, and message A is returned. While message A is being processed, it remains in the queue and isn’t returned to subsequent receive requests for the duration of the visibility timeout.
  3. The consumer (component 2) deletes message A from the queue to prevent the message from being received and processed again when the visibility timeout expires.
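Here is a minimal sketch of that lifecycle with boto3; the queue name, message body, and timeouts are placeholders chosen for the example, not values prescribed by SQS.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="lifecycle-demo")["QueueUrl"]

# 1. The producer sends message A; SQS stores it redundantly across servers.
sqs.send_message(QueueUrl=queue_url, MessageBody="message A")

# 2. The consumer receives the message; it stays invisible to other receive
#    requests for the visibility timeout (30 seconds here) while it is processed.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    VisibilityTimeout=30,
    WaitTimeSeconds=10,  # long polling
)

for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])  # stand-in for real processing work
    # 3. Delete the message so it is not redelivered once the timeout expires.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```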

Amazon SQS performance: what to expect

From a technical standpoint, SQS standard queues support nearly unlimited throughput per queue; FIFO queues have per-queue throughput quotas, and you can request an increase if your usage grows. In practice, the performance of your SQS queue is chiefly limited by the latency between the queue and its clients. If your queue and its clients are located in the same AWS Region, latency will be quite low, but not zero.

To illustrate this, let’s assume that you’re using SQS from AWS Lambda. SQS is accessed through an HTTP API, so even if the function and the queue are in the same Region, each request still takes a few milliseconds. This millisecond-level latency is the primary performance limitation in SQS.
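A rough way to see this yourself, assuming you already have a queue (the URL below is a placeholder), is simply to time a send_message call:

```python
import time
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/demo"  # placeholder

start = time.perf_counter()
sqs.send_message(QueueUrl=queue_url, MessageBody="ping")
elapsed_ms = (time.perf_counter() - start) * 1000

# Typically a few milliseconds when the client runs in the same Region.
print(f"send_message round trip: {elapsed_ms:.1f} ms")
```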

Case Study on Auto Scaling & Job Scheduling through SQS and ELB

Client Profile and Use Case:

The client builds audio mining and speech analytics systems. Audio files are supplied to speech servers in .WAV format, converted into textual data files, and then sent to storage. The client was struggling with the load-balancing and auto-scaling design, because the audio files were live telephone conversations between call-center agents and customers. These calls could sometimes stretch to 2 or 3 hours, which caused the speech-conversion application serious problems in the production environment, so an efficient auto-scaling and load-balancing solution was needed. Tekzee was asked to design a load-balancing solution using AWS services and Python scripting.

AWS Services Used:

Tekzee used the following AWS services, along with Python, to build the required architecture:

  • Amazon EC2.
  • Amazon SQS.
  • Amazon S3.
  • Amazon DynamoDB & Auto Scaling working together.
  • Python.

Implementation:

Step-by-step implementation of the solution is as follows (a sketch of the worker loop appears after the list):

  • The audio files are placed in an S3 bucket.
  • A request message is placed in an incoming Amazon SQS queue with a pointer to the file location.
  • The speech server that runs on EC2 instances reads the request message from the incoming queue, retrieves the file from Amazon S3 using the pointer, and converts the file into the target format.
  • The converted .json file is put back into Amazon S3, and a response message is placed in an outgoing Amazon SQS queue with a pointer to the converted file.
  • Metadata about these files is indexed into Amazon DynamoDB for querying.
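Below is a hedged sketch of what such a speech-server worker loop might look like in boto3. The queue URLs, bucket layout, table name, message format, and the transcribe() helper are all assumptions made for illustration, not details from the case study.

```python
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

INCOMING_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/incoming"  # placeholder
OUTGOING_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/outgoing"  # placeholder
table = dynamodb.Table("transcripts")  # placeholder table name


def transcribe(wav_bytes: bytes) -> dict:
    """Placeholder for the client's speech-to-text conversion."""
    return {"text": "..."}


while True:
    # Read a request message pointing at a .WAV file in S3.
    resp = sqs.receive_message(
        QueueUrl=INCOMING_QUEUE, MaxNumberOfMessages=1, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        pointer = json.loads(msg["Body"])  # assumed shape: {"bucket": ..., "key": ...}

        # Fetch the audio file and convert it.
        obj = s3.get_object(Bucket=pointer["bucket"], Key=pointer["key"])
        result = transcribe(obj["Body"].read())

        # Put the converted .json back into S3.
        out_key = pointer["key"].replace(".wav", ".json")
        s3.put_object(Bucket=pointer["bucket"], Key=out_key, Body=json.dumps(result))

        # Announce the result on the outgoing queue and index metadata in DynamoDB.
        sqs.send_message(
            QueueUrl=OUTGOING_QUEUE,
            MessageBody=json.dumps({"bucket": pointer["bucket"], "key": out_key}),
        )
        table.put_item(Item={"key": out_key, "source": pointer["key"]})

        # Delete the request so it is not processed again.
        sqs.delete_message(QueueUrl=INCOMING_QUEUE, ReceiptHandle=msg["ReceiptHandle"])
```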

During this workflow, a dedicated Auto Scaling controller instance constantly monitors the incoming queue and, based on the number of messages in it, dynamically adjusts the number of speech-server instances to meet the customers’ response-time requirements.
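One simple way such a monitoring loop could work is sketched below, assuming an Auto Scaling group already exists for the speech servers; the group name, queue URL, and scaling ratio are illustrative only.

```python
import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")

INCOMING_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/incoming"  # placeholder

# Read the approximate backlog of unprocessed audio files.
attrs = sqs.get_queue_attributes(
    QueueUrl=INCOMING_QUEUE,
    AttributeNames=["ApproximateNumberOfMessages"],
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# Roughly one speech server per 10 queued files, capped between 1 and 20 instances.
desired = max(1, min(20, backlog // 10 + 1))
autoscaling.set_desired_capacity(
    AutoScalingGroupName="speech-servers",  # assumed group name
    DesiredCapacity=desired,
)
```

In many deployments the same effect is achieved with a CloudWatch alarm or target-tracking policy on the queue-depth metric rather than a custom script; the loop above just makes the idea concrete.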

=============Thanks for Reading===============
