In today’s world of cloud computing, building highly scalable and resilient applications is essential. As the user base grows, the backend infrastructure must grow with it, and for applications backed by relational databases such as PostgreSQL and MySQL, the database often becomes the primary bottleneck.
Within that database architecture, the most common choke point is connection management. Opening and closing database connections are computationally expensive operations. When AWS Lambda functions or microservices suddenly handle far more traffic, they can exhaust the available database connections, causing the database to slow down, fail to respond, or crash entirely.
Amazon RDS Proxy was built to address exactly this problem, and this step-by-step guide covers it in detail. We will cover the concept of connection pooling, the process of provisioning and configuring RDS Proxy, securely connecting to a test environment using EC2 Instance Connect, and finally running a rigorous load test to show that the proxy is doing its job.
Understanding the “Why” before the “How”
Before hopping into the AWS Management Console or writing command-line scripts, we need to understand the basics of database connections and why an intermediary layer is necessary for modern applications.
If we run a single application server on EC2, the application creates a fixed number of database connections and keeps reusing them. In this traditional setup the number of servers rarely changes, so the database can easily handle a steady number of connections.
In newer architectures such as microservices or serverless applications, however, compute scales up and down automatically. If an AWS Lambda function suddenly runs a thousand times at once, each invocation might try to open its own database connection.
The problem is that the database spends memory and CPU on every connection. If too many connections are opened simultaneously, the database can run out of resources, a condition commonly known as connection exhaustion. The database slows down, starts rejecting new connections, and may eventually crash.
The Solution: Amazon RDS Proxy
Amazon RDS Proxy is a fully managed, highly available database proxy service for Amazon RDS and Amazon Aurora. It sits between our application and our database, acting as a gatekeeper and connection manager. It provides:
Connection pooling and multiplexing:
The proxy establishes a pool of persistent connections to the database. When our application needs to communicate with the database, it connects to the proxy instead, and the proxy routes the application’s queries through its established pool of database connections. In this way, many application connections can share a small number of actual database connections.
Improved failover times:
In case our primary database instance fails, RDS Proxy can automatically route traffic to a standby instance. Because the application stays connected to the proxy, it does not need to resolve a new DNS endpoint or handle dropped connections itself. In this way, RDS Proxy can reduce failover times by up to 66 percent.
Enhanced security:
We can enforce IAM authentication at the proxy, which means our applications can connect using IAM roles rather than hardcoded database passwords. The proxy then securely retrieves the real database credentials from AWS Secrets Manager.
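As a sketch of what IAM authentication looks like in practice (the endpoint, region, and username here are placeholders matching the rest of this guide), a client requests a short-lived token and uses it in place of a password:

```shell
# Request a short-lived (15-minute) IAM authentication token for the proxy.
# <YOUR_PROXY_ENDPOINT> is a placeholder; substitute your own endpoint.
TOKEN=$(aws rds generate-db-auth-token \
  --hostname <YOUR_PROXY_ENDPOINT> \
  --port 3306 \
  --username admin \
  --region us-east-1)

# Use the token in place of a password. IAM authentication requires TLS,
# and the token is sent via the cleartext plugin over that TLS channel.
mysql -h <YOUR_PROXY_ENDPOINT> -P 3306 -u admin \
  --enable-cleartext-plugin --password="$TOKEN"
```

Note that the IAM principal running this must also be granted the `rds-db:connect` permission for the proxy.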
Prerequisites and Baseline Infrastructure
Let’s now dive into the guide, for which we need a few foundational elements deployed in our AWS account.
1. The AWS Environment
We need an active AWS account with administrator privileges, plus familiarity with the AWS Management Console and a bash command-line shell.
2. The Database Cluster (Amazon Aurora MySQL)
Here we will be deploying against an Amazon Aurora MySQL cluster. Provisioning an Aurora cluster from scratch typically takes 10 to 15 minutes. Ensure the cluster is in the “Available” state before moving on to creating a proxy. It should be deployed in a VPC with at least two private subnets.
3. AWS Secrets Manager
RDS Proxy requires our database credentials to be stored securely in AWS Secrets Manager.
1. Navigate to Secrets Manager in the AWS Console.
2. Click “Store a new secret”.
3. Choose “Credentials for Amazon RDS database”.
4. Enter the `admin` username and the password we used when creating our Aurora cluster.
5. Select the Aurora database cluster we created.
6. Name the secret (e.g., `db/aurora-mysql-credentials`) and save it.
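If you prefer the command line, an equivalent secret can be created with a single call; this is a sketch, and the password value is a placeholder:

```shell
# Store the Aurora admin credentials as a JSON secret.
# The console's "Credentials for Amazon RDS database" flow also records
# host/port/engine metadata; for RDS Proxy, username and password suffice.
aws secretsmanager create-secret \
  --name db/aurora-mysql-credentials \
  --description "Aurora MySQL credentials for RDS Proxy" \
  --secret-string '{"username":"admin","password":"<YOUR_DB_PASSWORD>"}'
```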
4. IAM Role for RDS Proxy
RDS Proxy needs permission to read the secret we just created.
1. Go to the IAM Console.
2. Create a new Role. Choose “RDS” as the trusted entity, and specifically “RDS – Add Role to Database”.
3. Attach a policy that grants `secretsmanager:GetSecretValue` for the specific ARN of your stored secret.
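The same role can be scripted. A sketch follows, in which the role name is our own choice and the secret ARN is a placeholder:

```shell
# Trust policy allowing the RDS service to assume the role.
cat > proxy-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "rds.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role \
  --role-name rds-proxy-secrets-role \
  --assume-role-policy-document file://proxy-trust-policy.json

# Inline policy scoped to the single secret created above.
aws iam put-role-policy \
  --role-name rds-proxy-secrets-role \
  --policy-name AllowGetSecretValue \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "<YOUR_SECRET_ARN>"
    }]
  }'
```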
Creating the Amazon RDS Proxy
With our foundation set, we can now create the RDS Proxy. This process tells AWS to provision the highly available proxy fleet, associate it with our database, and configure how connections should be managed. Expect this process to take up to 20 minutes to complete once initiated.
The step-by-step configuration of Amazon RDS Proxy is as follows:
1. Navigate to the RDS Console: Log into the AWS Management Console, search for “RDS”, and open the service dashboard.
2. Access Proxies: In the left-hand navigation pane, click on “Proxies”, then click the orange “Create proxy” button.
3. Proxy Configuration:
Proxy identifier: Give proxy a meaningful name, such as `app-db-proxy`.
Engine compatibility: Select “MySQL” (since we are using Aurora MySQL).
Require Transport Layer Security: Ensure this is checked to enforce SSL/TLS encryption between our application and the proxy.
Idle client connection timeout: This dictates how long the proxy will keep an inactive application connection open. Leave this at the default `1800` seconds.
4. Target Group Configuration:
Database: Select the Aurora MySQL cluster you provisioned earlier from the dropdown menu. This tells the proxy where to route the traffic.
Connection pool maximum connections: This is the percentage of the database’s maximum allowed connections that the proxy is permitted to use. Setting this to `100` means the proxy can utilize all available connections on the database.
5. Authentication:
Secrets Manager secret: Select the secret you created in the prerequisites (e.g., `db/aurora-mysql-credentials`).
IAM role: Select the IAM role we created that has permissions to read that secret.
IAM Authentication: Set this to “Required” if we want our applications to authenticate using IAM tokens, or “Not Required” if we want them to authenticate using the standard database username and password (which the proxy verifies against Secrets Manager). For ease of initial testing, set this to “Not Required”.
6. Network Configuration:
Subnets: Select at least two private subnets in different Availability Zones. RDS Proxy is highly available by design and requires multi-AZ deployment.
VPC security groups: Select a security group that allows inbound traffic on port 3306 (MySQL) from our application servers (the EC2 instance we will use later).
7. Advanced Configuration: Enable Enhanced Logging temporarily if we are troubleshooting; for a standard deployment, leave the defaults.
8. Create: Click “Create proxy”.
The status of the proxy will change to “Creating”. Go grab a coffee, as this will take between 5 and 20 minutes. Once the status changes to “Available”, copy the “Proxy endpoint” (it will look something like `app-db-proxy.proxy-abcde12345.us-east-1.rds.amazonaws.com`). We will need this for the testing phase.
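For repeatable environments, the console workflow above maps onto two CLI calls. This is a sketch in which the subnet IDs, security group, ARNs, and cluster identifier are all placeholders:

```shell
# Create the proxy with the settings chosen in the console walkthrough.
aws rds create-db-proxy \
  --db-proxy-name app-db-proxy \
  --engine-family MYSQL \
  --auth "AuthScheme=SECRETS,SecretArn=<YOUR_SECRET_ARN>,IAMAuth=DISABLED" \
  --role-arn <YOUR_PROXY_ROLE_ARN> \
  --vpc-subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
  --vpc-security-group-ids <YOUR_SECURITY_GROUP_ID> \
  --require-tls \
  --idle-client-timeout 1800 \
  --region us-east-1

# Register the Aurora cluster as the proxy's target.
aws rds register-db-proxy-targets \
  --db-proxy-name app-db-proxy \
  --db-cluster-identifiers <YOUR_AURORA_CLUSTER_ID> \
  --region us-east-1
```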
Connecting to the Virtual Machine using EC2 Instance Connect
To test the proxy, we need a client machine that lives within the same VPC (or has network access to it). We will use an Amazon EC2 instance acting as a bastion host or application server.
Instead of managing SSH keys locally, we will use “EC2 Instance Connect”, a secure, browser-based SSH client provided directly within the AWS Console.
Step 1: Locating your EC2 Instance
1. Open the “EC2” service in the AWS Management Console.
2. Click on “Instances (running)”.
3. Locate our test instance (e.g., `Proxy-Test-Client`). This instance should be in a public subnet with an assigned Public IPv4 address, and its security group should allow outbound traffic to our RDS Proxy’s security group on port 3306.
Step 2: Launching EC2 Instance Connect
1. Check the box next to your instance.
2. Click the “Connect” button at the top of the screen.
3. In the Connect to instance page, select the “EC2 Instance Connect” tab.
4. Leave the connection type as “Connect using EC2 Instance Connect”.
5. The default username is usually `ec2-user` (for Amazon Linux) or `ubuntu` (for Ubuntu). Leave the default populated by AWS.
6. Click “Connect”.
A new browser tab will open, displaying a black terminal screen. We are now securely connected to the command-line interface of your EC2 instance.
Step 3: Preparing the Environment
Before we can run a load test, we need to ensure the database client software is installed on the EC2 instance.
Run the following commands in the terminal to update the package manager and install the MariaDB/MySQL client, along with `sysbench`, a popular benchmarking tool we will use to generate connection load.
```bash
sudo yum update -y
sudo yum install mariadb -y
sudo amazon-linux-extras install epel -y
sudo yum install sysbench -y
```
Inspecting and Testing the RDS Proxy
Now comes the critical part: proving that the infrastructure we just built actually works and provides the connection multiplexing benefits we discussed.
Phase 1: Inspecting the Proxy via the AWS CLI
Before hitting the database with traffic, let’s verify our proxy configuration using the AWS Command Line Interface (CLI) installed on our EC2 instance.
Run the following command, replacing `app-db-proxy` with our actual proxy name:
```bash
aws rds describe-db-proxies \
  --db-proxy-name app-db-proxy \
  --region us-east-1
```
We should see a JSON output detailing the proxy’s Amazon Resource Name (ARN), endpoint, status (`available`), and the VPC it is attached to.
Next, verify the target group to ensure the proxy successfully registered our Aurora database:
```bash
aws rds describe-db-proxy-target-groups \
  --db-proxy-name app-db-proxy \
  --region us-east-1
```
Look for the `TargetGroupName` (usually `default`) and ensure the health state reflects that it is successfully communicating with the underlying Aurora DB.
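Per-target health, including the reason when a target is unhealthy, is reported by a companion command:

```shell
# List the registered targets and their health state.
aws rds describe-db-proxy-targets \
  --db-proxy-name app-db-proxy \
  --region us-east-1
```

A healthy registration shows `"State": "AVAILABLE"` under each target's `TargetHealth`.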
Phase 2: Verifying Connectivity
Let’s ensure we can log into the database through the proxy using the standard MySQL client. We will need the proxy endpoint URL, the database username (`admin`), and the password.
```bash
mysql -h <YOUR_PROXY_ENDPOINT> -u admin -p
```
Enter the password when prompted. If successful, we will see the `MySQL [(none)]>` prompt (the MariaDB client shows `MariaDB [(none)]>`).
Create a test database for our load test:
```sql
CREATE DATABASE proxy_test;
EXIT;
```
Phase 3: The Connection Load Test
To truly appreciate RDS Proxy, we need to stress-test it. We will use `sysbench` to simulate hundreds of concurrent connections attempting to query the database simultaneously.
First, we need to prepare the test database with some dummy data. We will route this preparation traffic through the proxy.
```bash
sysbench \
  --db-driver=mysql \
  --mysql-host=<YOUR_PROXY_ENDPOINT> \
  --mysql-user=admin \
  --mysql-password=<YOUR_PASSWORD> \
  --mysql-db=proxy_test \
  --tables=10 \
  --table-size=1000 \
  oltp_read_write prepare
```
The Direct Connection Baseline (The Failure Scenario)
To understand the value of the proxy, we ideally want to test against the “direct” database endpoint first. If you point `sysbench` directly at the Aurora cluster endpoint and request 1,000 concurrent threads, the database engine will attempt to spawn 1,000 distinct processes.
```bash
# Example of testing the direct endpoint
sysbench \
  --db-driver=mysql \
  --mysql-host=<YOUR_AURORA_CLUSTER_ENDPOINT> \
  --mysql-user=admin \
  --mysql-password=<YOUR_PASSWORD> \
  --mysql-db=proxy_test \
  --threads=1000 \
  --events=0 \
  --time=60 \
  oltp_read_only run
```
Depending on the size of your Aurora instance (e.g., a small `db.t3.medium`), the database will likely hit its `max_connections` limit. The `sysbench` output will start throwing errors: `FATAL: error 1040: Too many connections`. The application (in this case, sysbench) fails because the database cannot accept any more direct connections.
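To see the ceiling being hit, we can ask the engine directly. On Aurora MySQL, `max_connections` defaults to a value derived from the instance class's memory, which is why a small instance exhausts so quickly:

```shell
# Inspect the connection ceiling and current usage on the direct endpoint.
mysql -h <YOUR_AURORA_CLUSTER_ENDPOINT> -u admin -p -e \
  "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Threads_connected';"
```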
The RDS Proxy Triumph
Now, let’s run the exact same heavy load test, but this time, we will point `sysbench` at the “RDS Proxy endpoint”.
```bash
# Testing the proxy endpoint
sysbench \
  --db-driver=mysql \
  --mysql-host=<YOUR_PROXY_ENDPOINT> \
  --mysql-user=admin \
  --mysql-password=<YOUR_PASSWORD> \
  --mysql-db=proxy_test \
  --threads=1000 \
  --events=0 \
  --time=60 \
  oltp_read_only run
```
Now let’s analyze the results.
When running through the proxy, the test will succeed without dropping connections. How?
While `sysbench` successfully opens 1,000 connections to the “proxy”, the proxy intelligently multiplexes those requests. It queues the incoming queries and pushes them through its much smaller, optimized pool of persistent connections to the Aurora database.
If we monitor the Aurora database’s metrics in CloudWatch during this test, we will notice that the `DatabaseConnections` metric remains incredibly stable and low, entirely unaffected by the 1,000 application threads hammering the proxy. The proxy absorbs the connection overhead, protecting the database’s CPU and memory, ensuring that query execution remains fast and stable.
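Rather than watching the console graphs, the same metric can be pulled from the CLI. A sketch, assuming the cluster identifier as a placeholder and GNU `date` (as on Amazon Linux):

```shell
# Maximum DatabaseConnections per minute over the last 15 minutes.
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name DatabaseConnections \
  --dimensions Name=DBClusterIdentifier,Value=<YOUR_AURORA_CLUSTER_ID> \
  --statistics Maximum \
  --period 60 \
  --start-time "$(date -u -d '15 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --region us-east-1
```

Run it during the proxy test and the reported maxima should stay far below the 1,000 client threads.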
Phase 4: Cleanup
After running the tests, be sure to clean up the `sysbench` data to prevent unnecessary storage costs.
```bash
sysbench \
  --db-driver=mysql \
  --mysql-host=<YOUR_PROXY_ENDPOINT> \
  --mysql-user=admin \
  --mysql-password=<YOUR_PASSWORD> \
  --mysql-db=proxy_test \
  --tables=10 \
  oltp_read_write cleanup
```
Conclusion
Implementing Amazon RDS Proxy is a transformative step in evolving our cloud infrastructure from simply “functioning” to being truly robust and enterprise-ready.
By completing this hands-on process, we have witnessed firsthand how decoupling connection management from the database engine protects your critical data layer from application-side volatility. We walked through the prerequisite architecture, navigated the nuances of proxy configuration, established secure bastion access via EC2 Instance Connect, and ultimately proved the proxy’s resilience through rigorous load testing.
Whether we are building the next massive serverless application or simply trying to stabilize a legacy monolith experiencing traffic spikes, RDS Proxy ensures your database remains responsive, secure, and highly available under pressure.
