How I manage to handle 1 million update requests per minute with Rust

1. Business Details

Let me describe the business first. Our service lets students practice online. For each exam, a student can start an attempt and submit a list of answer choices. Every time a student answers a question, the client sends all of the answer choices to the server, and we must save them to our database. The goal is to serve these updates as fast as possible.

Exam and Attempt tables
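Based purely on the description above, the two tables might look roughly like this in Rust terms. Every field name here is my own guess for illustration, not the actual schema from the diagram:

use std::collections::HashMap;

// Guessed shapes of the two tables described above; the real schema may differ.
struct Exam {
    id: i64,
    title: String,
}

struct Attempt {
    id: i64,
    exam_id: i64,
    student_id: i64,
    // question id -> selected answer choices, mirroring the JSON payload sent by the client
    answers: HashMap<String, Vec<i64>>,
}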

2. How To Measure

To stress-test the service, I use the k6 load-testing tool. It makes it easy to generate many requests per second from a script written in JavaScript. The script below is what I use for testing:

import http from 'k6/http';

export let options = {
  scenarios: {
    constant_request_rate: {
      // Fire a constant 10,000 requests per second for one minute.
      executor: 'constant-arrival-rate',
      rate: 10000,
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 6000,
      maxVUs: 15000,
    },
  },
};

// Random integer in [min, max], used to spread updates across attempt ids.
function getRandomInt(min, max) {
  min = Math.ceil(min);
  max = Math.floor(max);
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

export default function () {
  const data = { answers: { "1": [1], "2": [2], "3": [3], "4": [4] } };
  const params = { headers: { "Content-Type": "application/json" } };
  http.patch(
    'http://localhost:8000/attempts/' + getRandomInt(1, 100000),
    JSON.stringify(data),
    params
  );
}
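With the server listening on port 8000, the script is run with the standard k6 run command, for example k6 run script.js if it is saved under that name.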
Everything runs against localhost, on a machine with the following specs:

  • RAM: 24GB
  • Disk: GX2 SSD 256GB

3. Server Implementation And Results

As mentioned above, for the implementation I use the Actix Web framework along with sqlx, which manages the connections to my database through a connection pool. When creating the pool, I keep sqlx's default configuration, meaning the connection acquire timeout, idle timeout, and maximum connection lifetime are 30 seconds, 10 minutes, and 30 minutes respectively.
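To make the setup concrete, here is a minimal sketch of what such a server might look like. The table and column names (attempts, answers), the JSONB storage of the answer map, the pool size, and the connection string are all my assumptions for illustration, not the article's actual code; it only shows the general Actix Web + sqlx shape.

use actix_web::{patch, web, App, HttpResponse, HttpServer};
use serde::Deserialize;
use sqlx::postgres::PgPoolOptions;
use std::collections::HashMap;

// Request body matching the k6 payload: question id -> chosen answers.
#[derive(Deserialize)]
struct UpdateAttempt {
    answers: HashMap<String, Vec<i64>>,
}

#[patch("/attempts/{id}")]
async fn update_attempt(
    pool: web::Data<sqlx::PgPool>,
    path: web::Path<i64>,
    body: web::Json<UpdateAttempt>,
) -> HttpResponse {
    let id = path.into_inner();
    // Store the whole answer map as JSONB (binding a serde_json::Value
    // requires sqlx's `json` feature). Table/column names are assumptions.
    let answers = serde_json::to_value(&body.answers).expect("serializable map");
    let result = sqlx::query("UPDATE attempts SET answers = $1 WHERE id = $2")
        .bind(answers)
        .bind(id)
        .execute(pool.get_ref())
        .await;

    match result {
        Ok(_) => HttpResponse::Ok().finish(),
        Err(_) => HttpResponse::InternalServerError().finish(),
    }
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // sqlx defaults apply: acquire timeout 30s, idle timeout 10min, max lifetime 30min.
    let pool = PgPoolOptions::new()
        .max_connections(100) // assumed value; the article does not state the pool size
        .connect("postgres://user:pass@localhost/exams")
        .await
        .expect("failed to connect to Postgres");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .service(update_attempt)
    })
    .bind(("0.0.0.0", 8000))?
    .run()
    .await
}

The pool is created once in main and shared with every worker through web::Data, so each request only borrows a connection from the same pool instead of opening a new one.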

For the benchmark build, these are the Cargo profile settings I compile the server with (in Cargo.toml):

[profile.release]
opt-level = 3
debug = false
split-debuginfo = 'off'
debug-assertions = false
overflow-checks = false
lto = true
panic = 'unwind'
incremental = false
codegen-units = 1
rpath = false
15,000 rps result of the Rust server
The result after setting the table to UNLOGGED
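The "unlogged" change in the second result refers to Postgres UNLOGGED tables: writes to them skip the write-ahead log, which speeds up updates at the cost of the table being truncated after a crash and not being replicated. Below is a minimal sketch of one way to apply that change through sqlx, assuming the table is called attempts (the same change can simply be run in psql):

use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let pool = PgPoolOptions::new()
        .connect("postgres://user:pass@localhost/exams")
        .await?;

    // UNLOGGED skips the WAL for this table: faster writes, but the table is
    // emptied after a crash and is not replicated to standbys.
    sqlx::query("ALTER TABLE attempts SET UNLOGGED")
        .execute(&pool)
        .await?;

    Ok(())
}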

4. For The Future

I have run pgbench against my table, and it reports a maximum of about 55,000 TPS. So there is still room to push the server closer to that number if the processing of a single request can be optimized. As a next step, I will dive into the server code to improve it: reducing serialization time, optimizing memory allocations, and so on.
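As a starting point for the serialization work, a tiny timing harness like the one below (my own sketch, not from the article) can show how much a single deserialization of the answers payload costs before and after any change:

use serde::Deserialize;
use std::collections::HashMap;
use std::time::Instant;

#[derive(Deserialize)]
struct Payload {
    answers: HashMap<String, Vec<i64>>,
}

fn main() {
    // Same body the k6 script sends.
    let body = r#"{"answers": {"1": [1], "2": [2], "3": [3], "4": [4]}}"#;

    let start = Instant::now();
    for _ in 0..1_000_000 {
        let parsed: Payload = serde_json::from_str(body).expect("valid JSON");
        assert_eq!(parsed.answers.len(), 4);
    }

    // Average cost of one deserialization over a million iterations.
    println!("avg deserialize time: {:?}", start.elapsed() / 1_000_000);
}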
