How to build a simple web server in Rust
By Jonas Vetterle (@jvetterle)
This article is part of a series in which we compare implementations of a simple web server in different programming languages. If you're interested, check out the implementations in Python, and also the benchmarking article.
In this article we'll implement a simple web server in Rust in two different ways: using Rocket and axum. There are plenty of other web frameworks available in Rust, such as Actix, Warp, Tide and Salvo.
How did I land on Rocket and axum? Well, axum is part of Tokio, a popular async runtime for Rust. It's also a relatively bare-bones framework, in the sense that it doesn't use macros for defining routes. Rocket, on the other hand, is a bit higher-level: it uses macros and hides away a lot of boilerplate code. That makes for an interesting comparison, as we're trading some control for ease of use.
The app we're going to build is fairly simple, but easily extensible. We're going to build a todo list app that allows us to add, remove, edit and list tasks. We're going to persist everything in a SQLite database, which we'll manage using Diesel.
Shortcuts:
- Setting up the API server with Rocket
- Setting up an SQLite database with Diesel
- Implementing the CRUD operations
- Implementing the API end points
- Setting up the API server in axum
- Differences between Rocket and axum
Simples, let's go!
Setting up the API server with Rocket
Let's set up the Rust project and start with just the web server part. All we need for now is a Cargo.toml where we specify the dependencies, a main.rs where we write the server code, and a Rocket.toml for the Rocket configuration.
/
├── src
│ ├── main.rs
├── Cargo.toml
├── Rocket.toml
Starting with the Cargo.toml file, we need to add the dependencies for the web server.
[package]
name = "simple_web_server"
version = "0.1.0"
edition = "2021"
[dependencies]
rocket = { version = "0.5.0-rc.2" }
rocket_sync_db_pools = { version = "0.1.0-rc.3", features = ["diesel_sqlite_pool"] }
Rocket is the framework we're using for our API server, and we could work with just that. However, we're also installing rocket_sync_db_pools, which means we won't have to write boilerplate code for connection pooling. Next, let's add the Rocket configuration in Rocket.toml.
[default]
address = "127.0.0.1"
port = 8000
workers = 2
Next, let's write the server code in main.rs. We start with just a simple "Hello, world!" GET route.
#[macro_use]
extern crate rocket;

#[get("/")]
fn hello() -> &'static str {
    "Hello, world!"
}

#[launch]
fn rocket() -> _ {
    rocket::build().mount("/", routes![hello])
}
Since Rust is a compiled language, we need to build the project before we can run it:
cargo build
which will create a target directory with the compiled binary.
To run the server, we can use
cargo run
If everything is working correctly, you should be able to access the server at http://127.0.0.1:8000 and see the "Hello, world!" message.
curl http://127.0.0.1:8000
➜ Hello, world!
Setting up an SQLite database with Diesel
For the database management, we're going to use Diesel, which is a safe, extensible ORM and query builder for Rust. We'll also need diesel-cli later, so let's install that first.
cargo install diesel_cli --no-default-features --features sqlite
Add an .env file with the following content to the root of the project, specifying the database URL.
DATABASE_URL=file:db.sqlite
Now we can let Diesel set up the database for us.
diesel setup
This should create a migrations directory for you (more on that in a second), a diesel.toml file specifying where diesel-cli can find the desired schema, and a db.sqlite file, which is the SQLite database itself.
Next, let's use diesel-cli to create a migration.
diesel migration generate init
This should create a new folder YYYY-MM-DD-202827_init in your migrations folder, containing an empty up.sql and an empty down.sql file. The SQL files should contain the commands to create and drop the table respectively; that way you can always apply and revert the changes to the database.
-- up.sql
CREATE TABLE tasks (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    name TEXT NOT NULL,
    done BOOLEAN NOT NULL
);
and
-- down.sql
DROP TABLE tasks;
Now that the migration is in place, we can run it via:
diesel migration run
This will:
- create the tasks table in the database
- create a file schema.rs in the src directory (the location is specified in diesel.toml), which contains the Rust representation of the database schema. You can also create this file manually by running diesel print-schema > src/schema.rs.
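For reference, the generated schema.rs for our table should look roughly like this (a sketch of what diesel print-schema emits; the exact formatting may differ between Diesel versions):

```rust
// schema.rs -- generated by diesel-cli from the database
diesel::table! {
    tasks (id) {
        id -> Integer,
        name -> Text,
        done -> Bool,
    }
}
```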
Confirm that the table was created by running:
sqlite3 db.sqlite ".tables"
This should show the tasks table as well as a __diesel_schema_migrations table, which Diesel uses to keep track of the migrations that have been run.
Your folder structure should now look like this:
/
├── migrations
│ ├── YYYY-MM-DD-202827_init
│ │ ├── down.sql
│ │ ├── up.sql
├── src
│ ├── main.rs
│ ├── schema.rs
├── .env
├── Cargo.toml
├── diesel.toml
├── Rocket.toml
├── db.sqlite
Implementing the CRUD operations
Ok now that the database is set up, let's implement the CRUD operations.
First we have to update the dependencies in the Cargo.toml file. We're adding diesel itself and serde for serialization (since our API server will receive and send JSON data).
[dependencies]
diesel = { version = "2.2.1", features = ["sqlite", "r2d2"] }
serde = { version = "1.0", features = ["derive"] }
Next we're going to create a models.rs file in the src directory, where we define some Task structs. These structs use traits from serde to allow for serialization and deserialization of the data.
use diesel::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Queryable, Selectable, Serialize, Deserialize)]
#[diesel(table_name = crate::schema::tasks)]
#[diesel(check_for_backend(diesel::sqlite::Sqlite))]
pub struct Task {
    pub id: i32,
    pub name: String,
    pub done: bool,
}

#[derive(Insertable, AsChangeset, Deserialize)]
#[diesel(table_name = crate::schema::tasks)]
pub struct NewTask {
    pub name: String,
    pub done: bool,
}
Let's also create crud.rs in the same directory, where we define the CRUD operations.
use crate::models::{NewTask, Task};
use crate::schema::tasks;
use diesel::prelude::*;
use diesel::sqlite::SqliteConnection;

pub fn create_task(conn: &mut SqliteConnection, new_task: &NewTask) -> QueryResult<Task> {
    // Insert, then read back the newest row within the same transaction,
    // since the insert itself doesn't return the created row.
    conn.transaction(|conn| {
        diesel::insert_into(tasks::table)
            .values(new_task)
            .execute(conn)?;
        tasks::table.order(tasks::id.desc()).first(conn)
    })
}

pub fn get_task(conn: &mut SqliteConnection, task_id: i32) -> QueryResult<Task> {
    tasks::table.find(task_id).first(conn)
}

pub fn get_tasks(conn: &mut SqliteConnection) -> QueryResult<Vec<Task>> {
    tasks::table.load::<Task>(conn)
}

pub fn delete_task(conn: &mut SqliteConnection, task_id: i32) -> QueryResult<usize> {
    diesel::delete(tasks::table.find(task_id)).execute(conn)
}

pub fn update_task(
    conn: &mut SqliteConnection,
    task_id: i32,
    updated_task: &NewTask,
) -> QueryResult<Task> {
    diesel::update(tasks::table.find(task_id))
        .set(updated_task)
        .execute(conn)?;
    tasks::table.find(task_id).first(conn)
}
Your project structure should now look like this:
/
├── migrations
│ ├── YYYY-MM-DD-202827_init
│ │ ├── down.sql
│ │ ├── up.sql
├── src
│ ├── crud.rs
│ ├── main.rs
│ ├── models.rs
│ ├── schema.rs
├── .env
├── Cargo.toml
├── diesel.toml
├── Rocket.toml
├── db.sqlite
Implementing the API end points
The last step is to make the CRUD operations available via the API.
Start by adding the following DB configuration to the Rocket.toml file.
[default.databases]
sqlite_database = { url = "file:db.sqlite", pool_size = 2 }
This setting is used in the main.rs file below to create a connection pool to the database; you can choose a different pool size if you want. Pooling comes with a small overhead, but it's generally a good idea in concurrent applications: instead of establishing a new connection to the database every time we want to perform a query, we borrow an existing connection from the pool, perform the query, and then return the connection to the pool.
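To illustrate that borrow-and-return idea in isolation, here's a toy, stdlib-only sketch of a pool. This is not how r2d2 or rocket_sync_db_pools are implemented; the "connections" are just stand-in Strings.

```rust
use std::sync::{Arc, Mutex};

// A toy connection pool: a fixed set of "connections" behind a Mutex.
// `get` borrows one (None if the pool is exhausted), `put` returns it.
struct Pool {
    connections: Mutex<Vec<String>>,
}

impl Pool {
    fn new(size: usize) -> Arc<Self> {
        let connections = (0..size).map(|i| format!("conn-{}", i)).collect();
        Arc::new(Pool { connections: Mutex::new(connections) })
    }

    fn get(&self) -> Option<String> {
        self.connections.lock().unwrap().pop()
    }

    fn put(&self, conn: String) {
        self.connections.lock().unwrap().push(conn);
    }
}

fn main() {
    let pool = Pool::new(2);
    let conn = pool.get().expect("pool should have a free connection");
    // ... perform the query with `conn` here ...
    pool.put(conn); // hand the connection back for reuse
}
```

Real pools add timeouts, health checks and blocking waits on top of this, but the lifecycle is the same: borrow, use, return.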
Finally update the main.rs
file as follows:
mod crud;
mod models;
mod schema;

#[macro_use]
extern crate rocket;

use crate::crud::{create_task, delete_task, get_task, get_tasks, update_task};
use crate::models::{NewTask, Task};
use rocket::http::Status;
use rocket::serde::json::Json;
use rocket_sync_db_pools::{database, diesel};

#[database("sqlite_database")]
struct DbConn(diesel::SqliteConnection);

#[get("/")]
fn hello() -> &'static str {
    "Hello, world!"
}

#[post("/tasks", format = "json", data = "<task>")]
async fn create(task: Json<NewTask>, conn: DbConn) -> Json<Task> {
    conn.run(move |c| create_task(c, &task))
        .await
        .map(Json)
        .expect("Failed to create task")
}

#[get("/tasks")]
async fn index(conn: DbConn) -> Json<Vec<Task>> {
    conn.run(get_tasks)
        .await
        .map(Json)
        .expect("Failed to retrieve tasks")
}

#[get("/tasks/<id>")]
async fn detail(id: i32, conn: DbConn) -> Json<Task> {
    conn.run(move |c| get_task(c, id))
        .await
        .map(Json)
        .expect("Failed to retrieve task")
}

#[put("/tasks/<id>", format = "json", data = "<task>")]
async fn update(id: i32, task: Json<NewTask>, conn: DbConn) -> Json<Task> {
    conn.run(move |c| update_task(c, id, &task))
        .await
        .map(Json)
        .expect("Failed to update task")
}

#[delete("/tasks/<id>")]
async fn delete(id: i32, conn: DbConn) -> Status {
    conn.run(move |c| delete_task(c, id))
        .await
        .map(|num_deleted| {
            if num_deleted > 0 {
                Status::NoContent
            } else {
                Status::NotFound
            }
        })
        .unwrap_or(Status::InternalServerError)
}

#[rocket::main]
async fn main() -> Result<(), rocket::Error> {
    // The [default.databases] settings are picked up from Rocket.toml.
    rocket::build()
        .attach(DbConn::fairing())
        .mount("/", routes![hello, index, create, detail, update, delete])
        .launch()
        .await?;
    Ok(())
}
Note the #[database] attribute macro, which creates the connection pool for us via rocket_sync_db_pools:
#[database("sqlite_database")]
struct DbConn(diesel::SqliteConnection);
And that's it! As before, build the project with cargo build, then cargo run, and your server should be up and running.
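With the server running, you can exercise the endpoints with curl. The task name below is just an illustration; the paths and payload fields follow the routes and the NewTask struct defined above.

```shell
# create a task
curl -X POST http://127.0.0.1:8000/tasks \
  -H 'Content-Type: application/json' \
  -d '{"name": "Buy milk", "done": false}'

# list all tasks
curl http://127.0.0.1:8000/tasks

# fetch, update, and delete the task with id 1
curl http://127.0.0.1:8000/tasks/1
curl -X PUT http://127.0.0.1:8000/tasks/1 \
  -H 'Content-Type: application/json' \
  -d '{"name": "Buy milk", "done": true}'
curl -X DELETE http://127.0.0.1:8000/tasks/1
```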
Setting up the API server in axum
Now let's set up the API server using axum. A lot of the above still holds, but there are some key differences.
First of all, let's update the Cargo.toml file with the dependencies for axum. It should now look like this:
[package]
name = "simple_web_server"
version = "0.1.0"
edition = "2021"
[dependencies]
axum = "0.6"
diesel = { version = "2.2.1", features = ["sqlite", "r2d2"] }
serde = { version = "1.0", features = ["derive"] }
tokio = { version = "1", features = ["full"] }
The crud.rs file stays the same, but we have to make some changes to models.rs. The reason is that Rocket is more lenient by default when it comes to JSON serialization and deserialization; as mentioned above, it's higher-level and abstracts away a lot of boilerplate for us. axum, in contrast, expects boolean values to be actual booleans, not, for example, string-encoded booleans. So it will struggle if we send a JSON object where a boolean field is the string "true" or "True" instead of the boolean true.
The way we get around that is by using a custom deserializer for the done field in the NewTask struct.
// models.rs
use diesel::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Queryable, Selectable, Serialize, Deserialize)]
#[diesel(table_name = crate::schema::tasks)]
#[diesel(check_for_backend(diesel::sqlite::Sqlite))]
pub struct Task {
    pub id: i32,
    pub name: String,
    pub done: bool,
}

#[derive(Insertable, AsChangeset, Deserialize, Clone)]
#[diesel(table_name = crate::schema::tasks)]
pub struct NewTask {
    pub name: String,
    #[serde(deserialize_with = "deserialize_bool_from_string_or_bool")]
    pub done: bool,
}

fn deserialize_bool_from_string_or_bool<'de, D>(deserializer: D) -> Result<bool, D::Error>
where
    D: serde::Deserializer<'de>,
{
    // Accept either a JSON bool or a string such as "true"/"True".
    #[derive(Deserialize)]
    #[serde(untagged)]
    enum BoolOrString {
        Bool(bool),
        String(String),
    }
    match BoolOrString::deserialize(deserializer)? {
        BoolOrString::Bool(b) => Ok(b),
        // Lowercasing first means "True" and "False" are covered as well.
        BoolOrString::String(s) => match s.to_lowercase().as_str() {
            "true" => Ok(true),
            "false" => Ok(false),
            _ => Err(serde::de::Error::custom("can't deserialize string to bool")),
        },
    }
}
Here deserialize_bool_from_string_or_bool accepts both representations and converts them to a bool.
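Stripped of the serde machinery, the string branch boils down to a case-insensitive match. Here's a stdlib-only sketch of that logic (parse_bool is a hypothetical helper, not part of the article's code):

```rust
// Case-insensitive string-to-bool conversion, mirroring the string arm
// of the custom deserializer: lowercase first, then match.
fn parse_bool(s: &str) -> Result<bool, String> {
    match s.to_lowercase().as_str() {
        "true" => Ok(true),
        "false" => Ok(false),
        other => Err(format!("can't deserialize {:?} to bool", other)),
    }
}

fn main() {
    assert_eq!(parse_bool("True"), Ok(true));   // "True" lowercases to "true"
    assert_eq!(parse_bool("false"), Ok(false));
    assert!(parse_bool("yes").is_err());        // anything else is an error
}
```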
Next, let's update the main.rs
file to use axum instead of Rocket. Here is the updated file:
use axum::{
    extract::{Path, State},
    http::StatusCode,
    routing::get,
    Json, Router,
};
use diesel::{
    connection::SimpleConnection,
    r2d2::{self, ConnectionManager},
    sqlite::SqliteConnection,
};
use std::sync::Arc;
use std::time::Duration;

mod crud;
mod models;
mod schema;

use crate::crud::{create_task, delete_task, get_task, get_tasks, update_task};
use crate::models::{NewTask, Task};

type DbPool = Arc<r2d2::Pool<ConnectionManager<SqliteConnection>>>;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(2)
        .enable_all()
        .build()?;
    runtime.block_on(async_main())
}

async fn async_main() -> Result<(), Box<dyn std::error::Error>> {
    let database_url = "db.sqlite";
    let pool_size = 2;
    let manager = ConnectionManager::<SqliteConnection>::new(database_url);
    let pool = r2d2::Pool::builder()
        .max_size(pool_size)
        .connection_customizer(Box::new(ConnectionOptions {
            busy_timeout: Some(Duration::from_secs(5)),
            pragmas: vec![("journal_mode".to_string(), "WAL".to_string())],
        }))
        .build(manager)
        .expect("Failed to create pool.");
    let pool = Arc::new(pool);

    let app = Router::new()
        .route("/", get(hello))
        .route("/tasks", get(index).post(create_task_handler))
        .route(
            "/tasks/:id",
            get(detail).put(update_task_handler).delete(delete_task_handler),
        )
        .with_state(pool);

    println!("Server starting on http://127.0.0.1:8000");
    axum::Server::bind(&"127.0.0.1:8000".parse().unwrap())
        .serve(app.into_make_service())
        .await?;
    Ok(())
}

#[derive(Debug)]
struct ConnectionOptions {
    pub busy_timeout: Option<Duration>,
    pub pragmas: Vec<(String, String)>,
}

impl r2d2::CustomizeConnection<SqliteConnection, diesel::r2d2::Error> for ConnectionOptions {
    fn on_acquire(&self, conn: &mut SqliteConnection) -> Result<(), diesel::r2d2::Error> {
        // Apply the busy timeout and any configured PRAGMAs (e.g. WAL mode)
        // whenever the pool establishes a connection.
        if let Some(duration) = self.busy_timeout {
            conn.batch_execute(&format!("PRAGMA busy_timeout = {};", duration.as_millis()))
                .map_err(diesel::r2d2::Error::QueryError)?;
        }
        for (name, value) in &self.pragmas {
            conn.batch_execute(&format!("PRAGMA {} = {};", name, value))
                .map_err(diesel::r2d2::Error::QueryError)?;
        }
        Ok(())
    }
}

async fn hello() -> &'static str {
    "Hello, world!"
}

async fn create_task_handler(
    State(pool): State<DbPool>,
    Json(new_task): Json<NewTask>,
) -> Result<Json<Task>, (StatusCode, String)> {
    let mut conn = pool
        .get()
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
    create_task(&mut conn, &new_task)
        .map(Json)
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Failed to create task: {}", e)))
}

async fn index(State(pool): State<DbPool>) -> Result<Json<Vec<Task>>, (StatusCode, String)> {
    let mut conn = pool
        .get()
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
    get_tasks(&mut conn)
        .map(Json)
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Failed to get tasks: {}", e)))
}

async fn detail(
    State(pool): State<DbPool>,
    Path(id): Path<i32>,
) -> Result<Json<Task>, (StatusCode, String)> {
    let mut conn = pool
        .get()
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
    get_task(&mut conn, id)
        .map(Json)
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Failed to get task: {}", e)))
}

async fn update_task_handler(
    State(pool): State<DbPool>,
    Path(id): Path<i32>,
    Json(task): Json<NewTask>,
) -> Result<Json<Task>, (StatusCode, String)> {
    let mut conn = pool
        .get()
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
    update_task(&mut conn, id, &task)
        .map(Json)
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Failed to update task: {}", e)))
}

async fn delete_task_handler(
    State(pool): State<DbPool>,
    Path(id): Path<i32>,
) -> Result<StatusCode, (StatusCode, String)> {
    let mut conn = pool
        .get()
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
    delete_task(&mut conn, id)
        .map(|num_deleted| {
            if num_deleted > 0 {
                StatusCode::NO_CONTENT
            } else {
                StatusCode::NOT_FOUND
            }
        })
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Failed to delete task: {}", e)))
}
Note that the sync main function isn't strictly necessary:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(2)
        .enable_all()
        .build()?;
    runtime.block_on(async_main())
}
The reason we're including it is so we can set the number of worker_threads to 2. Normally tokio would determine this automatically, but we want to achieve parity with the Rocket and Python implementations.
Differences between Rocket and axum
Let's talk about some of this code in more detail and how it varies from the Rocket implementation.
Connection pooling
Rocket, being the more high-level framework, made connection pooling relatively easy to set up via rocket_sync_db_pools.
#[database("sqlite_database")]
struct DbConn(diesel::SqliteConnection);
In axum on the other hand we have to write a lot more boilerplate code to set up the connection pool. We use r2d2 for the connection pooling functionality.
let pool = r2d2::Pool::builder()
    .max_size(pool_size)
    .connection_customizer(Box::new(ConnectionOptions {
        busy_timeout: Some(Duration::from_secs(5)),
        pragmas: vec![("journal_mode".to_string(), "WAL".to_string())],
    }))
    .build(manager)
    .expect("Failed to create pool.");
We also use Arc to make this pool safely shareable across multiple threads.
let pool = Arc::new(pool);
And finally we also enable write ahead logging (WAL). Rocket does this by default, and it's necessary for the benchmarking I did here.
#[derive(Debug)]
struct ConnectionOptions {
    pub busy_timeout: Option<Duration>,
    pub pragmas: Vec<(String, String)>,
}

impl r2d2::CustomizeConnection<SqliteConnection, diesel::r2d2::Error> for ConnectionOptions {
    fn on_acquire(&self, conn: &mut SqliteConnection) -> Result<(), diesel::r2d2::Error> {
        if let Some(duration) = self.busy_timeout {
            conn.batch_execute(&format!("PRAGMA busy_timeout = {};", duration.as_millis()))
                .map_err(diesel::r2d2::Error::QueryError)?;
        }
        for (name, value) in &self.pragmas {
            conn.batch_execute(&format!("PRAGMA {} = {};", name, value))
                .map_err(diesel::r2d2::Error::QueryError)?;
        }
        Ok(())
    }
}
So connection pooling is a lot more involved in axum than in Rocket. In practice this might not matter much, because in a real-world application you might well have a separate service handling connection pooling.
Route definitions
In axum, routes and handlers are defined separately. So for example we define the hello
handler like this:
async fn hello() -> &'static str {
    "Hello, world!"
}
and then define the route like this:
let app = Router::new().route("/", get(hello));
In Rocket, we define the route and handler in one go using a macro:
#[get("/")]
fn hello() -> &'static str {
    "Hello, world!"
}
Another notable difference is that axum uses explicit parameter extraction. Take for example the index
handler:
async fn index(State(pool): State<DbPool>) -> Result<Json<Vec<Task>>, (StatusCode, String)> {
    let mut conn = pool
        .get()
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
    get_tasks(&mut conn)
        .map(Json)
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Failed to get tasks: {}", e)))
}
where we use State(pool)
to explicitly extract the connection pool from the state. We then go on to use pool.get()
to get a connection from the pool.
In Rocket on the other hand we use parameter injection: all we have to do is declare conn: DbConn in the handler signature, and Rocket takes care of injecting a pooled connection for us.
#[get("/tasks")]
async fn index(conn: DbConn) -> Json<Vec<Task>> {
    conn.run(get_tasks)
        .await
        .map(Json)
        .expect("Failed to retrieve tasks")
}
Next steps
That's a wrap! I hope this article is a helpful starting point for building web servers in Rust using Rocket and axum. If you're interested, check out the implementations in Python, and also the benchmarking article.