How to build a simple web server in Python
Jonas Vetterle (@jvetterle)
This article is part of a series in which we compare implementations of a simple web server in different programming languages. If you're interested, check out the implementations in Rust, and also the benchmarking article.
In this article we'll implement a simple web server in Python using FastAPI and SQLAlchemy. FastAPI with SQLAlchemy is a very reasonable choice for a Python web app these days: FastAPI is a modern, high-performance web framework and my personal first choice. The main alternative would be Django, a more full-featured framework, but since we're keeping things simple here, it's definitely overkill for our use case. There is also Flask, a lightweight framework, but it's not as performant as FastAPI and its API feels dated.
The app we're going to build is fairly simple, but easily extensible. We're going to build a todo list app that allows us to add, remove, edit and list tasks. We're going to persist everything in a SQLite database, which we'll manage using SQLAlchemy and Alembic.
Shortcuts:
- Setting up the API server with FastAPI
- Setting up an SQLite database with SQLAlchemy and Alembic
- Implementing the CRUD operations
- Implementing the API endpoints
I recommend you set up a virtual environment for the Python project first (see here). We're also going to use the poetry package manager, so make sure you have that installed as well (see here for instructions).
Setting up the API server with FastAPI
On the Python side, we're starting with the following project structure:
/
├── src
│ ├── main.py
├── pyproject.toml
Our pyproject.toml file looks as follows:
[tool.poetry]
name = "simple_web_server"
version = "0.1.0"
description = ""
authors = ["foo <foo@bar.com>"]
readme = "README.md"
[tool.poetry.dependencies]
fastapi = "^0.112.0"
python = "^3.10"
uvicorn = "^0.30.5"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
The most important bit to note here is that we're using FastAPI as our web framework. We're also installing uvicorn as the ASGI server to run the FastAPI app.
Let's start by creating our server in main.py and adding a simple "Hello, world!" GET route.
import uvicorn
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def hello() -> str:
    return "Hello, world!"


if __name__ == "__main__":
    uvicorn.run("src.main:app", reload=True)
A nice thing about Python is that we can run the server directly, without having to compile the code first. We just have to install the dependencies. To do that, run
poetry install
which will install the dependencies specified in the pyproject.toml file and create a poetry.lock file. The lock file keeps track of the exact versions of the dependencies that were installed, so if we were to set up the project on a different machine, running poetry install there would install exactly the same versions.
Now we can run the server with
poetry run python -m src.main
Note that we run main.py as a module from the project root; this keeps the src package imports working once we split the code across several files later.
You should be able to access http://127.0.0.1:8000 and see the "Hello, world!" message.
curl http://127.0.0.1:8000
➜ "Hello, world!"%
Setting up an SQLite database with SQLAlchemy and Alembic
Now we'll add dependencies for managing the SQLite database and migrations. Run the following command to install them:
poetry add alembic sqlalchemy
This will also add the two dependencies to the pyproject.toml file.
Alembic is a lightweight database migration tool for SQLAlchemy. To get started, run
poetry run alembic init alembic
Your project structure should now look like this:
/
├── alembic
│ ├── versions
│ ├── env.py
│ ├── README
│ ├── script.py.mako
├── src
│ ├── __init__.py
│ ├── main.py
├── alembic.ini
├── poetry.lock
├── pyproject.toml
Next we add two new files, src/db.py and src/models.py.
# db.py
from sqlalchemy import QueuePool, create_engine, event
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

engine = create_engine(
    "sqlite:///db.sqlite",
    connect_args={"check_same_thread": False},
    poolclass=QueuePool,
    pool_size=2,
)


@event.listens_for(engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
    # Runs once per new connection: enable WAL and relax fsync behaviour
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA journal_mode=WAL")
    cursor.execute("PRAGMA synchronous=NORMAL")
    cursor.close()


# Note: SQLAlchemy 2.0 removed the autocommit flag; sessions always commit explicitly
session_maker = sessionmaker(autoflush=False, bind=engine)


def get_db():
    # Create a fresh session per request and always close it afterwards
    db = session_maker()
    try:
        yield db
    finally:
        db.close()
We create an engine that connects to a file-based SQLite database db.sqlite, and a session maker that creates a new session for each request. The check_same_thread argument is specific to SQLite; we set it to False to allow the connection to be used by multiple threads. Use this with caution, as it can lead to race conditions and data corruption, but we want maximum speed for this simple app and promise to manage our sessions correctly!
Setting the poolclass and pool_size is not strictly necessary for a minimal setup, but we're doing it here to show how you can configure the connection pool. It's also what we do in the Rust implementation, so we want to keep parity with that.
The same applies to the following code block, which enables write-ahead logging (WAL):
def set_sqlite_pragma(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA journal_mode=WAL")
    cursor.execute("PRAGMA synchronous=NORMAL")
    cursor.close()
It's not necessary unless you're expecting a high number of concurrent writes, which is exactly what we do in our benchmarking.
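If you want to verify that the pragma actually took effect, here's a quick sketch using only the standard library (it assumes the db.sqlite file from this article already exists; WAL mode is persistent, so once our app has set it, any connection will see it):
# Quick sanity check: ask SQLite which journal mode is active
import sqlite3

conn = sqlite3.connect("db.sqlite")
print(conn.execute("PRAGMA journal_mode").fetchone())  # expected: ('wal',)
conn.close()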
get_db is a function that creates a new session for each request and closes it afterwards. This will come in handy in a second when we create the API endpoints.
In models.py we define the Task model. Since nothing in the app assigns ids when tasks are created, we also give the id column a default that generates a UUID string on insert; without it, creating a task would violate the primary-key constraint.
# models.py
import uuid

from sqlalchemy import Boolean, Column, String

from .db import Base


class Task(Base):
    __tablename__ = "task"

    # Generate a UUID string on insert so new tasks get an id automatically
    id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
    name = Column(String, nullable=True)
    done = Column(Boolean, nullable=True)
Optional: Creating a migration with Alembic automatically
Now here is a cool thing about Alembic. Go to alembic.ini and change the line that starts with sqlalchemy.url to
sqlalchemy.url = sqlite:///./db.sqlite
Also head to the env.py file in the alembic directory and add the following lines at the top of the file:
from src.db import Base
from src.models import *
Then change the line that says target_metadata = None to target_metadata = Base.metadata.
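Putting both edits together, the top of env.py ends up looking roughly like this (an excerpt; the rest of the generated file stays untouched):
# alembic/env.py (excerpt)
from src.db import Base
from src.models import *  # noqa: F401,F403 -- importing the models registers them on Base.metadata

# ... generated Alembic config code ...

# used by --autogenerate to diff the models against the database
target_metadata = Base.metadata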
Now you can run the following command, and Alembic will create a migration for you based on the schema you specified in the models.py file:
poetry run alembic revision --autogenerate -m "Added task table"
This will create the SQLite database file db.sqlite in the root directory, and a migration file in the alembic/versions directory. Notice that the upgrade and downgrade functions contain the instructions to create and drop the task table, respectively.
Creating a migration with Alembic manually
If for some reason this didn't work for you, you can also create the migration manually. To do this, run the command without the --autogenerate flag:
poetry run alembic revision -m "Added task table"
which creates a new migration file in the alembic/versions directory with empty upgrade and downgrade functions. Edit those functions as follows:
def upgrade() -> None:
    op.create_table(
        "task",
        sa.Column("id", sa.String(), nullable=False),
        sa.Column("name", sa.String(), nullable=True),
        sa.Column("done", sa.Boolean(), nullable=True),
        sa.PrimaryKeyConstraint("id"),
    )


def downgrade() -> None:
    op.drop_table("task")
Apply the migration
Whichever way you chose to create the migration, you can now run it with
poetry run alembic upgrade head
This will create the task table in the database. To verify that it was created, run
sqlite3 file:db.sqlite ".tables"
which should print out the two tables that are in the database: alembic_version and task.
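If you don't have the sqlite3 command-line tool installed, here's a stdlib equivalent (a convenience sketch, not part of the project code):
# List the tables in db.sqlite using only the standard library
import sqlite3

conn = sqlite3.connect("db.sqlite")
rows = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
print([name for (name,) in rows])  # expected: ['alembic_version', 'task']
conn.close()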
Implementing CRUD operations
For the CRUD operations, we first define some pydantic classes in src/schemas.py.
# schemas.py
from pydantic import BaseModel, ConfigDict


class TaskBase(BaseModel):
    name: str
    done: bool


class NewTask(TaskBase):
    pass


class Task(TaskBase):
    # task ids are strings (UUIDs), matching the String column on the model
    id: str

    model_config = ConfigDict(from_attributes=True)
Pydantic plays nicely with FastAPI and SQLAlchemy and will handle all the serialization and deserialization for us. It's generally a good idea to use pydantic for data validation and type annotations in Python anyway.
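To make the from_attributes bit concrete, here's a small illustrative sketch (not part of the app) showing the ORM-to-pydantic round trip:
# Illustrative only: validate an ORM object straight into a pydantic schema
from src import models, schemas

db_task = models.Task(id="abc-123", name="write docs", done=False)
api_task = schemas.Task.model_validate(db_task)  # reads attributes, not a dict
print(api_task.model_dump())  # {'name': 'write docs', 'done': False, 'id': 'abc-123'}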
Now we're going to add another new file, src/crud.py, for the actual CRUD operations.
# crud.py
from sqlalchemy.orm import Session

from . import models, schemas


def create_task(db: Session, task: schemas.NewTask):
    db_task = models.Task(**task.model_dump())
    db.add(db_task)
    db.commit()
    db.refresh(db_task)  # reload the row so we serialize its committed state
    return schemas.Task.model_validate(db_task)


def get_task(db: Session, task_id: str):
    db_task = db.query(models.Task).filter(models.Task.id == task_id).first()
    return schemas.Task.model_validate(db_task) if db_task else None


def get_tasks(db: Session):
    db_tasks = db.query(models.Task).all()
    return [schemas.Task.model_validate(task) for task in db_tasks]


def update_task(db: Session, task_id: str, task: schemas.NewTask):
    db_task = db.query(models.Task).filter(models.Task.id == task_id).first()
    if db_task:
        for key, value in task.model_dump().items():
            setattr(db_task, key, value)
        db.commit()
        db.refresh(db_task)
        return schemas.Task.model_validate(db_task)
    return None


def delete_task(db: Session, task_id: str):
    db_task = db.query(models.Task).filter(models.Task.id == task_id).first()
    if db_task:
        db.delete(db_task)
        db.commit()
        return 1
    return 0
And that's it!
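If you'd like to try the CRUD layer before wiring up the API, here's a small smoke test you can run from the project root after applying the migration (a sketch; the session handling mirrors what get_db does):
# Hypothetical smoke test for src/crud.py
from src import crud, schemas
from src.db import session_maker

db = session_maker()
try:
    task = crud.create_task(db, schemas.NewTask(name="buy milk", done=False))
    print(crud.get_task(db, task.id))  # the task we just created
    crud.update_task(db, task.id, schemas.NewTask(name="buy milk", done=True))
    print(crud.delete_task(db, task.id))  # 1 == one row deleted
finally:
    db.close()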
Implementing the API endpoints
Now we can use the CRUD operations in the API endpoints. Make the following additions to the main.py file.
from typing import List

import uvicorn
from fastapi import Depends, FastAPI, HTTPException
from sqlalchemy.orm import Session

import src.crud as crud
from src.db import get_db
from src.schemas import NewTask, Task

app = FastAPI()


@app.get("/")
async def hello() -> str:
    return "Hello, world!"


@app.post("/tasks", response_model=Task)
def create_task(task: NewTask, db: Session = Depends(get_db)):
    return crud.create_task(db, task)


@app.get("/tasks/{task_id}", response_model=Task)
def read_task(task_id: str, db: Session = Depends(get_db)):
    db_task = crud.get_task(db, task_id)
    if db_task is None:
        raise HTTPException(status_code=404, detail="Task not found")
    return db_task


@app.get("/tasks/", response_model=List[Task])
def read_tasks(db: Session = Depends(get_db)):
    return crud.get_tasks(db)


@app.put("/tasks/{task_id}", response_model=Task)
def update_task(task_id: str, task: NewTask, db: Session = Depends(get_db)):
    db_task = crud.update_task(db, task_id, task)
    if db_task is None:
        raise HTTPException(status_code=404, detail="Task not found")
    return db_task


@app.delete("/tasks/{task_id}", response_model=int)
def delete_task(task_id: str, db: Session = Depends(get_db)):
    deleted_count = crud.delete_task(db, task_id)
    if deleted_count == 0:
        raise HTTPException(status_code=404, detail="Task not found")
    return deleted_count


if __name__ == "__main__":
    uvicorn.run("src.main:app", reload=True)
Notice how we use the get_db function as a dependency for the endpoints. This creates and closes a new session for each request.
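With the server running, you can exercise the endpoints from Python using only the standard library (an illustrative sketch; curl works just as well):
# End-to-end check against the running server
import json
import urllib.request

BASE = "http://127.0.0.1:8000"

# Create a task
req = urllib.request.Request(
    f"{BASE}/tasks",
    data=json.dumps({"name": "try the api", "done": False}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    task = json.load(resp)
print(task)  # {'name': 'try the api', 'done': False, 'id': '...'}

# Fetch it back by id
with urllib.request.urlopen(f"{BASE}/tasks/{task['id']}") as resp:
    print(json.load(resp))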
You can run the server either with poetry run python -m src.main or with poetry run uvicorn src.main:app --reload.
The --reload flag is useful for development, as it automatically restarts the server when you make changes to the code. However, it's not recommended for production. What is recommended for production, depending on your available resources and the expected load, is to run the server with several worker processes. You can do this by running, for example,
poetry run uvicorn src.main:app --workers 4
which will start the server with four worker processes.
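The same can be done programmatically. Note that auto-reload and multiple workers can't be combined, so pick one; here's a sketch of an alternative __main__ block under that assumption:
# Alternative __main__ block: multiple workers instead of auto-reload
import uvicorn

if __name__ == "__main__":
    # reload=True and workers > 1 are mutually exclusive; use workers for production
    uvicorn.run("src.main:app", workers=4)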
That's a wrap: there you have it, a simple web server in Python using FastAPI and SQLAlchemy. If you're interested, check out the implementations in Rust, and also the benchmarking article.