There’s a difference between reading about AI-assisted development and actually doing it. This post is about doing it.

We’re going to build a complete REST API for a bookmarks manager — the kind of thing you’d actually use to save and organize links. By the end, you’ll have a working API with full CRUD operations, search, tagging, and proper error handling. And we’re going to build the whole thing by talking to AI.

What We’re Building

A bookmarks API with these endpoints:

  • POST /bookmarks — save a new bookmark
  • GET /bookmarks — list all bookmarks (with search and tag filtering)
  • GET /bookmarks/{id} — get a single bookmark
  • PUT /bookmarks/{id} — update a bookmark
  • DELETE /bookmarks/{id} — delete a bookmark

Tech stack: Python, FastAPI, SQLite with SQLAlchemy. Simple, no external services, runs anywhere.

Step 1: Project Setup

Here’s the prompt I used to kick things off:

Set up a new FastAPI project for a bookmarks manager API. Create the project structure with separate files for models, database, and routes. Use SQLAlchemy with SQLite. Include a requirements.txt. Don’t add any endpoints yet — just the skeleton.

The AI generated this structure:

bookmarks-api/
├── main.py
├── database.py
├── models.py
├── schemas.py
├── routes.py
└── requirements.txt

The database.py file handles the SQLAlchemy engine and session:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, declarative_base

SQLALCHEMY_DATABASE_URL = "sqlite:///./bookmarks.db"

engine = create_engine(
    SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)
SessionLocal = sessionmaker(autoflush=False, bind=engine)  # `autocommit` was removed in SQLAlchemy 2.0
Base = declarative_base()

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

Nothing surprising here. That’s the point — you want AI to handle the standard stuff so you can focus on the interesting parts.

Step 2: The Data Model

Next prompt:

Create a Bookmark model with these fields: id (auto-increment), url (required, must be unique), title (required), description (optional), tags (stored as a comma-separated string), created_at (auto-set), updated_at (auto-set on modification). Also create the Pydantic schemas for create, update, and response.

The key detail here is specifying how tags are stored. Without that constraint, AI might create a separate tags table with a many-to-many relationship — which is fine for a larger app but overkill for what we need.

from sqlalchemy import Column, DateTime, Integer, String, func

from database import Base

class Bookmark(Base):
    __tablename__ = "bookmarks"

    id = Column(Integer, primary_key=True, index=True)
    url = Column(String, unique=True, nullable=False, index=True)
    title = Column(String, nullable=False)
    description = Column(String, default="")
    tags = Column(String, default="")
    created_at = Column(DateTime, server_default=func.now())
    updated_at = Column(DateTime, server_default=func.now(), onupdate=func.now())

The Pydantic schemas followed the same pattern — BookmarkCreate for input validation, BookmarkResponse for output, and BookmarkUpdate with all optional fields for partial updates.

Step 3: CRUD Endpoints

This is where I used the iterative approach. Instead of asking for all five endpoints at once, I built them one at a time.

Create bookmark:

Add a POST /bookmarks endpoint. Validate that the URL is a valid URL format. If a bookmark with the same URL already exists, return 409 Conflict. Return the created bookmark with 201 status.

List bookmarks:

Add a GET /bookmarks endpoint. Support query parameters: search (searches title and description), tag (filters by tag), skip (pagination offset, default 0), limit (pagination limit, default 20, max 100). Return a list of bookmarks sorted by created_at descending.

The search and filtering logic was the most complex part:

from fastapi import APIRouter, Depends, Query
from sqlalchemy import or_
from sqlalchemy.orm import Session

from database import get_db
from models import Bookmark
from schemas import BookmarkResponse

router = APIRouter()

@router.get("/bookmarks", response_model=list[BookmarkResponse])
def list_bookmarks(
    search: str | None = None,
    tag: str | None = None,
    skip: int = 0,
    limit: int = Query(default=20, le=100),
    db: Session = Depends(get_db),
):
    query = db.query(Bookmark)

    if search:
        query = query.filter(
            or_(
                Bookmark.title.ilike(f"%{search}%"),
                Bookmark.description.ilike(f"%{search}%"),
            )
        )

    if tag:
        # Substring match, so "doc" also matches "docs"; exact matching would
        # compare against comma boundaries in the stored tags string
        query = query.filter(Bookmark.tags.ilike(f"%{tag}%"))

    return query.order_by(Bookmark.created_at.desc()).offset(skip).limit(limit).all()

Get, update, and delete followed the same pattern — one prompt each, with specific requirements for error handling and response codes.

Step 4: Error Handling

After the basic endpoints were working, I added proper error handling:

Add a custom exception handler for validation errors that returns a consistent error format: {"detail": "error message"}. Make sure all 404 responses use the same format. Add a health check endpoint at GET /health that returns {"status": "ok"}.

Small touch, but it makes the API feel professional and consistent. The AI added a helper function for 404 responses that all endpoints shared.

Step 5: Testing It

I asked AI to help me test with curl commands:

Give me curl commands to test every endpoint. Include examples for: creating a bookmark, creating a duplicate (should fail), listing with search, listing with tag filter, updating a bookmark’s title, and deleting a bookmark.

# Create a bookmark
curl -X POST http://localhost:8000/bookmarks \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "title": "Example", "tags": "reference,docs"}'

# Search bookmarks
curl "http://localhost:8000/bookmarks?search=example"

# Filter by tag
curl "http://localhost:8000/bookmarks?tag=docs"

# Update title
curl -X PUT http://localhost:8000/bookmarks/1 \
  -H "Content-Type: application/json" \
  -d '{"title": "Updated Example"}'

Nearly everything worked on the first run. The only issue I caught was that the update endpoint wasn’t properly handling partial updates: it was overwriting unset fields with None. One follow-up prompt fixed it: “The update endpoint should only modify fields that are explicitly included in the request body. Don’t overwrite unset fields.”

What This Took

Total time: about 40 minutes. That includes setup, all five endpoints, error handling, and testing.

Total prompts: about 12. Half were for the core functionality, half were for refinements and fixes.

Lines of code I typed manually: zero. Every line was AI-generated. But I read and understood every line, and I made several adjustments through follow-up prompts.

The Takeaway

Building an API with AI isn’t about giving up control. It’s about operating at a higher level of abstraction. Instead of writing code, you’re specifying behavior. Instead of debugging syntax, you’re reviewing logic.

The skills that mattered weren’t Python or FastAPI knowledge (though that helped with review). They were:

  1. Knowing what a good API looks like
  2. Breaking the work into clear, sequential steps
  3. Being specific about edge cases and error handling
  4. Testing thoroughly and catching issues early

Try building something this weekend using only AI assistance. Start with a project you understand well — that way you can evaluate the output confidently. You’ll develop a feel for the workflow fast.