Dave Gill

P2P-Play

I started working on my little side project p2p-play about two years ago. I wanted to build a peer-to-peer network application that would let me learn more about Rust and, in particular, networking. It was ambitious, and I started from a tutorial I saw online. But that tutorial only went so far, stopping just as things got interesting. Not too surprising really: most tutorials out there whet your appetite and nothing more.

I really struggled to get the tutorial code to work because, of course, it was old and its author had not updated it for the latest version of libp2p. I had to make a lot of changes just to get the code to compile and run. I was quite happy when I finally got it running, but it took a while, and it taught me more about the library I was using than about Rust itself.

Where it Started

The project was inspired by a LogRocket blog post on building a peer-to-peer app in Rust. At the time libp2p was at version 0.44 and by the time I got started it had already moved on a lot. The early commits show the pain — moving from v0.45 through 0.46, 0.47, 0.48, 0.49, 0.50, 0.51, 0.52 and eventually all the way to 0.56. Each version brought breaking changes, deprecated APIs and hours of head-scratching.

A large number of the early commits are just debugging messages ("Added more logging", "More logging", "Added more debugging around connections"), which is just how that period felt. The peer discovery and connection layer of libp2p is fiddly, and getting two nodes to find each other and stay connected took a lot of trial and error.

Growing the App

Once the basics were working I started building on top of it. The git history is a feature diary:

  • Story publishing — nodes could create stories and broadcast them to connected peers
  • Peer naming — giving peers human-readable aliases so you don't have to deal with raw peer IDs
  • Persistent storage — moved from a simple JSON file to a proper SQLite database using rusqlite
  • Direct messaging — point-to-point messages using libp2p's request-response protocol, completely separate from the broadcast channel
  • Terminal UI — replaced the raw stdout output with a full ratatui-based TUI with multiple panels, keyboard navigation and real-time updates
  • Kademlia DHT — added DHT support so nodes could discover each other beyond the local network, not just via mDNS
  • Channel system — organise stories into named channels with subscription management
  • Message relay — relay messages through intermediate nodes to reach peers behind NAT
  • End-to-end encryption — a crypto module for encrypting direct messages
  • WASM execution — probably the most ambitious addition: nodes can advertise and execute WebAssembly modules, turning the network into a distributed compute platform
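The channel system above is, at its core, subscription filtering: each node keeps a set of channel names it cares about and only surfaces stories from those channels. Here is a minimal, std-only sketch of that idea; the `Story` and `Subscriptions` types are illustrative, not the project's actual types (which live in SQLite via rusqlite):

```rust
use std::collections::HashSet;

// Illustrative story type; the real project persists these to SQLite.
#[derive(Debug, Clone, PartialEq)]
struct Story {
    id: u64,
    channel: String,
    title: String,
    body: String,
}

// Hypothetical subscription manager: the set of channels this node follows.
#[derive(Default)]
struct Subscriptions {
    channels: HashSet<String>,
}

impl Subscriptions {
    fn subscribe(&mut self, channel: &str) {
        self.channels.insert(channel.to_string());
    }

    fn unsubscribe(&mut self, channel: &str) {
        self.channels.remove(channel);
    }

    // Keep only the stories whose channel this node is subscribed to.
    fn filter<'a>(&self, stories: &'a [Story]) -> Vec<&'a Story> {
        stories
            .iter()
            .filter(|s| self.channels.contains(&s.channel))
            .collect()
    }
}
```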

What it Does Now

At its heart, p2p-play is a peer-to-peer story sharing application. Peers on the same local network find each other automatically via mDNS; peers across the internet connect via Kademlia DHT bootstrap nodes. Once connected, you can:

  • Create and publish stories to all connected peers
  • Organise stories into channels and subscribe to the ones you care about
  • Send direct messages to named peers
  • Set a human-readable alias so others can identify you
  • View everything through a multi-panel terminal UI with colour-coded notifications, relative timestamps and delivery receipts

The TUI has panels for local peers, connected peers, local stories, received stories, direct messages and a log output. Navigation is keyboard-driven and story creation is a step-by-step wizard inside the terminal.

The application is configured through a single unified_network_config.json file that covers everything from bootstrap retry settings to ping keep-alive intervals to circuit breaker thresholds. Most settings can be hot-reloaded at runtime without a restart.
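I won't reproduce the real schema here, but to give a flavour of what a unified config of that shape covers, it might look something like the following. Every key and value below is illustrative, not the project's actual configuration:

```json
{
  "bootstrap": {
    "retry_interval_secs": 30,
    "max_retries": 5
  },
  "ping": {
    "keep_alive_interval_secs": 15
  },
  "circuit_breaker": {
    "failure_threshold": 5,
    "reset_timeout_secs": 60
  }
}
```

Keeping all of it in one file means hot-reload only has to watch a single path.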

To get started yourself:

git clone https://github.com/bhagdave/p2p-play.git
cd p2p-play
cargo run

Then set your name with "name <alias>" and create your first story with "create s".

Using AI Throughout the Process

I will be honest about how much AI assistance has gone into the project — both as a learning tool and as a productivity tool. The git history makes this fairly visible.

Early on I added a CLAUDE.md file to give Claude Code context about the project, and a GitHub Copilot instructions file. Over time the workflow evolved into using both tools in different ways:

GitHub Copilot handled a lot of the implementation work on individual features — the commit history is full of copilot/sub-pr-* branches where Copilot would take a feature branch and flesh out the implementation or fix review issues. Things like the WASM executor test coverage, formatting fixes, and trait implementations were often driven this way.

Claude Code (which I use via the CLI) became more useful for architectural decisions, planning and code review. I used it to set up spec-kit for feature specification, to review pull requests created by Copilot, and for the more complex design work around things like the circuit breaker pattern and the unified network configuration system.
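For context, the circuit breaker pattern mentioned here wraps an unreliable operation (such as dialling a bootstrap node) so that after repeated failures the node stops hammering it and only retries after a cooldown. A minimal std-only sketch of the idea, with made-up names and not the project's actual implementation:

```rust
use std::time::{Duration, Instant};

// The two states this simple breaker moves between.
enum State {
    Closed,        // operation allowed
    Open(Instant), // operation blocked since this instant
}

struct CircuitBreaker {
    state: State,
    failures: u32,
    failure_threshold: u32,  // consecutive failures before opening
    reset_timeout: Duration, // how long to stay open before retrying
}

impl CircuitBreaker {
    fn new(failure_threshold: u32, reset_timeout: Duration) -> Self {
        Self { state: State::Closed, failures: 0, failure_threshold, reset_timeout }
    }

    // Should we attempt the operation right now?
    fn allow(&mut self) -> bool {
        match self.state {
            State::Closed => true,
            State::Open(since) => {
                if since.elapsed() >= self.reset_timeout {
                    // Cooldown elapsed: close again and allow a trial attempt.
                    self.state = State::Closed;
                    self.failures = 0;
                    true
                } else {
                    false
                }
            }
        }
    }

    fn record_success(&mut self) {
        self.failures = 0;
        self.state = State::Closed;
    }

    fn record_failure(&mut self) {
        self.failures += 1;
        if self.failures >= self.failure_threshold {
            self.state = State::Open(Instant::now());
        }
    }
}
```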

The interesting thing is that neither tool is a replacement for understanding the code. I still had to make the architectural calls, debug the tricky connection issues, and decide what features were worth adding. The AI tools helped with the implementation of features I had already decided on, and helped me think through options — but the project direction was always mine.

It has been a genuinely useful experiment in AI-assisted development. The project has reached a level of feature completeness that would have taken me much longer working alone, and in the process I have learned far more about both Rust and libp2p than I would have from any tutorial.

The Code

The project is on GitHub at https://github.com/bhagdave/p2p-play. It needs Rust 1.88+ and uses libp2p 0.56 as its networking backbone. If you are interested in peer-to-peer networking in Rust, or just want to see what AI-assisted development looks like in practice on a side project, have a look.