About Plymorph

Plymorph is a decentralized intelligence engine that transforms Fediverse discussions into refined, high-level reports. We follow active communities across multiple Lemmy instances, cluster related discussions by topic using vector embeddings, and run a three-stage AI synthesis pipeline to produce executive summaries with fact-checking.

9 instances · 100 communities · 267 reports · 11 topics

How it works

1. Ingestion — Every 15 minutes, we poll Lemmy instances for new posts and comments from the communities we follow. Posts are deduplicated by their ActivityPub ID so we never store the same content twice. We also fetch trending older posts via an activity-based pass so popular discussions aren't missed.
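The deduplication step can be sketched as a simple set-membership check. This is an illustrative sketch, not the actual Plymorph code: the `ap_id` field name matches what the Lemmy API returns for a post's ActivityPub ID, but the function and variable names here are assumptions.

```python
# Hypothetical sketch of the dedup pass: keep only posts whose ActivityPub
# ID ("ap_id" in Lemmy API responses) has not been stored before.

def dedupe_posts(posts, seen_ap_ids):
    """Return posts with unseen ActivityPub IDs, updating seen_ap_ids."""
    fresh = []
    for post in posts:
        ap_id = post["ap_id"]
        if ap_id not in seen_ap_ids:
            seen_ap_ids.add(ap_id)
            fresh.append(post)
    return fresh

batch = [
    {"ap_id": "https://lemmy.world/post/1", "title": "A"},
    {"ap_id": "https://lemmy.ml/post/2", "title": "B"},
    {"ap_id": "https://lemmy.world/post/1", "title": "A (federated copy)"},
]
seen = set()
print([p["title"] for p in dedupe_posts(batch, seen)])  # → ['A', 'B']
```

Because the same thread can federate to several instances, keying on the ActivityPub ID rather than each instance's local post ID is what makes the check work across the whole network.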

2. Processing — New posts are converted into vector embeddings by a locally run embedding model. Similar posts are then clustered together via cosine-similarity search, and each cluster is scored on comment depth, upvote velocity, cross-instance coverage, and engagement volume.
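A minimal, dependency-free sketch of cosine-similarity clustering looks like the following. It uses a greedy threshold rule (assign each embedding to the first cluster whose seed vector is similar enough, else start a new cluster); the real system presumably uses a proper vector index, and the 0.85 threshold is an illustrative assumption.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster(embeddings, threshold=0.85):
    """Greedy clustering: join the first cluster whose seed vector is
    within the similarity threshold, otherwise start a new cluster."""
    clusters = []  # list of (seed_vector, member_indices)
    for i, vec in enumerate(embeddings):
        for seed, members in clusters:
            if cosine(vec, seed) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]

vecs = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
print(cluster(vecs))  # → [[0, 1], [2]]
```

Two near-parallel vectors land in one cluster; the orthogonal one starts its own.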

3. Synthesis — Clusters that score above the quality threshold enter a three-stage AI pipeline:

  • The Analyst reads all posts and comments in the cluster and identifies consensus, controversy, and outlier insights
  • The Fact-Checker reviews the analysis and flags verifiable claims as Verified, Unverified, or Disputed
  • The Writer produces a three-paragraph executive summary with an assigned topic category
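The three stages above chain naturally, with each stage's output feeding the next. Here is a hedged orchestration sketch: the `call_model` callable stands in for whatever LLM backend is used, and the prompt wording and function names are illustrative, not the actual Plymorph prompts.

```python
# Illustrative three-stage pipeline: Analyst -> Fact-Checker -> Writer.
# call_model is any function mapping a prompt string to a response string.

def run_pipeline(cluster_text, call_model):
    analysis = call_model(
        "You are the Analyst. Identify consensus, controversy, and outlier "
        "insights in these discussions:\n" + cluster_text)
    fact_check = call_model(
        "You are the Fact-Checker. Flag each verifiable claim as Verified, "
        "Unverified, or Disputed:\n" + analysis)
    report = call_model(
        "You are the Writer. Produce a three-paragraph executive summary "
        "with a topic category, using:\n" + analysis + "\n" + fact_check)
    return report

# Usage with a stub model that just echoes the first prompt line:
echo = lambda prompt: prompt.splitlines()[0]
print(run_pipeline("post and comment text here", echo))
```

Keeping the stages as separate model calls, rather than one mega-prompt, is what lets the Fact-Checker review the Analyst's output as an independent step.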

4. Dashboard — Reports are published here, browsable by recency or by topic. Each report links back to the original Lemmy discussions so you can dive into the source material.

What makes this different

Most news aggregators show you headlines. We show you what the community actually thinks — the consensus, the debates, and the surprising takes that emerge from hundreds of comments across multiple instances. When the same story is discussed on lemmy.world, lemmy.ml, hexbear.net, and lemmy.ca, we synthesize all those perspectives into one report.

Transparency

All data is sourced from public Lemmy API endpoints. We don't scrape, we don't require authentication, and we don't store any user profiles or PII beyond public author names. Every report shows its source discussions with direct links back to the original threads. Our API requests identify themselves with a clear User-Agent header.
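An unauthenticated request of this kind can be sketched with only the standard library. The `/api/v3/post/list` path and its `community_name`, `limit`, and `sort` parameters match Lemmy's public HTTP API; the User-Agent string shown is an illustrative assumption, not the exact header Plymorph sends.

```python
# Sketch of a polite, unauthenticated call to a public Lemmy endpoint
# with an identifying User-Agent header.
import json
import urllib.request

USER_AGENT = "Plymorph/1.0 (Fediverse report aggregator)"  # assumed value

def build_url(instance, community, limit=20):
    """Construct the public post-list URL for one community."""
    return (f"https://{instance}/api/v3/post/list"
            f"?community_name={community}&limit={limit}&sort=New")

def fetch_posts(instance, community, limit=20):
    """Fetch recent posts from a community, identifying ourselves clearly."""
    req = urllib.request.Request(
        build_url(instance, community, limit),
        headers={"User-Agent": USER_AGENT},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["posts"]
```

Because the endpoint is public and read-only, no token or login is involved; the User-Agent is the only identifying information attached to the request.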