I've been looking at this for quite some time and have even met team members developing the product. Sorry to say: the two are fundamentally different technologies and philosophies.
NOSTR "accounts" are meant to be trivially generated and used outside the context of micro-blogging. That is the reason it's popular: the npub becomes a signature that validates texts, and there is value in that.
AT always feels like mastodon meets RSS with US-centric political moderation on top.
I wouldn't write ATProto off as just microblogging; there are a bunch of interesting (and exciting, depending on your POV) apps out there that _aren't_ microblogging apps. To name a few:
* https://stream.place
* https://tangled.org
* https://www.germnetwork.com/
* https://slices.network/
* https://smokesignal.events/
* https://www.graze.social/
I'll check them later. Thank you for the list.
> US-centric political moderation on top.
This is something you opt in to. There are two concepts: labels and moderation policy.
You subscribe to "labelers," which apply labels to posts. You can subscribe to many labelers. Some will be generic, others focused on a certain idea or niche: one might focus on NSFW content, another on human- vs. AI-generated content, another might just tag spiders. Labels can be anything, and they are standalone data objects in the atproto ecosystem.
Your moderation policy is up to you: for each label applied by your labelers, you decide whether to allow, warn, or block. Warn shows a content warning you must click through before seeing the post.
Bsky does ship a default labeler and default moderation settings when you sign up, which may be what you're experiencing.
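To make the labels-vs-policy split concrete, here's a minimal sketch of how a client could resolve labels into an action. The label names and the policy mapping are made-up examples, not Bluesky's actual defaults:

```python
# Your personal moderation policy: one action per label.
# Labels come from the labelers you subscribe to; the policy is yours.
POLICY = {
    "nsfw": "warn",    # show a click-through content warning
    "spam": "block",   # hide the post entirely
    "spider": "allow", # show as normal
}

def action_for(post_labels, policy, default="allow"):
    """Return the strictest action any applied label demands."""
    severity = {"allow": 0, "warn": 1, "block": 2}
    actions = [policy.get(label, default) for label in post_labels]
    return max(actions, key=severity.__getitem__, default=default)
```

If two labelers disagree, this sketch picks the stricter outcome, e.g. a post labeled both "nsfw" and "spam" is blocked, not merely warned.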
I'm building a Q&A/community on top of Nostr and using those same concepts:
* The original author posts a kind:1 note with a question.
* A bot sends a kind:1985 note (NIP-32, https://github.com/nostr-protocol/nips/blob/master/32.md) that labels the content.
* Labeling can be done by the author (self-label), by an app, or by third parties (moderators/curators), depending on the trust model.
* Other clients can decide whether to use that classification/label.
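The labeling step above boils down to building a kind:1985 event per NIP-32. Here's a sketch; the namespace `app.example.qna` and the label value are placeholders for illustration, and signing (the `id`/`sig` fields) is omitted:

```python
def make_label_event(labeler_pubkey, target_event_id, created_at):
    """Build an unsigned NIP-32 label event for a kind:1 note."""
    return {
        "kind": 1985,
        "pubkey": labeler_pubkey,
        "created_at": created_at,
        "content": "",  # optional free-text reason for the label
        "tags": [
            ["L", "app.example.qna"],              # label namespace
            ["l", "question", "app.example.qna"],  # label value + its namespace
            ["e", target_event_id],                # the kind:1 note being labeled
        ],
    }
```

Because the label is its own event signed by the labeler's key, clients can filter by which labelers' pubkeys they trust, which is exactly where the trust model lives.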
--
For moderation purposes: if the behavior is closer to abuse (spam, scams, harassment...), use NIP-56 (Reporting) to flag harmful or should-be-moderated content.
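A NIP-56 report is a kind:1984 event that tags the offending event and its author, with the report type (e.g. "spam") in the tags. A sketch, again with placeholder values and signing omitted:

```python
def make_report_event(reporter_pubkey, offender_pubkey, event_id,
                      report_type, created_at, reason=""):
    """Build an unsigned NIP-56 report event flagging another event."""
    return {
        "kind": 1984,
        "pubkey": reporter_pubkey,
        "created_at": created_at,
        "content": reason,  # optional human-readable explanation
        "tags": [
            ["e", event_id, report_type],         # the offending event
            ["p", offender_pubkey, report_type],  # its author
        ],
    }
```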
Thank you for explaining how it works. I'm building a decentralized platform, and NOSTR was the first choice as the base for signing messages and identities. There is a will to include other protocols (even IRC is supported as an entry method), but whenever I approach AT there are always obstacles.
I'll put it on the list for a deeper review.