> This is my first large scale project, so I'd love to hear your feedback!
> I have placed restrictions on searching directly by user ID to prevent doxing. I also made the opt out process one click, for those who do not want to be archived.
1) I'd suggest anonymizing the usernames / author IDs into something more privacy-friendly, such as how some image sites generate 3-4 random words as a human-readable unique ID. This removes a lot of the reason people would opt out (i.e. posts being tracked down years later).
2) You don't seem to have clear rate limit documentation. If you are asking people to pay for commercial use, I'd suggest making it clear what the rough default limits are, as well as the rough price range of what you'd offer.
3) Tbh, the only real thing I want from this project is basically narrative / roleplay / writing content for LLM purposes, as I'm trying to build a rules-oriented system that narrates via LLM. If you don't want people using this data for this purpose, I'd suggest making that clear.
Hey,
Thanks for your suggestions.
> 1) I'd suggest anonymizing the usernames / author IDs into something more privacy-friendly, such as how some image sites generate 3-4 random words as a human-readable unique ID. This removes a lot of the reason people would opt out (i.e. posts being tracked down years later).
In the original iteration of Searchcord, it used to work similarly to that. The username was `sha256(userid+guildid)`, truncated to the first 8 characters. Unfortunately, it was pretty hard to follow chats. I will try your idea and see how it works, though.
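For what it's worth, the two approaches can be combined: keep the mapping deterministic (so chats stay followable within a server) but render the hash as words instead of hex. A rough sketch of that idea — the word lists, salt, and function name here are my own placeholders, not anything from Searchcord, and a real deployment would want a much larger curated word list:

```python
# Sketch: deterministic word-based pseudonyms instead of truncated hex.
# Word lists are illustrative placeholders; use a larger curated list in practice.
import hashlib

ADJECTIVES = ["amber", "brisk", "calm", "dusty", "eager", "frosty", "gentle", "humble"]
NOUNS = ["falcon", "harbor", "meadow", "otter", "pebble", "quartz", "river", "spruce"]

def pseudonym(user_id: str, guild_id: str, salt: str = "rotate-me") -> str:
    """Map (user_id, guild_id) to a stable, human-readable alias.

    The same user in the same guild always gets the same alias, so
    conversations remain followable; a secret salt makes offline
    brute-forcing of known user IDs impractical.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}:{guild_id}".encode()).digest()
    # Use separate bytes of the digest to pick each word, plus a 2-digit tag
    # to reduce collisions between users in the same guild.
    adj = ADJECTIVES[digest[0] % len(ADJECTIVES)]
    noun = NOUNS[digest[1] % len(NOUNS)]
    tag = digest[2] % 100
    return f"{adj}-{noun}-{tag:02d}"
```

Because the alias still keys on the guild ID, the same user gets unrelated names in different servers, which preserves the cross-server unlinkability of the original truncated-hash scheme.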
> 2) You don't seem to have clear rate limit documentation.
This is a good idea. The rate limit varies by endpoint, and I haven't gotten around to documenting each one.
> If you are asking people to pay for commercial use, I'd suggest making it clear what the rough default limits are, as well as the rough price range of what you'd offer.
I have absolutely zero idea what industry would be interested in this, in what form, and if anyone would even pay.
> 3) Tbh, the only real thing I want from this project is basically narrative / roleplay / writing content for LLM purposes, as I'm trying to build a rules-oriented system that narrates via LLM. If you don't want people using this data for this purpose, I'd suggest making that clear.
I really don't care what people do with the data, as long as they are not spamming requests or using the data for commercial purposes without permission.
The sheer audacity here is quite something. You're stating people can't use your scraped data for commercial purposes "without permission," while your entire project is built on vacuuming up content from countless users without their permission, and in direct violation of Discord's ToS. That's not just a double standard; it's bordering on next-level cognitive dissonance.
And "privacy preserving"? With a one-click opt-out, that 99.999% of the affected users will never even know exists because they have no idea their conversations are now part of your archive, and you want it indexed by search engines? That's not "privacy preserving" - that's a bad joke. If privacy was a genuine concern, this project wouldn't exist in its current form. What you're offering is an opt-out fig leaf for a mass data harvesting operation.
Most people using Discord, even on "public, discoverable" servers, aren't posting with the expectation that their words will be systematically scraped, archived indefinitely, and made globally searchable outside the platform's context. It's a fundamental misunderstanding (or willful dismissal) of user expectations on what is essentially a semi-public, yet distinctly siloed, platform. This isn't an open-web forum where content is implicitly intended for broad public consumption and indexing.
Look, I get the frustration that (likely) motivated this. Discord has become an information black hole for many communities, and the shift away from open, searchable forums for project support is a genuine problem I've been incredibly frustrated with myself. But this "solution" - creating a massive, non-consensual archive that tramples over user privacy (and platform terms) - creates far graver ethical and practical issues than the one it purports to solve.
> Most people using Discord, even on "public, discoverable" servers, aren't posting with the expectation that their words will be systematically scraped, archived indefinitely, and made globally searchable outside the platform's context
Honestly, maybe they should. Maybe we need more stuff like this, until people finally wake up to the privacy catastrophe. The now-defunct service spy.pet used to sell this kind of data with the stated purpose of doxxing people. There are black markets for this. And it's the same kind of data the service providers themselves have full access to.
> The sheer audacity here is quite something. You're stating people can't use your scraped data for commercial purposes "without permission," while your entire project is built on vacuuming up content from countless users without their permission, and in direct violation of Discord's ToS. That's not just a double standard; it's bordering on next-level cognitive dissonance.
Not really, it is not free to host and serve this data. If they want to get the data for free, they can get it directly from Discord. I did that work for them.
> And "privacy preserving"? With a one-click opt-out, that 99.999% of the affected users will never even know exists because they have no idea their conversations are now part of your archive, and you want it indexed by search engines? That's not "privacy preserving" - that's a bad joke. If privacy was a genuine concern, this project wouldn't exist in its current form. What you're offering is an opt-out fig leaf for a mass data harvesting operation.
Again, not really. It's impossible to search for users without already knowing what server they are in. This is functionally identical to Discord's in-built search feature.
> Most people using Discord, even on "public, discoverable" servers, aren't posting with the expectation that their words will be systematically scraped, archived indefinitely, and made globally searchable outside the platform's context. It's a fundamental misunderstanding (or willful dismissal) of user expectations on what is essentially a semi-public, yet distinctly siloed, platform. This isn't an open-web forum where content is implicitly intended for broad public consumption and indexing.
I believe that people need to realize that their messages were already being logged by many different moderation bots, just not publicized. This also happens on platforms like Telegram; look at SangMata_BOT, for example. Unless the messages are end-to-end encrypted, it was just a matter of time before they were scooped up and archived.
Thanks for your input, though. I really do want to build a platform that balances privacy and usability.
> I believe that people need to realize that their messages were already being logged by many different moderation bots, just not publicized. Unless the messages are end-to-end encrypted, it was just a matter of time before they were scooped up and archived.
And that makes it OK for you to do as well? Bots storing all the messages isn't OK either, but they also don't publish it, so it's far less problematic.
Okay, the "not really" and "I'll solve that problem if and when" responses are... something else. It feels like you're speedrunning how to get into a world of trouble while hand-waving away every legitimate concern. Let's try to unpack this again, because your justifications are frankly baffling.
> Again, not really. It's impossible to search for users without already knowing what server they are in. This is functionally identical to Discord's in-built search feature.
That's not quite correct, and frankly it borders on willful obfuscation. In your own words elsewhere in this thread, you're eager for search engines to index this archive. That "privacy preserving" barrier of needing to know both a user ID and a server/channel ID evaporates the moment Google or any other search engine hoovers up your pages. At that point, any combination of keywords, usernames, aliases, or snippets could reveal someone's posting history, across contexts and years. How is that "functionally identical" to Discord's walled-garden search, or "privacy preserving"?
> I believe that people need to realize that their messages were already being logged by many different moderation bots, just not publicized.
This is a disingenuous deflection.
Your "I really do want to build a platform that balances privacy and usability" line sounds utterly hollow when the entire foundation of the project demonstrates a profound misunderstanding, or disregard, for basic privacy, consent, and intellectual property.Speaking of which... have you actually thought about the legal Pandora's Box you're prying open? Your casual "I'll deal with Discord's ToS issues if they arise" attitude is quaint, because Discord's ToS is likely the tip of a colossal iceberg of legal trouble.
You're not just 'breaking ToS', you're potentially looking at:
Good luck with all of this. I hope you have a good lawyer, ideally multiple. You might need them.
The COPPA part only applies if it was done knowingly.
did you type this?
Ridiculous take. If you're posting in a server that's intentionally open to the public, accessible to anyone with a link, or even indexed by server discovery, you shouldn't expect privacy. That's just the basic reality of the internet.
No, what's "ridiculous" is this simplistic, black-and-white framing that deliberately ignores any nuance, the concept of contextual integrity, or reasonable user expectations.
Of course, no one expects absolute secrecy in a public-facing Discord server. That's a straw man. The issue isn't about some naive belief that messages are invisible. It's about the scope, permanence, and method of access and archiving.
People participating in public Discord spaces have reasonable contextual expectations about how their words will be accessed and by whom. They expect their messages to be seen by current and maybe future server members - not extracted, permanently archived, and made globally searchable by entirely unrelated third parties.
This is similar to how conversations in a public park are technically "public," but most people would be rightfully disturbed if someone recorded everything, transcribed it, published it online with their names attached, and made it all searchable forever. Just because something isn't strictly private doesn't mean any and all forms of collection, republication, and indexing are ethically justified.
If you can't see the distinction between "not perfectly private within this specific semi-public space" and "archived indefinitely, and globally searchable forever by anyone, anywhere, for any reason," then you're either arguing in bad faith or your understanding of these issues is so superficial that further engagement is pointless.
[flagged]
It seems the core concept of contextual integrity is still not landing.
It's not a question of surprise that public data can be scraped - I'm well aware of how the internet functions, thank you. The point, which you seem determined to evade, is about the fundamental ethics of systematically doing so and the vast difference in impact and expectation between, say, a server's own moderation logs or incidental screenshots, and a third party, globally indexed, permanent archive. The former serves limited, often known functions within that specific community; the latter is a privacy-invasive data trawl weaponizing the 'public' label. Just because a thing is technically possible doesn't grant a free pass to ignore privacy implications or users' reasonable expectations of how their contributions will be used and disseminated.
Your attempt to dismantle the 'public park' analogy only underscores your misunderstanding of it. The scenario isn't about someone yelling (an exceptional event, often a public nuisance, that might indeed attract specific attention or recording). It's the equivalent of someone systematically planting listening devices by every park bench, transcribing every casual, low-expectation conversation - like my dinner plans with my girlfriend, or a vent about my boss - and then publishing it all online, forever, simply because the park itself is 'public' and it was a technically possible thing to do. The ethical chasm between observing a public spectacle and conducting mass, indiscriminate surveillance of everyday, semi-private interactions within a public space shouldn't be this difficult to grasp. One involves a specific event; the other is a dragnet.
As for flagging, I didn't touch your comment. I have never flagged a single comment on this site. Perhaps others simply disagreed with the quality, relevance or the dismissive tone of your contribution.
I won't continue a discussion with someone who relies on AI for writing; the response you posted presents the tells of someone using a language model to write it.
[flagged]
> In the original iteration of Searchcord, it used to work similarly to that. The username was `sha256(userid+guildid)`, truncated to the first 8 characters. Unfortunately, it was pretty hard to follow chats. I will try your idea and see how it works, though.
I suggest you do, since tbh you are likely (as others have said) violating privacy laws with your current implementation, plus the Discord ToS. If it's anonymized better, you are less likely to become the target of someone who gets angry about an archive they never knew existed.
Up to you, your life your circus y'know?
> I have absolutely zero idea what industry would be interested in this, in what form, and if anyone would even pay.
LLM training data collection, if it's not already being bought directly via Discord.
Same reason I'd want to use highly anonymized and curated data from the roleplay / writing Discords as training data. It's just that I'd have to go through and anonymize your data and curate / clean it up before I'd dare send it to an LLM, for legal reasons.
If I send/share PII, I'd be screwed just like you will be if someone gets upset.
> I really don't care what people do with the data, as long as they are not spamming requests or using the data for commercial purposes without permission.
Fair. For me, this is for hobby implementations of solo roleplaying content, similar to AI Dungeon and other implementations, so it's not commercial. But my use case (for your purposes) would be better served by just being able to download a database dump for specific servers (properly anonymized, by you or by me), since most of the data you collect is useless to me. I've got a specific goal in mind and want to minimize data collection for legal liability reasons (i.e. non-commercial roleplaying with no PII or other privacy-risky info is likely to be a safe use case).
EDIT:
I'd also consider dropping attachments + links and only recording text, for CSAM and other abusive-material reasons. I doubt you have the moderation in place to protect yourself.
Pictures, videos, and whatnot are a lot more dangerous to you than text would be. (I.e. despite what people say about it, realistically, most text in a public forum on the internet w/o PII is not going to get you hit with fines.)
That said, personally, I would not publish this as you have, because I don't have that kind of risk tolerance, but I can see it being "safe enough" for some people. The images/attachments, though, are in "are you really sure you want to do that? You could go bankrupt" territory.