# Moderation Guidelines

Policies and procedures for moderating content and user behavior on Emberly: this document outlines moderation policies, decision frameworks, and enforcement procedures.
## Moderation Philosophy

Moderation should be fair, consistent, and proportionate:
- Protect the community from harmful content
- Respect user privacy and freedom
- Apply rules consistently across all users
- Provide clear communication when taking action
- Allow appeals and corrections
- Document all decisions
## Prohibited Content

### Absolute Prohibitions (Immediate Removal)
Content that is automatically prohibited and removed on sight:
- **Malware & Exploits**
  - Executable files flagged by antivirus
  - Exploit kits, weaponized PDFs
  - Phishing kits, credential stealers
  - Action: Remove immediately, no warning
  - User notification: Standard ("file was malware")

- **Child Exploitation**
  - Any CSAM (child sexual abuse material)
  - Any sexualization of minors
  - Action: Remove, ban user permanently, report to NCMEC
  - User notification: None (direct law enforcement involvement)

- **Terrorist Content**
  - Bomb-making instructions
  - Manifestos tied to imminent violence
  - Recruitment for terrorist organizations
  - Action: Remove, ban, report to law enforcement
  - User notification: None

- **Stolen Financial Information**
  - Credit card dumps
  - Bank login databases
  - PayPal/Venmo credentials
  - Action: Remove, ban, investigate user
  - User notification: "We removed stolen financial data"
### Policy Violations (Conditional Enforcement)
Content that violates policies but may have legitimate context:
- **Copyright Infringement**
  - Published copyrighted works (books, movies, etc.)
  - Software license cracks/keygens
  - Assessment: Check for fair use (review, commentary, education)
  - Action: Legitimate use → keep; infringing → DMCA takedown
  - User notification: Copyright claim explanation

- **Spam**
  - Identical files uploaded repeatedly
  - Links to scams or malware
  - Commercial promotion in public spaces
  - Investigation: Analyze the user's behavior pattern
  - Action: Warning first → temporary suspension → permanent ban
  - Escalation: If it is a coordinated spam ring, involve law enforcement

- **Hate Speech**
  - Slurs or dehumanizing language targeting protected groups
  - Assessment: Context (quoting for discussion vs. endorsement)
  - Action: First offense → warning; repeat → temporary ban
  - Appeals: Available via support email

- **Harassment & Threats**
  - Threats of violence against specific individuals
  - Sustained harassment campaigns
  - Doxxing (publishing private information)
  - Assessment: Severity, targeting, pattern
  - Action: Warn → ban from the service if severe
  - User protection: Offer to restrict profile visibility
## Ban Severity Levels

| Level | Duration | When to Use | Appeal |
|---|---|---|---|
| Warning | None | First minor violation | N/A |
| Temporary | 7 days | Pattern of behavior | After suspension |
| Extended | 30 days | Serious violation | After suspension |
| Permanent | Indefinite | Extreme/repeated violations | Email after 6 months |
## Moderation Process
### Step 1: Report Received

When a report comes in:

- Log into the admin dashboard
- Go to the Reports section
- Filter by status: `open`
- Choose the oldest report first (FIFO)
- Click to open the full report

Save the report ID for tracking.
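If you prefer to pull the queue programmatically rather than through the dashboard, here is a minimal sketch. It assumes a hypothetical `GET /admin/reports` endpoint with `status` and `sort` query parameters; the base URL, route, and response fields are illustrative assumptions, not the documented API (see the Staff Admin API Reference).

```python
import requests

API_BASE = "https://emberly.example/api"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer <staff-token>"}

# Fetch open reports, oldest first (FIFO).
resp = requests.get(
    f"{API_BASE}/admin/reports",
    params={"status": "open", "sort": "created_at", "order": "asc"},
    headers=HEADERS,
)
resp.raise_for_status()
reports = resp.json()

if reports:
    oldest = reports[0]
    # Save the report ID for tracking.
    print(f"Working report {oldest['id']}: {oldest['reason']}")
```
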
### Step 2: Initial Assessment

For each report, determine:

- **What is reported?**
  - File content
  - User behavior
  - Account activity
- **What is the violation?**
  - Which policy is broken?
  - How severe?
- **Is there ambiguity?**
  - If yes, reach out to your moderator supervisor
  - If no, proceed
- **What action is appropriate?**
  - Warning
  - Temporary ban
  - Permanent ban
  - Content removal
  - Investigation
### Step 3: Take Action

Content removal, user warnings, and user bans each map to an admin API call; a combined sketch covering all three follows.
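This sketch assumes hypothetical `/admin/files/{id}`, `/admin/users/{id}/warn`, and `/admin/users/{id}/ban` routes; the endpoint names, IDs, and payload fields are illustrative assumptions, not the documented API.

```python
import requests

API_BASE = "https://emberly.example/api"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer <staff-token>"}

# Content removal: delete the reported file.
requests.delete(f"{API_BASE}/admin/files/FILE_ID", headers=HEADERS)

# User warning: record the warning and the policy it cites.
requests.post(
    f"{API_BASE}/admin/users/USER_ID/warn",
    json={"policy": "spam", "report_id": "REPORT_ID", "note": "First offense"},
    headers=HEADERS,
)

# User ban: a temporary 7-day suspension tied to the report.
requests.post(
    f"{API_BASE}/admin/users/USER_ID/ban",
    json={"duration_days": 7, "report_id": "REPORT_ID", "reason": "Repeat spam"},
    headers=HEADERS,
)
```
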
### Step 4: Document Decision
Always document:
- What was the report?
- What did we find?
- What action did we take?
- Why did we take that action?
Update report status:
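For example, using the same hypothetical admin API as in the sketches above (the route and fields are assumptions):

```python
import requests

API_BASE = "https://emberly.example/api"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer <staff-token>"}

# Close the report with the decision and reasoning recorded.
requests.patch(
    f"{API_BASE}/admin/reports/REPORT_ID",
    json={
        "status": "resolved",
        "action": "warning",
        "notes": "First spam offense; user warned per spam policy.",
    },
    headers=HEADERS,
)
```
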
### Step 5: Communicate with User

When taking action, always notify the user unless it is a P1 offense (CSAM, malware, etc.).
Email Template - Warning:
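An illustrative draft (assumed wording; substitute the team's approved template if one exists):

```
Subject: Warning: Emberly policy violation

Hi [username],

We removed [content] from your account because it violates our
[policy name] policy. This is a warning; no other action has been
taken on your account. Repeated violations may lead to suspension.

If you believe this was a mistake, reply to this email to appeal.

The Emberly Moderation Team
```
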
Email Template - Ban:
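Likewise an illustrative draft (assumed wording):

```
Subject: Your Emberly account has been suspended

Hi [username],

Your account has been suspended for [duration] due to a violation of
our [policy name] policy: [brief description].

You may appeal this decision by emailing support with [APPEAL] in the
subject line.

The Emberly Moderation Team
```
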
## Appeal Process

Users can appeal any moderation decision except CSAM/malware/terrorism removals.

### Appeal Workflow

- User sends an appeal email to support
- Tag it as `[APPEAL]` in the subject line
- Assign it to an appeals specialist (usually a manager)
- The appeals specialist:
  - Reviews the original evidence
  - Considers the appeal's merit
  - Checks for policy misapplication
  - Makes a decision (uphold, modify, reverse)
- Send the response to the user

Timeline: Respond within 7 days.
## Escalation

When in doubt, escalate to your supervisor:

- Ambiguous case? → Escalate
- Multiple violations? → Escalate
- Permanent ban? → Requires approval from a manager
- Security/legal issue? → Escalate to the compliance team
Escalation email:
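An illustrative sample (the report, user, and case details below are hypothetical):

```
To: <your supervisor>
Subject: [ESCALATION] Report REPORT_ID - ambiguous copyright case

Report: REPORT_ID
User: USER_ID
Summary: Full-length film upload with a commentary track; fair-use
claim is unclear.
Evidence: links to the file, the report, and the user's history
Recommended action: DMCA takedown, no ban
Question: Does the commentary track change the fair-use assessment?
```
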
## Common Scenarios

### Scenario 1: Copyrighted Film

Report content: Copyrighted movie file uploaded publicly.

Investigation:

- Is it for sale or commercial use? (Infringing)
- Is it an excerpt for educational discussion? (Possibly fair use)
- Is there a copyright notice? (Supports infringement)

Decision: Check file size and details:

- Full movie (3 GB) → infringing; remove
- Clip (100 MB) with discussion → examine context; likely keep
- Promotional trailer → likely fair use; keep

Action: If removing, send a DMCA notice to the user. If keeping, document the reason.
### Scenario 2: Political Content

Report content: File with an "offensive political viewpoint".

Investigation:

- Does it violate policy? (Policy bans illegal content, not opinions)
- Is it hate speech? (Depends on the language used)
- Is the reporter just disagreeing with the politics? (Not a violation)

Decision: Political speech is protected unless it meets the hate speech threshold.

Action: Likely no action. Respond to the reporter: "This doesn't violate our policy."
### Scenario 3: Suspicious Account

Report content: "User is violating TOS"

Investigation:

- What is the actual violation?
- Is there evidence?
- Is there a pattern of behavior?

Decision: We must see a specific violation, not just suspicious behavior.

Action: If there is no clear violation, close the report. If a pattern is detected, monitor the account and investigate.
## Tools You'll Use

### Admin Dashboard
- Reports page — See all reports, status, evidence
- Users page — Search users, view history, take action
- Audit logs — See all moderation actions taken
- Flagged files — Review files marked suspicious
### API Endpoints
Primary API calls you'll make:
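The authoritative routes live in the reference linked below; as a hypothetical sketch of the calls this workflow touches (names and paths are assumptions):

```
GET    /admin/reports            list and filter reports
PATCH  /admin/reports/{id}       update report status and decision
DELETE /admin/files/{id}         remove content
POST   /admin/users/{id}/warn    issue a warning
POST   /admin/users/{id}/ban     suspend or ban a user
```
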
See Staff Admin API Reference for full documentation.
## Metrics & Reporting

### Weekly Metrics

Track and report:

- Reports received: `[N]`
- Reports resolved: `[N]`
- Average resolution time: `[X hours]`
- User bans: `[N]`
- Appeals: `[N]` (and outcomes)
### Monthly Report

Submit to the ops lead:
- Summary of moderation activity
- Policy changes or clarifications needed
- Escalations or edge cases
- Team capacity/improvements
## Special Cases

### Selfie with Visible ID (Self-Doxxing)

A user uploads a photo with an ID visible (home address, full name, etc. readable).

Action: Remove immediately and warn the user: "Don't share personal documents publicly."
### TikTok/Instagram Downloads

A user is sharing content from other platforms.

Assessment:

- Is it modified or re-uploaded without permission? (Stolen content)
- Is it only a link? (Not infringing)
- Is it their own content being re-shared? (OK)

Action: If stolen → remove; if a link or their own content → OK.
### Bot Account Farming

Multiple accounts created by the same person, spamming links.

Investigation:

- IP address check
- Account creation pattern
- Behavior similarity
- File upload pattern

Action: Ban all linked accounts; consider an IP ban if it recurs. A toy detection sketch follows.
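A toy sketch of the linkage check, assuming account records exported with `id`, `ip`, and `created_at` fields; the field names and the one-hour threshold are illustrative assumptions:

```python
from collections import defaultdict
from datetime import datetime

# Toy records; in practice, export these from the Users page or admin API.
accounts = [
    {"id": "u1", "ip": "203.0.113.7", "created_at": "2024-05-01T10:00:00"},
    {"id": "u2", "ip": "203.0.113.7", "created_at": "2024-05-01T10:03:00"},
    {"id": "u3", "ip": "198.51.100.2", "created_at": "2024-04-20T08:00:00"},
]

# Group account creation times by IP address.
by_ip = defaultdict(list)
for acct in accounts:
    by_ip[acct["ip"]].append(datetime.fromisoformat(acct["created_at"]))

# Flag IPs where several accounts were created within a short window.
for ip, times in by_ip.items():
    times.sort()
    window = (times[-1] - times[0]).total_seconds()
    if len(times) >= 2 and window < 3600:  # 2+ accounts within an hour
        print(f"Possible farming from {ip}: {len(times)} accounts in {window:.0f}s")
```
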
## FAQs About Moderation

**Q: Can I ban someone for disagreeing with me?**
A: No. We don't ban people for opinions, only for policy violations.

**Q: What if someone appeals and they're right?**
A: Reverse the action, apologize sincerely, and document the error.

**Q: Can I warn someone twice and then ban?**
A: Use progressive discipline, but the severity of the violation matters; a single serious violation can warrant a permanent ban.

**Q: What if the report is fake?**
A: Close it and document it. If the same person submits false reports repeatedly, note the pattern.

**Q: Am I allowed to ban myself to test the system?**
A: No. Use test accounts instead.
## Support

Questions about moderation?

- Ask in Slack: `#emberly-moderation`
- Email your supervisor
- Review past decisions for precedent
- Check the internal wiki for new policy updates