
Severity Levels

EXACT vs FUZZY profanity classification with minimum severity filtering


An advanced profanity classification system that categorizes matches as EXACT (direct dictionary matches) or FUZZY (approximate or obfuscated matches), with minimum-severity threshold filtering for granular control over detection sensitivity.

Severity levels enable sophisticated filtering strategies: flag only EXACT matches in strict environments, or include FUZZY matches to catch obfuscated profanity in high-risk scenarios.

SeverityLevel Enumeration

Level | Value | Description
EXACT | 1 | Standard profanity detection: 'damn', 'shit', 'fuck'
FUZZY | 2 | Obfuscated patterns: 'd*mn', 'sh1t', 'f*ck'

Cross-Language Implementation

Both JavaScript and Python use identical severity enumeration values:

TypeScript
export enum SeverityLevel {
  EXACT = 1,  // Direct dictionary matches
  FUZZY = 2,  // Fuzzy/obfuscated matches
}

Python
from enum import IntEnum

class SeverityLevel(IntEnum):
    EXACT = 1  # Direct dictionary matches
    FUZZY = 2  # Fuzzy/obfuscated matches

Severity Classification Algorithm

The system evaluates each profanity match and assigns a severity level based on the detection method used: a direct dictionary hit is classified as EXACT, while a match found only through normalization or fuzzy comparison is classified as FUZZY, as sketched below.
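
The library's internal dispatch is not shown on this page, so the following is a minimal sketch of the decision rather than the actual source; `dictionary` and `fuzzyMatch` are hypothetical stand-ins for the real word list and obfuscation-aware comparison:

Classification Decision (sketch)
import { SeverityLevel } from 'glin-profanity';

// Hypothetical sketch: `dictionary` is a Set of known words and
// `fuzzyMatch(token, word)` handles obfuscation ('sh1t', 'f*ck', ...).
function classifySeverity(token, dictionary, fuzzyMatch) {
  const normalized = token.toLowerCase();

  // A direct dictionary hit is an EXACT match (severity 1).
  if (dictionary.has(normalized)) return SeverityLevel.EXACT;

  // A hit found only via fuzzy comparison is a FUZZY match (severity 2).
  for (const word of dictionary) {
    if (fuzzyMatch(normalized, word)) return SeverityLevel.FUZZY;
  }

  return null; // Not profanity at any severity
}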

Minimum Severity Filtering

Filter profanity results by a minimum severity threshold using specialized methods that return only the matches meeting the severity criteria. Because the enumeration values are ordered, a match is included when its severity value is greater than or equal to the threshold: SeverityLevel.EXACT returns every match, while SeverityLevel.FUZZY returns only obfuscated matches.
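
Since the enumeration values are ordered integers, the filter predicate is a simple threshold comparison. The helper below is illustrative rather than part of the library's API; it assumes a severityMap shaped like the results in the examples that follow:

Threshold Semantics (sketch)
import { SeverityLevel } from 'glin-profanity';

// Keeps a word when its severity value >= the requested minimum.
function filterBySeverity(severityMap, minSeverity) {
  return Object.entries(severityMap)
    .filter(([, severity]) => severity >= minSeverity)
    .map(([word]) => word);
}

filterBySeverity({ hell: 1, 'f*ck': 2 }, SeverityLevel.EXACT);  // ["hell", "f*ck"]
filterBySeverity({ hell: 1, 'f*ck': 2 }, SeverityLevel.FUZZY);  // ["f*ck"]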

JavaScript Implementation

JavaScript Minimum Severity Filtering
import { Filter, SeverityLevel } from 'glin-profanity';

const filter = new Filter({
  severityLevels: true,
  allowObfuscatedMatch: true
});

const text = "What the hell is this f*ck?";

// Only include FUZZY matches (obfuscated patterns)
const fuzzyOnly = filter.checkProfanityWithMinSeverity(text, SeverityLevel.FUZZY);
console.log(fuzzyOnly.filteredWords);  // ["f*ck"] - only obfuscated word
console.log(fuzzyOnly.result.severityMap);  // { "f*ck": 2 }

// Include all matches (EXACT + FUZZY)  
const allMatches = filter.checkProfanityWithMinSeverity(text, SeverityLevel.EXACT);
console.log(allMatches.filteredWords);  // ["hell", "f*ck"] - both words
console.log(allMatches.result.severityMap);  // { "hell": 1, "f*ck": 2 }
Advanced Severity-Based Filtering
import { Filter, SeverityLevel } from 'glin-profanity';

// Content moderation with severity-based actions
class SeverityModerator {
  constructor() {
    this.filter = new Filter({
      severityLevels: true,
      allowObfuscatedMatch: true,
      enableContextAware: true,
      confidenceThreshold: 0.7
    });
  }

  moderateContent(text, userLevel = 'standard') {
    // Lower thresholds include MORE matches (severity >= threshold), so
    // SeverityLevel.EXACT retrieves both EXACT and FUZZY matches.
    const policies = {
      strict: SeverityLevel.EXACT,    // See every match; act only on direct profanity
      standard: SeverityLevel.EXACT,  // See every match, including obfuscated profanity
      lenient: SeverityLevel.FUZZY    // See only obfuscated matches
    };

    const minSeverity = policies[userLevel] ?? SeverityLevel.EXACT;
    const result = this.filter.checkProfanityWithMinSeverity(text, minSeverity);
    
    return {
      action: this.determineAction(result, userLevel),
      filteredWords: result.filteredWords,
      severityBreakdown: this.analyzeSeverity(result.result.severityMap),
      recommendation: this.getRecommendation(result, userLevel)
    };
  }

  analyzeSeverity(severityMap) {
    const breakdown = { exact: [], fuzzy: [] };
    
    for (const [word, severity] of Object.entries(severityMap || {})) {
      if (severity === SeverityLevel.EXACT) {
        breakdown.exact.push(word);
      } else if (severity === SeverityLevel.FUZZY) {
        breakdown.fuzzy.push(word);
      }
    }
    
    return breakdown;
  }

  determineAction(result, userLevel) {
    if (!result.result.containsProfanity) return 'approve';
    
    const { exact, fuzzy } = this.analyzeSeverity(result.result.severityMap);
    
    if (userLevel === 'strict' && exact.length > 0) return 'reject';
    if (userLevel === 'standard' && (exact.length > 0 || fuzzy.length > 2)) return 'flag';
    if (userLevel === 'lenient' && fuzzy.length > 2) return 'warn';
    
    return 'approve';
  }

  getRecommendation(result, userLevel) {
    // Human-readable summary of the action determined above.
    const messages = {
      approve: 'No action required',
      reject: 'Reject the content (strict policy)',
      flag: 'Queue the content for human review',
      warn: 'Warn the user but allow the content'
    };
    return messages[this.determineAction(result, userLevel)];
  }
}

// Usage example
const moderator = new SeverityModerator();
const testCases = [
  'This movie is fucking brilliant!',    // Context-aware: likely positive
  'You damn sh1t f*cking idiot!',        // High severity: multiple FUZZY
  'What the hell is wrong?',             // Low severity: single EXACT
  'This d@mn thing is broken'            // Medium severity: single FUZZY
];

testCases.forEach(text => {
  const result = moderator.moderateContent(text, 'standard');
  console.log(`Text: "${text}"`);
  console.log(`Action: ${result.action}`);
  console.log(`EXACT: ${result.severityBreakdown.exact.join(', ')}`);
  console.log(`FUZZY: ${result.severityBreakdown.fuzzy.join(', ')}`);
  console.log('---');
});

Python Implementation

Python Minimum Severity Filtering
from glin_profanity import Filter, SeverityLevel

filter_instance = Filter({
    "severity_levels": True,
    "allow_obfuscated_match": True
})

text = "What the hell is this f*ck?"

# Only include FUZZY matches (obfuscated patterns)
fuzzy_only = filter_instance.check_profanity_with_min_severity(text, SeverityLevel.FUZZY)
print(fuzzy_only["filtered_words"])     # ["f*ck"] - only obfuscated word
print(fuzzy_only["result"]["severity_map"])  # {"f*ck": 2}

# Include all matches (EXACT + FUZZY)
all_matches = filter_instance.check_profanity_with_min_severity(text, SeverityLevel.EXACT)  
print(all_matches["filtered_words"])    # ["hell", "f*ck"] - both words
print(all_matches["result"]["severity_map"])  # {"hell": 1, "f*ck": 2}
Cross-Language Parity Testing
from glin_profanity import Filter, SeverityLevel

def test_severity_parity():
    """Verify identical behavior between JavaScript and Python implementations."""
    
    py_filter = Filter({
        "severity_levels": True,
        "allow_obfuscated_match": True,
        "fuzzy_tolerance_level": 0.8
    })
    
    # Test cases from cross-language parity tests
    test_cases = [
        "This damn text is very annoying",
        "What the hell is this word?",
        "Testing severity levels works"
    ]
    
    for text in test_cases:
        # Test complete results
        full_result = py_filter.check_profanity(text)
        print(f'Text: "{text}"')
        print(f'Full severity map: {full_result.get("severity_map", {})}')
        
        # Test EXACT+ filtering (includes EXACT and FUZZY)
        exact_plus = py_filter.check_profanity_with_min_severity(text, SeverityLevel.EXACT)
        print(f'EXACT+ filtered: {exact_plus["filtered_words"]}')
        
        # Test FUZZY-only filtering  
        fuzzy_only = py_filter.check_profanity_with_min_severity(text, SeverityLevel.FUZZY)
        print(f'FUZZY-only filtered: {fuzzy_only["filtered_words"]}')
        print('---')

test_severity_parity()

Severity-Based Configuration Strategies

Content Policy Examples
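
Concrete policies pair a Filter configuration with a severity threshold. The presets below are an illustrative sketch assembled from options used elsewhere on this page, not values prescribed by the library:

Example Content Policies
import { Filter, SeverityLevel } from 'glin-profanity';

// Illustrative presets; tune thresholds and tolerance to your own rules.
const contentPolicies = {
  // Family-friendly community: catch everything, including obfuscation.
  familyFriendly: {
    filter: new Filter({ severityLevels: true, allowObfuscatedMatch: true, fuzzyToleranceLevel: 0.6 }),
    minSeverity: SeverityLevel.EXACT   // EXACT and FUZZY both included
  },
  // Adult gaming chat: act only on direct profanity.
  gamingChat: {
    filter: new Filter({ severityLevels: true, allowObfuscatedMatch: false }),
    minSeverity: SeverityLevel.EXACT
  },
  // Obfuscation audit: surface only disguised profanity for review.
  obfuscationAudit: {
    filter: new Filter({ severityLevels: true, allowObfuscatedMatch: true }),
    minSeverity: SeverityLevel.FUZZY   // FUZZY matches only
  }
};

const { filter, minSeverity } = contentPolicies.familyFriendly;
console.log(filter.checkProfanityWithMinSeverity('What the h3ll?', minSeverity).filteredWords);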

Performance Impact by Severity

Processing Overhead

Severity level tracking adds minimal overhead (~5-10%) but enables powerful filtering capabilities.

Performance Comparison
// Performance impact of severity tracking
const basicFilter = new Filter();                    // ~0.1ms per check
const severityFilter = new Filter({                  // ~0.11ms per check  
  severityLevels: true
});
const advancedFilter = new Filter({                  // ~0.15-0.3ms per check
  severityLevels: true,
  allowObfuscatedMatch: true,    // Adds normalization overhead
  fuzzyToleranceLevel: 0.7       // More aggressive = slower
});

// Optimization strategies
const optimizedFilter = new Filter({
  severityLevels: true,
  allowObfuscatedMatch: true,    
  fuzzyToleranceLevel: 0.8,      // Higher tolerance = faster
  languages: ['english'],        // Single language = faster
  wordBoundaries: true           // Exact matching = faster (when possible)
});
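
These figures are rough estimates that vary with hardware, dictionary size, and text length. To measure the overhead in your own environment, a minimal benchmark sketch; it assumes checkProfanity is the JavaScript counterpart of Python's check_profanity shown elsewhere on this page, and the sample text and iteration count are arbitrary:

Benchmark Sketch
import { Filter } from 'glin-profanity';

// Measures average per-check latency for a given filter configuration.
// Assumption: checkProfanity mirrors Python's check_profanity.
function benchmark(filter, text, iterations = 10000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    filter.checkProfanity(text);
  }
  return (performance.now() - start) / iterations;
}

const sample = 'What the hell is this d*mn thing?';
console.log(`basic:    ${benchmark(new Filter(), sample).toFixed(4)} ms`);
console.log(`severity: ${benchmark(new Filter({ severityLevels: true }), sample).toFixed(4)} ms`);
console.log(`fuzzy:    ${benchmark(new Filter({ severityLevels: true, allowObfuscatedMatch: true }), sample).toFixed(4)} ms`);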

Memory Usage

  • Severity Tracking: +~0.1KB per result for severity mappings
  • Match Objects: +~0.2KB per result for detailed match information
  • Filtering Methods: No additional static memory overhead

Total Impact: minimal memory increase, proportional to the number of matches found.

Integration Examples

Content Moderation Pipeline

Production Content Moderation System
import { Filter, SeverityLevel } from 'glin-profanity';

// Placeholder ID generator so the example is self-contained.
const generateId = () => Math.random().toString(36).slice(2, 10);

class ContentModerationPipeline {
  constructor(environment = 'balanced') {
    this.environments = {
      strict: {
        config: { severityLevels: true, allowObfuscatedMatch: false },
        threshold: SeverityLevel.EXACT,
        actions: { exact: 'block', fuzzy: 'approve' }
      },
      balanced: {
        config: { severityLevels: true, allowObfuscatedMatch: true, enableContextAware: true },
        threshold: SeverityLevel.EXACT, 
        actions: { exact: 'flag', fuzzy: 'flag' }
      },
      aggressive: {
        config: { severityLevels: true, allowObfuscatedMatch: true, fuzzyToleranceLevel: 0.6 },
        threshold: SeverityLevel.EXACT,
        actions: { exact: 'block', fuzzy: 'block' }
      }
    };
    
    const env = this.environments[environment];
    this.filter = new Filter(env.config);
    this.threshold = env.threshold;
    this.actions = env.actions;
  }
  
  async moderateContent(content, userId, contentType) {
    const result = this.filter.checkProfanityWithMinSeverity(content, this.threshold);
    const analysis = this.analyzeSeverityImpact(result);
    
    return {
      contentId: generateId(),
      userId: userId,
      contentType: contentType,
      moderationResult: {
        blocked: analysis.shouldBlock,
        flagged: analysis.shouldFlag,
        severity: analysis.overallSeverity,
        matches: analysis.severityBreakdown,
        confidence: result.result.contextScore,
        action: analysis.recommendedAction,
        reason: analysis.reason
      },
      filteredContent: analysis.shouldBlock ? '[Content Blocked]' : content,
      appealable: analysis.appealable
    };
  }
  
  analyzeSeverityImpact(result) {
    const severityMap = result.result.severityMap || {};
    const breakdown = { exact: 0, fuzzy: 0 };
    
    Object.values(severityMap).forEach(severity => {
      if (severity === SeverityLevel.EXACT) breakdown.exact++;
      else if (severity === SeverityLevel.FUZZY) breakdown.fuzzy++;
    });
    
    const overallSeverity = breakdown.exact * 3 + breakdown.fuzzy * 1; // Weighted severity
    const shouldBlock = this.shouldBlock(breakdown, result.result.contextScore);
    const shouldFlag = !shouldBlock && (breakdown.exact > 0 || breakdown.fuzzy > 1);
    
    return {
      shouldBlock,
      shouldFlag,
      overallSeverity,
      severityBreakdown: breakdown,
      recommendedAction: shouldBlock ? 'block' : shouldFlag ? 'flag' : 'approve',
      reason: this.generateReason(breakdown, result.result.contextScore),
      appealable: shouldBlock && result.result.contextScore && result.result.contextScore > 0.5
    };
  }

  // Minimal implementations for the helpers referenced above.
  shouldBlock(breakdown, contextScore) {
    // A high context score suggests benign usage (e.g. emphatic praise).
    if (typeof contextScore === 'number' && contextScore > 0.8) return false;
    if (breakdown.exact > 0 && this.actions.exact === 'block') return true;
    if (breakdown.fuzzy > 0 && this.actions.fuzzy === 'block') return true;
    return false;
  }

  generateReason(breakdown, contextScore) {
    const parts = [];
    if (breakdown.exact > 0) parts.push(`${breakdown.exact} exact match(es)`);
    if (breakdown.fuzzy > 0) parts.push(`${breakdown.fuzzy} obfuscated match(es)`);
    if (parts.length === 0) return 'No profanity detected';
    const context = typeof contextScore === 'number' ? ` (context score ${contextScore.toFixed(2)})` : '';
    return `Detected ${parts.join(' and ')}${context}`;
  }
}

// Usage in different environments
const environments = ['strict', 'balanced', 'aggressive'];
const testContent = [
  'This movie is fucking brilliant!',
  'You damn idiot piece of shit!', 
  'What the h3ll is this $h1t?',
  'Holy cow this is amazing'
];

// for...of with await keeps output grouped per environment
// (forEach would fire the async callbacks without awaiting them).
for (const env of environments) {
  const moderator = new ContentModerationPipeline(env);
  console.log(`\n=== ${env.toUpperCase()} ENVIRONMENT ===`);

  for (const content of testContent) {
    const result = await moderator.moderateContent(content, 'user123', 'comment');
    console.log(`Content: "${content}"`);
    console.log(`Action: ${result.moderationResult.action}`);
    console.log(`Severity: ${result.moderationResult.severity}`);
    console.log(`Reason: ${result.moderationResult.reason}`);
    console.log('---');
  }
}

Cross-References