@hsuite/throttler - Advanced Rate Limiting for NestJS



⚡ Powerful and flexible rate limiting library for NestJS applications with Redis-based distributed storage support

An enterprise-grade rate limiting solution providing robust protection against abuse and ensuring optimal resource utilization, with support for Redis-backed distributed storage, IP-based protection, and comprehensive request tracking.


📚 Table of Contents

  • ✨ Quick Start
  • 🏗️ Architecture
  • 🔧 API Reference
  • 📖 Guides
  • 🎯 Examples
  • 🔗 Integration
  • Best Practices

✨ Quick Start

Installation

npm install @hsuite/throttler

Basic Setup

import { Module } from '@nestjs/common';
import { SecurityThrottlerModule } from '@hsuite/throttler';
import { IThrottler } from '@hsuite/throttler-types';

@Module({
  imports: [
    SecurityThrottlerModule.forRootAsync({
      useFactory: (): IThrottler.IOptions => ({
        enabled: true,
        storage: IThrottler.IStorage.REDIS,
        settings: {
          ttl: 60,    // Time window in seconds
          limit: 100  // Maximum requests per window
        },
        redis: {
          socket: {
            host: 'localhost',
            port: 6379
          }
        }
      })
    })
  ]
})
export class AppModule {}

Protected Routes

import { Controller, Get, UseGuards } from '@nestjs/common';
import { CustomThrottlerGuard } from '@hsuite/throttler';

@Controller()
@UseGuards(CustomThrottlerGuard)
export class AppController {
  @Get()
  public getData() {
    return 'Rate limited endpoint';
  }
}

🏗️ Architecture

Core Component Areas

⚡ Rate Limiting Engine

  • IP-Based Tracking - Track and limit requests based on client IP addresses

  • Time Window Management - Configurable TTL for request counting windows

  • Request Counting - Efficient increment and threshold checking

  • Automatic Blocking - Immediate protection when limits are exceeded

🗄️ Storage Backends

  • Redis Storage - Distributed rate limiting across multiple server instances

  • In-Memory Storage - Local storage for development and single-instance deployments

  • Persistent Tracking - Reliable request count persistence with Redis

  • High Performance - Optimized for high-throughput applications

🛡️ Security Features

  • Abuse Protection - Prevent API abuse and resource exhaustion

  • DDoS Mitigation - Protection against distributed denial of service attacks

  • Response Headers - Informative rate limit headers for client guidance

  • Graceful Degradation - Proper error handling and retry mechanisms

🔧 NestJS Integration

  • Global Guards - Automatic protection for all routes (see the registration sketch after this list)

  • Custom Guards - Flexible guard implementation for specific use cases

  • Decorator Support - Easy route-specific configuration

  • Module Configuration - Comprehensive async configuration support
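
The global-guard pattern mentioned above can be expressed with Nest's standard APP_GUARD token. This is a minimal sketch, assuming CustomThrottlerGuard supports global registration like any regular NestJS guard:

import { Module } from '@nestjs/common';
import { APP_GUARD } from '@nestjs/core';
import { CustomThrottlerGuard } from '@hsuite/throttler';

@Module({
  providers: [
    {
      // Register the throttling guard application-wide
      // (assumes the guard can be registered globally like standard NestJS guards).
      provide: APP_GUARD,
      useClass: CustomThrottlerGuard
    }
  ]
})
export class GuardsModule {}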

Module Structure

src/
├── throttler.module.ts                # Main module with async configuration
├── throttler.service.ts               # Core rate limiting service
├── guards/
│   └── custom-throttler.guard.ts      # IP-based throttling guard
└── index.ts                          # Public API exports

🔧 API Reference

Core Module Types

Configuration Interface

IThrottler.IOptions

  • Purpose: Complete throttler configuration interface

  • Properties: enabled, storage, settings, redis

  • Usage: Module configuration and factory pattern implementation (a minimal example follows)
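
For reference, a minimal configuration object satisfying this interface looks like the following; the values are illustrative and mirror the Quick Start example above:

import { IThrottler } from '@hsuite/throttler-types';

// Minimal illustrative options object; values are examples only.
const throttlerOptions: IThrottler.IOptions = {
  enabled: true,
  storage: IThrottler.IStorage.REDIS,
  settings: {
    ttl: 60,   // time window in seconds
    limit: 100 // maximum requests per window
  },
  redis: {
    socket: { host: 'localhost', port: 6379 }
  }
};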

Storage Selection

| Storage Type | Use Case                     | Performance | Scalability | Complexity |
|--------------|------------------------------|-------------|-------------|------------|
| Redis        | Production, Distributed      | High        | Excellent   | Medium     |
| Default      | Development, Single Instance | Very High   | Limited     | Low        |

Settings Configuration

| Parameter | Type   | Description              | Default | Range   |
|-----------|--------|--------------------------|---------|---------|
| ttl       | number | Time window (seconds)    | 60      | 1-3600  |
| limit     | number | Max requests per window  | 100     | 1-10000 |

Response Headers

The library automatically sets informative headers:

| Header                | Description                      | Example |
|-----------------------|----------------------------------|---------|
| X-RateLimit-Limit     | Maximum requests per window      | 100     |
| X-RateLimit-Remaining | Remaining requests in window     | 47      |
| X-RateLimit-Reset     | Seconds until window reset       | 23      |
| Retry-After           | Wait time when blocked (seconds) | 37      |
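
On the client side these headers can drive back-off logic. A small sketch, assuming blocked requests are returned with HTTP 429 and the headers listed above (the URL is whatever endpoint you are calling):

// Hypothetical client-side back-off based on the rate limit headers.
async function fetchWithBackoff(url: string): Promise<Response> {
  const response = await fetch(url);

  if (response.status === 429) {
    // Wait for the advertised period, then retry once.
    const retryAfter = Number(response.headers.get('Retry-After') ?? '1');
    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
    return fetch(url);
  }

  const remaining = response.headers.get('X-RateLimit-Remaining');
  console.log(`Requests remaining in window: ${remaining}`);
  return response;
}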

Core Services

SecurityThrottlerModule

  • Purpose: Main module providing rate limiting functionality

  • Methods: forRootAsync(), forRoot()

  • Usage: Application module configuration (a forRoot() sketch follows)
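
A minimal sketch of synchronous registration with forRoot(), assuming it accepts the same IThrottler.IOptions shape as forRootAsync():

import { Module } from '@nestjs/common';
import { SecurityThrottlerModule } from '@hsuite/throttler';
import { IThrottler } from '@hsuite/throttler-types';

@Module({
  imports: [
    // Synchronous registration; options assumed to match IThrottler.IOptions.
    SecurityThrottlerModule.forRoot({
      enabled: true,
      storage: IThrottler.IStorage.DEFAULT, // in-memory storage
      settings: { ttl: 60, limit: 100 },
      redis: {}
    })
  ]
})
export class SimpleThrottlerModule {}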

SecurityThrottlerService

  • Purpose: Injectable service for programmatic throttling access

  • Features: Rate limit checking, configuration access

  • Usage: Custom throttling logic implementation

CustomThrottlerGuard

  • Purpose: IP-based rate limiting guard

  • Features: Automatic request tracking, header injection, blocking

  • Usage: Route protection and global application security


📖 Guides

Rate Limiting Setup Guide

A complete guide to setting up rate limiting with different storage backends, covering Redis and in-memory configuration, TTL and limit settings, environment-specific options, and performance optimization for enterprise-scale deployments.
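
A minimal sketch of the environment-specific settings the guide covers; the thresholds are illustrative, not recommendations:

import { IThrottler } from '@hsuite/throttler-types';

// Illustrative per-environment settings; tune ttl/limit to your traffic profile.
const settingsByEnvironment: Record<string, IThrottler.IOptions['settings']> = {
  development: { ttl: 60, limit: 1000 }, // relaxed for local work
  staging:     { ttl: 60, limit: 250 },
  production:  { ttl: 60, limit: 100 }   // strictest in production
};

const settings = settingsByEnvironment[process.env.NODE_ENV || 'development'];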

Redis Configuration Guide

Advanced Redis setup for distributed rate limiting, covering cluster configuration, high availability, performance tuning, security settings, and enterprise-grade Redis deployment for scalable throttling.

Custom Guards Implementation Guide

Learn how to create custom throttling guards for specific use cases, covering guard development, request filtering, user-based rate limiting, tier-based throttling, and advanced rate limiting patterns for enterprise applications.

Production Deployment Guide

Best practices for deploying rate limiting in production, covering scalability considerations, monitoring setup, failover strategies, performance optimization, and production-grade rate limiting architecture.


🎯 Examples

Advanced Module Configuration

import { Injectable, Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { SecurityThrottlerModule } from '@hsuite/throttler';
import { IThrottler } from '@hsuite/throttler-types';

@Injectable()
export class ThrottlerConfigurationService {
  
  // Environment-based configuration
  createProductionConfig(): IThrottler.IOptions {
    return {
      enabled: true,
      storage: IThrottler.IStorage.REDIS,
      settings: {
        ttl: 60,
        limit: 250  // Higher limits for production
      },
      redis: {
        socket: {
          host: process.env.REDIS_HOST || 'redis-cluster.example.com',
          port: parseInt(process.env.REDIS_PORT || '6379')
        },
        password: process.env.REDIS_PASSWORD,
        username: process.env.REDIS_USERNAME || 'default',
        database: parseInt(process.env.REDIS_DATABASE || '0'),
        ttl: parseInt(process.env.REDIS_TTL || '120')
      }
    };
  }

  // Development configuration
  createDevelopmentConfig(): IThrottler.IOptions {
    return {
      enabled: true,
      storage: IThrottler.IStorage.DEFAULT, // In-memory for development
      settings: {
        ttl: 60,
        limit: 1000  // Relaxed limits for development
      },
      redis: {} // Not used with DEFAULT storage
    };
  }

  // Feature-flag based configuration
  createConditionalConfig(features: any): IThrottler.IOptions {
    const baseConfig = this.createProductionConfig();
    
    if (features.aggressiveThrottling) {
      baseConfig.settings.limit = 50;  // Stricter limits
      baseConfig.settings.ttl = 120;   // Longer windows
    }
    
    if (features.highTrafficMode) {
      baseConfig.settings.limit = 1000; // Higher limits
      baseConfig.redis.ttl = 300;       // Longer Redis TTL
    }
    
    return baseConfig;
  }
}

@Module({
  imports: [
    SecurityThrottlerModule.forRootAsync({
      imports: [ConfigModule],
      useFactory: async (
        configService: ConfigService,
        throttlerConfig: ThrottlerConfigurationService
      ): Promise<IThrottler.IOptions> => {
        const environment = configService.get('NODE_ENV', 'development');
        
        switch (environment) {
          case 'production':
            return throttlerConfig.createProductionConfig();
          case 'development':
            return throttlerConfig.createDevelopmentConfig();
          default: {
            const features = await configService.get('FEATURES', {});
            return throttlerConfig.createConditionalConfig(features);
          }
        }
      },
      inject: [ConfigService, ThrottlerConfigurationService]
    })
  ],
  providers: [ThrottlerConfigurationService]
})
export class AppModule {}

Custom Throttling Guards

import { CustomThrottlerGuard } from '@hsuite/throttler';
import { Injectable, ExecutionContext, Controller, Get, UseGuards } from '@nestjs/common';

@Injectable()
export class AdvancedThrottlerGuard extends CustomThrottlerGuard {
  
  async handleCustomRequest(
    context: ExecutionContext,
    limit: number,
    ttl: number
  ): Promise<boolean> {
    try {
      const request = context.switchToHttp().getRequest();
      
      // Custom IP extraction with proxy support
      const clientIP = this.extractClientIP(request);
      
      // User-based throttling for authenticated requests
      if (request.user) {
        return await this.handleAuthenticatedRequest(request, limit, ttl);
      }
      
      // IP-based throttling for anonymous requests
      return await this.handleAnonymousRequest(clientIP, limit, ttl);
    } catch (error) {
      console.error('Throttling error:', error);
      return false; // Block on error for security
    }
  }

  private async handleAuthenticatedRequest(request: any, limit: number, ttl: number): Promise<boolean> {
    const userId = request.user.id;
    const userTier = request.user.tier || 'basic';
    
    // Tier-based limits
    const tierLimits = {
      basic: limit,
      premium: limit * 2,
      enterprise: limit * 5
    };
    
    const adjustedLimit = tierLimits[userTier] || limit;
    
    // Track by user ID instead of IP
    const key = `user:${userId}`;
    return await this.trackRequest(key, adjustedLimit, ttl);
  }

  private async handleAnonymousRequest(clientIP: string, limit: number, ttl: number): Promise<boolean> {
    // More aggressive limits for anonymous users
    const anonLimit = Math.floor(limit * 0.5);
    const key = `ip:${clientIP}`;
    
    return await this.trackRequest(key, anonLimit, ttl);
  }

  private extractClientIP(request: any): string {
    // Handle various proxy configurations
    return (
      request.headers['cf-connecting-ip'] ||     // Cloudflare
      request.headers['x-real-ip'] ||            // Nginx
      request.headers['x-forwarded-for']?.split(',')[0] || // General proxy
      request.connection?.remoteAddress ||        // Direct connection
      request.socket?.remoteAddress ||           // Socket connection
      request.ip ||                              // Express
      '127.0.0.1'                               // Fallback
    );
  }

  private async trackRequest(key: string, limit: number, ttl: number): Promise<boolean> {
    // Custom tracking logic with Redis or memory storage
    const currentCount = await this.incrementCounter(key, ttl);
    
    if (currentCount > limit) {
      await this.logExcessiveUsage(key, currentCount, limit);
      return false;
    }
    
    return true;
  }

  private async incrementCounter(key: string, ttl: number): Promise<number> {
    // Implementation depends on storage backend
    // This is a simplified example
    return 1; // Replace with actual implementation
  }

  private async logExcessiveUsage(key: string, count: number, limit: number): Promise<void> {
    console.warn(`Rate limit exceeded for ${key}: ${count}/${limit}`);
    // Add monitoring/alerting logic here
  }
}

// Usage in controller
@Controller('api')
@UseGuards(AdvancedThrottlerGuard)
export class APIController {
  @Get('data')
  public getData() {
    return 'Protected with advanced throttling';
  }
}

Dynamic Rate Limiting Service

import { SecurityThrottlerService } from '@hsuite/throttler';
import { Injectable } from '@nestjs/common';

@Injectable()
export class DynamicRateLimitingService {
  
  constructor(private readonly throttlerService: SecurityThrottlerService) {}

  async handleDynamicRateLimit(endpoint: string, clientData: any) {
    try {
      // Calculate dynamic limits based on various factors
      const limits = await this.calculateDynamicLimits(endpoint, clientData);
      
      // Check against current usage
      const canProceed = await this.checkRateLimit(
        clientData.identifier,
        limits.limit,
        limits.ttl
      );

      if (!canProceed) {
        const resetTime = await this.getResetTime(clientData.identifier);
        throw new Error(`Rate limit exceeded. Try again in ${resetTime} seconds.`);
      }

      return {
        allowed: true,
        limits: limits,
        remaining: await this.getRemainingRequests(clientData.identifier, limits.limit)
      };
    } catch (error) {
      throw new Error(`Dynamic rate limiting failed: ${error.message}`);
    }
  }

  private async calculateDynamicLimits(endpoint: string, clientData: any): Promise<{ limit: number; ttl: number }> {
    let baseLimit = 100;
    let baseTtl = 60;

    // Endpoint-specific limits
    const endpointLimits = {
      '/api/heavy-computation': { limit: 10, ttl: 300 },
      '/api/upload': { limit: 20, ttl: 60 },
      '/api/search': { limit: 200, ttl: 60 },
      '/api/data': { limit: 100, ttl: 60 }
    };

    const endpointConfig = endpointLimits[endpoint];
    if (endpointConfig) {
      baseLimit = endpointConfig.limit;
      baseTtl = endpointConfig.ttl;
    }

    // User tier adjustments
    if (clientData.userTier) {
      const tierMultipliers = {
        basic: 1,
        premium: 2,
        enterprise: 5
      };
      
      const multiplier = tierMultipliers[clientData.userTier] || 1;
      baseLimit *= multiplier;
    }

    // Time-based adjustments
    const hour = new Date().getHours();
    if (hour >= 9 && hour <= 17) {
      // Business hours - stricter limits
      baseLimit = Math.floor(baseLimit * 0.8);
    }

    // Load-based adjustments
    const systemLoad = await this.getSystemLoad();
    if (systemLoad > 0.8) {
      baseLimit = Math.floor(baseLimit * 0.6);
    }

    return { limit: baseLimit, ttl: baseTtl };
  }

  private async checkRateLimit(identifier: string, limit: number, ttl: number): Promise<boolean> {
    // Compare current usage against the limit (a real implementation would
    // read the counter for this identifier from the configured storage backend)
    const currentCount = await this.getCurrentRequestCount(identifier);
    return currentCount < limit;
  }

  private async getCurrentRequestCount(identifier: string): Promise<number> {
    // Implementation to get current request count
    // This would interface with the storage backend (Redis or memory)
    return 0; // Placeholder
  }

  private async getRemainingRequests(identifier: string, limit: number): Promise<number> {
    const currentCount = await this.getCurrentRequestCount(identifier);
    return Math.max(0, limit - currentCount);
  }

  private async getResetTime(identifier: string): Promise<number> {
    // Calculate when the rate limit window resets
    return 60; // Placeholder - return seconds until reset
  }

  private async getSystemLoad(): Promise<number> {
    // Mock system load calculation
    // In production, this would check actual system metrics
    return Math.random(); // 0-1 representing system load percentage
  }

  async handleBurstTraffic(clientData: any): Promise<boolean> {
    try {
      // Detect burst traffic patterns
      const recentRequests = await this.getRecentRequestPattern(clientData.identifier);
      
      if (this.isBurstTraffic(recentRequests)) {
        // Apply burst-specific rate limiting
        const burstLimit = 5; // Very strict limit for burst traffic
        const burstTtl = 30;   // Short window for burst detection
        
        return await this.checkRateLimit(clientData.identifier, burstLimit, burstTtl);
      }
      
      return true; // Not burst traffic, proceed normally
    } catch (error) {
      console.error('Burst traffic handling error:', error);
      return false; // Block on error
    }
  }

  private async getRecentRequestPattern(identifier: string): Promise<number[]> {
    // Get request timestamps from the last few minutes
    // Return array of request counts per time unit
    return [5, 3, 8, 12, 15]; // Placeholder data
  }

  private isBurstTraffic(recentRequests: number[]): boolean {
    if (recentRequests.length < 3) return false;
    
    // Simple burst detection: check if recent requests show rapid increase
    const recent = recentRequests.slice(-3);
    const isIncreasing = recent.every((val, i) => i === 0 || val >= recent[i - 1]);
    const maxIncrease = Math.max(...recent) / Math.min(...recent);
    
    return isIncreasing && maxIncrease > 2;
  }
}

Redis Cluster Configuration

import { Injectable, Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { SecurityThrottlerModule } from '@hsuite/throttler';
import { IThrottler } from '@hsuite/throttler-types';

@Injectable()
export class RedisClusterConfigurationService {
  
  createRedisClusterConfig(): IThrottler.IOptions {
    return {
      enabled: true,
      storage: IThrottler.IStorage.REDIS,
      settings: {
        ttl: 60,
        limit: 500
      },
      redis: {
        socket: {
          host: process.env.REDIS_CLUSTER_HOST || 'redis-cluster.internal',
          port: parseInt(process.env.REDIS_CLUSTER_PORT || '6379'),
          connectTimeout: 20000,
          commandTimeout: 5000,
          lazyConnect: true
        },
        password: process.env.REDIS_PASSWORD,
        username: process.env.REDIS_USERNAME || 'throttler-user',
        database: parseInt(process.env.REDIS_DATABASE || '1'),
        ttl: 300,
        retryDelayOnFailover: 100,
        maxRetriesPerRequest: 3,
        retryDelayOnClusterDown: 300,
        enableOfflineQueue: false,
        // Connection pool settings
        family: 4,
        keepAlive: true,
        keyPrefix: 'throttler:',
        // Cluster-specific settings
        enableReadyCheck: true
      }
    };
  }

  createRedisFailoverConfig(): IThrottler.IOptions {
    const baseConfig = this.createRedisClusterConfig();
    
    // Add failover configuration
    baseConfig.redis = {
      ...baseConfig.redis,
      // Sentinel configuration for high availability
      retryDelayOnFailover: 100,
      enableOfflineQueue: false,
      // Health check settings
      pingInterval: 30000,
      // Reconnection settings
      reconnectOnError: (err: Error) => {
        const targetError = 'READONLY';
        return err.message.includes(targetError);
      }
    };

    return baseConfig;
  }

  async validateRedisConnection(): Promise<boolean> {
    try {
      // Test Redis connection before starting application
      const testConfig = this.createRedisClusterConfig();
      
      // Create a test connection (ioredis expects host/port at the top level,
      // unlike the nested socket shape used by the throttler options)
      const Redis = require('ioredis');
      const redis = new Redis({
        host: testConfig.redis.socket.host,
        port: testConfig.redis.socket.port,
        username: testConfig.redis.username,
        password: testConfig.redis.password,
        db: testConfig.redis.database
      });
      
      // Test basic operations
      await redis.ping();
      await redis.set('throttler:test', 'connection-test', 'EX', 10);
      const result = await redis.get('throttler:test');
      await redis.del('throttler:test');
      
      await redis.quit();
      
      return result === 'connection-test';
    } catch (error) {
      console.error('Redis connection validation failed:', error);
      return false;
    }
  }
}

// Usage in app module
@Module({
  imports: [
    SecurityThrottlerModule.forRootAsync({
      imports: [ConfigModule],
      useFactory: async (
        configService: ConfigService,
        redisConfig: RedisClusterConfigurationService
      ): Promise<IThrottler.IOptions> => {
        // Validate Redis connection before configuring throttler
        const isRedisAvailable = await redisConfig.validateRedisConnection();
        
        if (!isRedisAvailable) {
          console.warn('Redis not available, falling back to in-memory storage');
          return {
            enabled: true,
            storage: IThrottler.IStorage.DEFAULT,
            settings: {
              ttl: 60,
              limit: 100
            },
            redis: {}
          };
        }
        
        return redisConfig.createRedisClusterConfig();
      },
      inject: [ConfigService, RedisClusterConfigurationService]
    })
  ],
  providers: [RedisClusterConfigurationService]
})
export class AppModule {}

Monitoring and Analytics Integration

import { SecurityThrottlerService } from '@hsuite/throttler';
import { Injectable } from '@nestjs/common';

@Injectable()
export class ThrottlerMonitoringService {
  
  constructor(private readonly throttlerService: SecurityThrottlerService) {}

  async generateThrottlingReport(timeRange: { start: Date; end: Date }) {
    try {
      const report = {
        timeRange,
        summary: {
          totalRequests: 0,
          blockedRequests: 0,
          topBlockedIPs: [],
          averageRequestRate: 0,
          peakRequestRate: 0
        },
        trends: {
          hourlyBreakdown: {},
          topEndpoints: {},
          userAgentAnalysis: {}
        },
        security: {
          suspiciousPatterns: [],
          repeatOffenders: [],
          recommendations: []
        }
      };

      // Collect data from Redis or monitoring systems
      const throttlingData = await this.collectThrottlingData(timeRange);
      
      // Generate summary statistics
      report.summary = await this.generateSummaryStats(throttlingData);
      
      // Analyze trends
      report.trends = await this.analyzeTrends(throttlingData);
      
      // Security analysis
      report.security = await this.analyzeSecurityPatterns(throttlingData);

      return report;
    } catch (error) {
      throw new Error(`Throttling report generation failed: ${error.message}`);
    }
  }

  async monitorRealTimeMetrics() {
    try {
      const metrics = {
        timestamp: new Date(),
        currentRequestRate: await this.getCurrentRequestRate(),
        activeConnections: await this.getActiveConnections(),
        blockedRequests: await this.getRecentBlockedRequests(),
        systemHealth: await this.getSystemHealth(),
        alerts: []
      };

      // Check for alerts
      if (metrics.currentRequestRate > 1000) {
        metrics.alerts.push({
          type: 'HIGH_TRAFFIC',
          message: 'Request rate exceeding normal thresholds',
          severity: 'WARNING'
        });
      }

      if (metrics.blockedRequests.length > 50) {
        metrics.alerts.push({
          type: 'HIGH_BLOCKS',
          message: 'Unusually high number of blocked requests',
          severity: 'CRITICAL'
        });
      }

      return metrics;
    } catch (error) {
      throw new Error(`Real-time monitoring failed: ${error.message}`);
    }
  }

  async optimizeThrottlingSettings() {
    try {
      const analysisData = await this.collectOptimizationData();
      
      const recommendations = {
        currentSettings: await this.getCurrentSettings(),
        recommendations: [],
        projectedImpact: {}
      };

      // Analyze request patterns
      const patterns = this.analyzeRequestPatterns(analysisData);
      
      // Generate recommendations
      if (patterns.averageRequestRate < patterns.currentLimit * 0.5) {
        recommendations.recommendations.push({
          type: 'REDUCE_LIMITS',
          suggestion: 'Consider reducing rate limits for better security',
          newLimit: Math.floor(patterns.currentLimit * 0.8)
        });
      }

      if (patterns.blockedPercentage > 10) {
        recommendations.recommendations.push({
          type: 'INCREASE_LIMITS',
          suggestion: 'High block rate suggests limits may be too strict',
          newLimit: Math.floor(patterns.currentLimit * 1.2)
        });
      }

      return recommendations;
    } catch (error) {
      throw new Error(`Throttling optimization failed: ${error.message}`);
    }
  }

  private async collectThrottlingData(timeRange: any): Promise<any[]> {
    // Implementation to collect throttling data from storage
    return [];
  }

  private async generateSummaryStats(data: any[]): Promise<any> {
    // Implementation to generate summary statistics
    return {
      totalRequests: data.length,
      blockedRequests: data.filter(d => d.blocked).length,
      topBlockedIPs: [],
      averageRequestRate: 0,
      peakRequestRate: 0
    };
  }

  private async analyzeTrends(data: any[]): Promise<any> {
    // Implementation to analyze traffic trends
    return {
      hourlyBreakdown: {},
      topEndpoints: {},
      userAgentAnalysis: {}
    };
  }

  private async analyzeSecurityPatterns(data: any[]): Promise<any> {
    // Implementation to analyze security patterns
    return {
      suspiciousPatterns: [],
      repeatOffenders: [],
      recommendations: []
    };
  }

  private async getCurrentRequestRate(): Promise<number> {
    // Implementation to get current request rate
    return 0;
  }

  private async getActiveConnections(): Promise<number> {
    // Implementation to get active connections
    return 0;
  }

  private async getRecentBlockedRequests(): Promise<any[]> {
    // Implementation to get recent blocked requests
    return [];
  }

  private async getSystemHealth(): Promise<any> {
    // Implementation to get system health metrics
    return { cpu: 45, memory: 67, redis: 'healthy' };
  }

  private async getCurrentSettings(): Promise<any> {
    // Implementation to get current throttling settings
    return { ttl: 60, limit: 100 };
  }

  private async collectOptimizationData(): Promise<any> {
    // Implementation to collect data for optimization
    return {};
  }

  private analyzeRequestPatterns(data: any): any {
    // Implementation to analyze request patterns
    return {
      averageRequestRate: 0,
      currentLimit: 100,
      blockedPercentage: 5
    };
  }
}

🔗 Integration

Required Dependencies

{
  "@nestjs/common": "^10.4.2",
  "@nestjs/core": "^10.4.2",
  "@hsuite/throttler-types": "^2.0.9",
  "@compodoc/compodoc": "^1.1.23"
}

Module Integration

import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { IThrottler } from '@hsuite/throttler-types';
import { SecurityThrottlerModule, SecurityThrottlerService, CustomThrottlerGuard } from '@hsuite/throttler';

@Module({
  imports: [
    SecurityThrottlerModule.forRootAsync({
      imports: [ConfigModule],
      useFactory: (configService: ConfigService) => ({
        enabled: configService.get<boolean>('THROTTLE_ENABLED', true),
        storage: configService.get('NODE_ENV') === 'production' 
          ? IThrottler.IStorage.REDIS 
          : IThrottler.IStorage.DEFAULT,
        settings: {
          ttl: configService.get<number>('THROTTLE_TTL', 60),
          limit: configService.get<number>('THROTTLE_LIMIT', 100)
        },
        redis: {
          socket: {
            host: configService.get<string>('REDIS_HOST', 'localhost'),
            port: configService.get<number>('REDIS_PORT', 6379)
          },
          password: configService.get<string>('REDIS_PASSWORD'),
          database: configService.get<number>('REDIS_DATABASE', 0)
        }
      }),
      inject: [ConfigService]
    })
  ],
  providers: [
    ThrottlerConfigurationService,
    DynamicRateLimitingService,
    ThrottlerMonitoringService
  ],
  exports: [
    SecurityThrottlerService,
    CustomThrottlerGuard,
    ThrottlerConfigurationService,
    DynamicRateLimitingService,
    ThrottlerMonitoringService
  ]
})
export class ThrottlerModule {}

Documentation Generation

# Generate comprehensive documentation
npm run compodoc

# Generate documentation with coverage report
npm run compodoc:coverage

Environment Configuration

# Throttling Configuration
THROTTLE_ENABLED=true
THROTTLE_TTL=60
THROTTLE_LIMIT=250

# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your-secure-password
REDIS_USERNAME=default
REDIS_DATABASE=0
REDIS_TTL=120

# Feature Flags
FEATURES_AGGRESSIVE_THROTTLING=false
FEATURES_HIGH_TRAFFIC_MODE=true

Integration with HSuite Ecosystem

// Complete integration with other HSuite modules
import { Injectable, Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { IThrottler } from '@hsuite/throttler-types';
import { SecurityThrottlerModule, SecurityThrottlerService } from '@hsuite/throttler';
import { AuthModule } from '@hsuite/auth';
import { SmartNetworkModule } from '@hsuite/smart-network';

@Module({
  imports: [
    AuthModule,
    SmartNetworkModule,
    SecurityThrottlerModule.forRootAsync({
      imports: [ConfigModule, AuthModule],
      useFactory: async (
        configService: ConfigService,
        authService: AuthService
      ) => {
        // Integrate with auth service for user-based rate limiting
        const baseConfig = {
          enabled: true,
          storage: IThrottler.IStorage.REDIS,
          settings: {
            ttl: 60,
            limit: 100
          },
          redis: {
            socket: {
              host: configService.get('REDIS_HOST'),
              port: configService.get('REDIS_PORT')
            }
          }
        };

        return baseConfig;
      },
      inject: [ConfigService, AuthService]
    })
  ]
})
export class ThrottlerEcosystemModule {}

@Injectable()
export class IntegratedThrottlerService {
  constructor(
    private throttlerService: SecurityThrottlerService,
    private authService: AuthService,
    private networkService: SmartNetworkService
  ) {}

  async handleSecureRequest(
    request: any,
    session: IAuth.ICredentials.IWeb3.IEntity
  ): Promise<boolean> {
    // 1. Get user tier for rate limiting
    const userTier = await this.authService.getUserTier(session.walletId);
    
    // 2. Check network membership status
    const networkStatus = await this.networkService.getMemberStatus(session.walletId);
    
    // 3. Calculate dynamic limits based on user status
    const limits = this.calculateUserLimits(userTier, networkStatus);
    
    // 4. Apply rate limiting
    return await this.throttlerService.checkRateLimit(
      session.walletId,
      limits.limit,
      limits.ttl
    );
  }

  private calculateUserLimits(userTier: string, networkStatus: any): { limit: number; ttl: number } {
    const baseLimits = { limit: 100, ttl: 60 };
    
    // Adjust based on user tier
    const tierMultipliers = {
      basic: 1,
      premium: 2,
      enterprise: 5
    };
    
    baseLimits.limit *= tierMultipliers[userTier] || 1;
    
    // Adjust based on network status
    if (networkStatus.isActive && networkStatus.reputation > 0.8) {
      baseLimits.limit *= 1.5; // Bonus for good reputation
    }
    
    return baseLimits;
  }
}

Best Practices

🔧 Configuration Best Practices

  • Use Redis storage for production and distributed systems

  • Set appropriate limits based on endpoint resource usage

  • Monitor and adjust limits based on actual usage patterns

  • Implement proper error handling for rate limit exceptions (a sketch follows below)
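
For the error-handling point above, one option is a small exception filter that normalizes rate-limit rejections into a consistent payload. This is a sketch assuming blocked requests surface as an HttpException with HTTP status 429; adapt it to however your application reports throttling errors:

import { ArgumentsHost, Catch, ExceptionFilter, HttpException, HttpStatus } from '@nestjs/common';

// Sketch: normalize rate-limit rejections into a consistent JSON payload.
@Catch(HttpException)
export class RateLimitExceptionFilter implements ExceptionFilter {
  catch(exception: HttpException, host: ArgumentsHost) {
    const response = host.switchToHttp().getResponse();
    const status = exception.getStatus();

    if (status === HttpStatus.TOO_MANY_REQUESTS) {
      response.status(status).json({
        statusCode: status,
        message: 'Rate limit exceeded. Please retry later.'
      });
      return;
    }

    // Fall through for everything else.
    response.status(status).json(exception.getResponse());
  }
}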

🛡️ Security Best Practices

  • Consider user-authenticated rate limiting for different tiers

  • Implement IP whitelisting for trusted sources (see the sketch after this list)

  • Monitor for potential abuse patterns and automated attacks

  • Log rate limiting events for security analysis
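
A sketch of the whitelisting idea: skip throttling for trusted addresses before falling back to the normal guard behavior. The TRUSTED_IPS environment variable and the canActivate override are assumptions for illustration, not part of the published API:

import { ExecutionContext, Injectable } from '@nestjs/common';
import { CustomThrottlerGuard } from '@hsuite/throttler';

// Hypothetical whitelist sourced from an environment variable.
const TRUSTED_IPS = (process.env.TRUSTED_IPS || '').split(',').filter(Boolean);

@Injectable()
export class WhitelistingThrottlerGuard extends CustomThrottlerGuard {
  async canActivate(context: ExecutionContext): Promise<boolean> {
    const request = context.switchToHttp().getRequest();
    const clientIP = request.ip || request.socket?.remoteAddress || '';

    // Trusted sources bypass throttling entirely.
    if (TRUSTED_IPS.includes(clientIP)) {
      return true;
    }

    // Everyone else goes through the normal rate-limiting path.
    return super.canActivate(context) as Promise<boolean>;
  }
}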

⚡ Performance Best Practices

  • Use Redis connection pooling for high-traffic applications

  • Configure appropriate TTL values for Redis keys

  • Use Redis clustering for high availability

  • Monitor system performance under load

📊 Monitoring Best Practices

  • Track rate limiting metrics and trends

  • Set up alerts for unusual traffic patterns

  • Generate regular reports for optimization

  • Monitor system health and Redis performance


⚡ Enterprise Rate Limiting: Powerful and flexible rate limiting with Redis-based distributed storage for high-performance applications.

🛡️ Advanced Security: IP-based protection, abuse prevention, and comprehensive request tracking with informative headers.

🔧 NestJS Integration: Seamless integration with guards, decorators, and async configuration for enterprise applications.


Built with ❤️ by the HbarSuite Team Copyright © 2024 HbarSuite. All rights reserved.
