Your AI-powered development assistant that controls LLM behavior, enforces best practices, and maintains laser focus through intelligent automation.
LLMs are powerful but chaotic: they create too many files, ignore best practices, lose focus, and generate dangerous code. TaskGuard gives you an intelligent system that:

- ✅ Controls LLM behavior through deceptive transparency
- ✅ Enforces best practices automatically
- ✅ Maintains focus on a single task at a time
- ✅ Prevents dangerous code execution
- ✅ Understands any document format using local AI
- ✅ Provides intelligent insights about your project
# Install TaskGuard
pip install taskguard
# Setup local AI (recommended)
curl -fsSL https://ollama.ai/install.sh | sh
ollama serve
ollama pull llama3.2:3b
# Initialize your project
taskguard init
# Setup shell integration
taskguard setup shell
# IMPORTANT: Load shell functions
source ~/.llmtask_shell.sh
# Start intelligent development
show_tasks
That's it! Your development environment is now intelligently controlled. 🚀
After installation, you must load the shell functions:
# Load functions in current session
source ~/.llmtask_shell.sh
# For automatic loading in new sessions
echo "source ~/.llmtask_shell.sh" >> ~/.bashrc
**Common issue:** If commands like `show_tasks` give "command not found", you forgot to run `source ~/.llmtask_shell.sh`!
Unlike traditional task managers, TaskGuard uses local AI to understand your documents:
# Parses ANY format automatically:
taskguard parse todo TODO.md # Markdown checkboxes
taskguard parse todo tasks.yaml # YAML structure
taskguard parse todo backlog.org # Org-mode format
taskguard parse todo custom.txt # Your weird custom format
taskguard smart-analysis
# 🧠 Smart TODO Analysis:
# 💡 AI Insights:
# 1. Authentication tasks are blocking 4 other features
# 2. Consider breaking down "Implement core functionality"
# 3. Testing tasks should be prioritized to catch issues early
taskguard smart-suggest
# 🤖 AI Task Suggestion:
# 🎯 Task ID: 3
# 📝 Reasoning: Database migration unblocks 3 dependent tasks
# ⏱️ Estimated Time: 4-6 hours
# ⚠️ Potential Blockers: Requires staging environment setup
# LLM believes it's using regular tools
python myfile.py
# 📦 Creating safety checkpoint...
# ✅ python myfile.py completed safely
npm install express
# 📦 Creating safety checkpoint...
# ✅ npm install express completed safely
show_tasks
# 📋 Current Tasks:
# 🎯 ACTIVE: #1 Setup authentication system
# LLM attempts dangerous code
python dangerous_script.py
# 🚨 BLOCKED: dangerous code in dangerous_script.py: os.system(
# 💡 Try: Use subprocess.run() with shell=False
# LLM tries to lose focus
touch file1.py file2.py file3.py file4.py
# 🎯 Focus! Complete current task first: Setup authentication system
# 📊 Files modified today: 3/3
# LLM creates suboptimal code
def process_data(data):
    return data.split(',')
python bad_code.py
# 📝 Best Practice Reminders:
# - Missing docstrings in functions
# - Missing type hints in functions
# - Use more descriptive variable names
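Following those reminders, the flagged function could be rewritten along these lines (a hedged sketch of the suggested fixes, not output generated by TaskGuard):

```python
def process_data(data: str) -> list[str]:
    """Split a comma-separated string into trimmed, non-empty fields."""
    if not isinstance(data, str):
        raise TypeError("data must be a string")
    return [field.strip() for field in data.split(",") if field.strip()]
```

Type hints, a docstring, and input validation address all three reminders while keeping the function's behavior obvious at a glance.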
show_tasks # List all tasks with AI insights
start_task <id> # Start working on specific task
complete_task # Mark current task as done
add_task "title" [cat] [pri] # Add new task
focus_status # Check current focus metrics
productivity # Show productivity statistics
# Alternative aliases
tasks # Same as show_tasks
done_task # Same as complete_task
metrics # Same as productivity
smart_analysis # AI-powered project analysis
smart_suggest # Get AI task recommendations
best_practices [file] # Check best practices compliance
# Alternative aliases
analyze # Same as smart_analysis
insights # Same as smart_analysis
suggest # Same as smart_suggest
check_code [file] # Same as best_practices
tg_status # Show system health
tg_health # Run project health check
tg_backup # Create project backup
safe_rm <files> # Delete with backup
safe_git <command> # Git with backup
# Emergency commands
force_python <file> # Bypass safety checks
force_exec <command> # Emergency bypass
taskguard config # Show current config
taskguard config --edit # Edit configuration
taskguard config --template enterprise # Apply config template
taskguard setup ollama # Setup local AI
taskguard setup shell # Setup shell integration
taskguard test-llm # Test local LLM connection
tg_help # Show all shell commands
overview # Quick project overview
check # Quick system check
init_project # Initialize new project
# Alternative aliases
taskguard_help # Same as tg_help
llm_help # Same as tg_help
taskguard init --template startup
taskguard init --template enterprise
taskguard init --template learning
taskguard init --template python
Input: Mixed format TODO
# Project Backlog
## 🔥 Critical Issues
- [x] Fix login bug (PROD-123) - **DONE** ✅
- [ ] Database migration script 🔴 HIGH
  - [ ] Backup existing data
  - [ ] Test migration on staging
## 🚀 Features
🔄 User dashboard redesign (Est: 8h) @frontend @ui
⏳ API rate limiting (John working) @backend
✅ Email notifications @backend
## Testing
TODO: Add integration tests for auth module
TODO: Performance testing for API endpoints
AI Output: Perfect Structure
[
{
"id": 1,
"title": "Fix login bug (PROD-123)",
"status": "completed",
"priority": "high",
"category": "bugfix"
},
{
"id": 2,
"title": "Database migration script",
"status": "pending",
"priority": "high",
"subtasks": ["Backup existing data", "Test migration on staging"]
},
{
"id": 3,
"title": "User dashboard redesign",
"estimated_hours": 8,
"labels": ["frontend", "ui"]
}
]
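Because the parser emits plain JSON, the result is easy to consume programmatically. A minimal sketch (the array literal is copied from the example output above; nothing here is a TaskGuard API):

```python
import json

# The structure produced by the parser, as shown in the example output.
parsed = json.loads("""
[
  {"id": 1, "title": "Fix login bug (PROD-123)", "status": "completed",
   "priority": "high", "category": "bugfix"},
  {"id": 2, "title": "Database migration script", "status": "pending",
   "priority": "high",
   "subtasks": ["Backup existing data", "Test migration on staging"]},
  {"id": 3, "title": "User dashboard redesign",
   "estimated_hours": 8, "labels": ["frontend", "ui"]}
]
""")

# Pick out high-priority work that is still pending.
pending = [t["title"] for t in parsed
           if t.get("status") == "pending" and t.get("priority") == "high"]
print(pending)  # ['Database migration script']
```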
# 1. LLM checks project status (using shell functions)
show_tasks
# 📋 Current Tasks:
# ⏳ #1 🔴 [feature] Setup authentication system
# ⏳ #2 🔴 [feature] Implement core functionality
# 2. LLM starts focused work
start_task 1
# 🎯 Started task: Setup authentication system
# 3. LLM works only on this task (commands are wrapped)
python auth.py
# 📦 Creating safety checkpoint...
# ✅ python auth.py completed safely
# ✅ Code follows best practices!
# 4. LLM completes task properly
complete_task
# ✅ Task completed: Setup authentication system
# 📝 Changelog updated automatically
# 🎯 Next suggested task: Add authentication tests
# 5. LLM can use AI features
smart_analysis
# 💡 AI Insights:
# 1. Authentication system is now ready for testing
# 2. Consider adding input validation
# 3. Database integration should be next priority
taskguard health --full
# 🧠 Project Health Report
# ================================
# 📊 Project Health: 75/100
# 🎯 Focus Score: 85/100
# ⚡ Velocity: 2.3 tasks/day
#
# 🚨 Critical Issues:
# - 3 high-priority tasks blocked by dependencies
# - Authentication module has 0% test coverage
#
# 💡 Recommendations:
# 1. Complete database migration to unblock other tasks
# 2. Add tests before deploying auth module
# 3. Break down large tasks into smaller chunks
taskguard productivity
# 📊 Productivity Metrics:
# Tasks Completed: 5
# Files Created: 12
# Lines Written: 847
# Time Focused: 3h 45m
# Focus Efficiency: 86.5%
# Install
curl -fsSL https://ollama.ai/install.sh | sh
# Setup
ollama serve
ollama pull llama3.2:3b # 2GB, perfect balance
ollama pull qwen2.5:1.5b # 1GB, ultra-fast
# Test
taskguard test-llm
| Model | Size | RAM | Speed | Accuracy | Best For |
|---|---|---|---|---|---|
| qwen2.5:1.5b | 1GB | 4GB | ⚡⚡⚡ | ⭐⭐⭐ | Fast parsing |
| llama3.2:3b | 2GB | 6GB | ⚡⚡ | ⭐⭐⭐⭐ | Recommended |
| codellama:7b | 4GB | 8GB | ⚡ | ⭐⭐⭐⭐⭐ | Code analysis |
python:
  # Code Structure
  enforce_docstrings: true
  enforce_type_hints: true
  max_function_length: 50

  # Code Quality
  require_tests: true
  test_coverage_minimum: 80
  no_unused_imports: true

  # Security
  no_eval_exec: true
  validate_inputs: true
  handle_exceptions: true

javascript:
  # Modern Practices
  prefer_const: true
  prefer_arrow_functions: true
  async_await_over_promises: true

  # Error Handling
  require_error_handling: true
  no_silent_catch: true

  # Performance
  avoid_memory_leaks: true
  optimize_bundle_size: true

security:
  # Input Validation
  validate_all_inputs: true
  sanitize_user_data: true

  # Authentication
  strong_password_policy: true
  secure_session_management: true
  implement_rate_limiting: true

  # Data Protection
  encrypt_sensitive_data: true
  secure_api_endpoints: true
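As an illustration of how a rule like `enforce_docstrings` can be checked mechanically, here is a minimal sketch using Python's standard `ast` module (an assumption about the approach, not TaskGuard's actual implementation):

```python
import ast

def functions_missing_docstrings(source: str) -> list[str]:
    """Return the names of functions in `source` that lack a docstring."""
    tree = ast.parse(source)
    return [node.name
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
            and ast.get_docstring(node) is None]
```

`ast.get_docstring` returns `None` when a function body does not begin with a string literal, which is exactly the condition this rule flags.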
| Metric | Before | After | Improvement |
|---|---|---|---|
| Dangerous Commands | 15/week | 0/week | 🛡️ 100% blocked |
| Task Completion | 60% | 95% | 🎯 58% better |
| Code Quality Score | 65/100 | 90/100 | 📈 38% higher |
| Focus Time | 40% | 85% | ⏰ 113% better |
| Best Practice Adherence | 45% | 88% | ✅ 96% better |
pip install taskguard
# 1. Install TaskGuard
pip install taskguard
# 2. Setup local AI (optional but powerful)
curl -fsSL https://ollama.ai/install.sh | sh
ollama serve
ollama pull llama3.2:3b
# 3. Initialize your project
taskguard init
# 4. Setup shell integration
taskguard setup shell
# 5. Load shell functions (CRITICAL STEP)
source ~/.llmtask_shell.sh
# 6. Test the setup
show_tasks
tg_help
git clone https://github.com/wronai/taskguard.git
cd taskguard
pip install -e ".[dev]"
taskguard init
source ~/.llmtask_shell.sh
pip install "taskguard[all]" # Includes LLM, security, docs
taskguard setup shell
source ~/.llmtask_shell.sh
docker run -it wronai/taskguard:latest
# The most common issue - you forgot to source the shell file
source ~/.llmtask_shell.sh
# Check if functions are loaded
type show_tasks
# If still not working, regenerate shell integration
taskguard setup shell --force
source ~/.llmtask_shell.sh
# Check installation
pip list | grep taskguard
# Reinstall if needed
pip install --force-reinstall taskguard
# Check PATH
which taskguard
# Check if file exists
ls -la ~/.llmtask_shell.sh
# If missing, create it
taskguard setup shell
# Make sure it's executable
chmod +x ~/.llmtask_shell.sh
source ~/.llmtask_shell.sh
# Add to your shell profile for automatic loading
echo "source ~/.llmtask_shell.sh" >> ~/.bashrc
# For zsh users
echo "source ~/.llmtask_shell.sh" >> ~/.zshrc
# Restart terminal or source profile
source ~/.bashrc
We welcome contributions! Areas of focus:
git clone https://github.com/wronai/taskguard.git
cd taskguard
pip install -e ".[dev]"
pre-commit install
pytest
**"Local LLM not connecting"**
# Check Ollama status
ollama list
ollama serve
# Test connection
taskguard test-llm
**"Too many false positives"**
# Adjust sensitivity
taskguard config --template startup
**"Tasks not showing"**
# Initialize project
taskguard init
Apache 2.0 License - see LICENSE file for details.
**"Maximum Intelligence, Minimum Chaos"**

This isn't just another task manager: it's an intelligent system that makes LLMs work for you instead of against you. Through deceptive transparency, local AI intelligence, and adaptive learning, we've created the first truly intelligent development assistant that maintains safety, focus, and quality without sacrificing productivity.

Ready to experience intelligent development? Get started in 2 minutes! 🚀
pip install taskguard && taskguard init
⭐ If this system helped you control an unruly LLM, please star the repository!

Made with ❤️ by developers, for developers who work with AI.

Your AI-powered development companion: because LLMs are powerful, but controlled LLMs are unstoppable.