Project Scale Analysis (from project.functions.toon):
Function Distribution:
- ThresholdPolicy — size-based decision on full data vs metadata
- MetaExtractor — image/audio/binary/file/numpy/pandas extractors with magic byte detection
- register_extractor() — custom extractor registry for user-defined types
- @meta_log — dedicated decorator for binary data pipelines (sync + async)
- BinaryAwareRouter — sink routing based on payload characteristics
- @log_call(extract_meta=True) / @catch(extract_meta=True) — opt-in in existing decorators
- configure(meta_policy=..., auto_extract_meta=True) — global configuration
- NFO_META_THRESHOLD / NFO_META_EXTRACT env vars
- @log_call, @catch decorators
- @logged — class decorator (auto-wrap all public methods)
- @skip — exclude methods from @logged
- auto_log() — module-level patching (one call = all functions logged)
- auto_log_by_name() — same, but accepts module name strings
- Logger — central dispatcher with multiple sinks
- configure() — one-liner project setup with sink specs, env overrides
- configure(force=True) — re-configuration guard
- @log_call, @catch, @logged transparently handle async def
- propagate=False prevents double output
- _StdlibBridge — forward stdlib logging.getLogger() calls to nfo sinks
- SQLiteSink, CSVSink, MarkdownSink
- JSONSink — structured JSON Lines output for ELK/Grafana Loki
- PrometheusSink — export metrics (duration, error rate, call count) to Prometheus
- WebhookSink — HTTP POST alerts to Slack/Discord/Teams on ERROR
- configure() supports json:path and prometheus:port sink specs
- EnvTagger — auto-tag logs with environment/trace_id/version
- DynamicRouter — route logs by env/level/custom rules
- DiffTracker — detect output changes between versions
- LLMSink — LLM-powered log analysis via litellm
- detect_prompt_injection() — regex-based prompt injection detection
- demo/load_generator.py
- Dockerfile + examples/docker-compose-service.yml (centralized logging service)
- examples/nfo.proto
- examples/http_service.py — FastAPI, multi-language endpoint
- Client examples: bash_client.sh, Go (go_client.go), Rust (rust_client.rs)
- .env.example files (root + examples/) with all NFO_* variables
- bump2version config synced: pyproject.toml, VERSION, nfo/__init__.py bump atomically
- project.functions.toon — 448 functions analyzed
- AsyncBufferedSink — background-thread batched writes with configurable buffer_size, flush_interval, flush_on_error
- RingBufferSink — keep last N entries in memory, flush context to delegate on ERROR/CRITICAL; customizable trigger_levels
- @log_call(sample_rate=0.01) — sampling for high-throughput functions; errors always logged
- sample_rate on @catch and @meta_log too
- @log_call(sample_rate="adaptive") — automatic rate based on throughput
- with pipeline_context("name") as ctx:
- nfo_data_bytes_total, nfo_meta_extractions_total
- data_size_bytes, data_format, data_hash, is_meta_log
- @meta_log(lazy=True) — compute hash/dimensions only when the sink needs them
- nfo logs --meta --filter "data_format=PNG AND size_bytes > 1000000"
- OTELSink — OpenTelemetry spans for distributed tracing (Jaeger/Zipkin via OTLP)
- ElasticsearchSink — direct Elasticsearch indexing for production log aggregation
- nfo-dashboard CLI: nfo dashboard --db logs.db
- trace_id, environment, level, function_name, date range
- GET /query?env=prod&level=ERROR&last=24h
- replay_logs() — replay function calls from SQLite logs for regression testing
- replay_from_sqlite("logs.db", max_calls=100) — bounded replay
- nfo query logs.db --level ERROR --last 24h
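The sampling behavior listed above (successes sampled, errors always logged) can be sketched with a plain decorator. This is a simplified stand-in, not nfo's actual implementation; the injectable `log` parameter is an assumption added here for testability:

```python
import functools
import random

def log_call(sample_rate=1.0, log=print):
    """Sketch of sampled call logging: successful calls are logged with
    probability sample_rate, while errors are always logged."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                result = fn(*args, **kwargs)
            except Exception as exc:
                # Errors bypass sampling entirely, as the list above specifies.
                log(f"ERROR {fn.__name__}: {exc!r}")
                raise
            if random.random() < sample_rate:
                log(f"CALL {fn.__name__} -> {result!r}")
            return result
        return wrapper
    return decorator
```

With `sample_rate=0.01`, roughly one successful call in a hundred reaches the sinks, while exceptions are still recorded every time.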
```python
# Full monitoring stack (working in v0.2.0)
sink = PrometheusSink(          # metrics → Grafana
    WebhookSink(                # alerts → Slack
        EnvTagger(              # tagging
            SQLiteSink("logs.db")
        ),
        url="https://hooks.slack.com/...",
        levels=["ERROR"],
    ),
    port=9090,
)
```
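The snippet above composes sinks by wrapping: each outer sink does its own work (metrics, alerting, tagging) and then forwards the entry to its delegate. A stdlib-only sketch of that pattern, with simplified stand-in classes rather than nfo's real implementations:

```python
class ListSink:
    """Terminal sink: stores entries in memory (stand-in for SQLiteSink)."""
    def __init__(self):
        self.entries = []

    def emit(self, entry):
        self.entries.append(entry)

class EnvTagger:
    """Wrapping sink: stamps each entry with the environment, then forwards."""
    def __init__(self, delegate, env="prod"):
        self.delegate = delegate
        self.env = env

    def emit(self, entry):
        self.delegate.emit({**entry, "environment": self.env})

class CounterSink:
    """Wrapping sink: counts entries as a metrics side channel
    (stand-in for PrometheusSink), then forwards unchanged."""
    def __init__(self, delegate):
        self.delegate = delegate
        self.emitted = 0

    def emit(self, entry):
        self.emitted += 1
        self.delegate.emit(entry)
```

Because every sink exposes the same `emit` interface, any wrapper can sit at any depth in the chain, which is what makes the one-expression stack above possible.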
- GraphQLSink — GraphQL query interface over SQLite logs
- PineconeSink / VectorSink — semantic log search via embeddings
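The retrieval mechanics behind a semantic log sink can be sketched in a few lines. A real PineconeSink/VectorSink would call an embedding model and a vector store; this toy version uses bag-of-words vectors and cosine similarity purely to illustrate the idea (all names here are stand-ins):

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (a real sink would
    call an embedding model instead)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorSink:
    """Illustrative in-memory semantic index over log messages."""
    def __init__(self):
        self.logs = []

    def emit(self, message):
        self.logs.append((message, embed(message)))

    def search(self, query, k=3):
        qv = embed(query)
        ranked = sorted(self.logs, key=lambda lv: cosine(qv, lv[1]), reverse=True)
        return [msg for msg, _ in ranked[:k]]
```

The point of the design is that search happens over meaning-adjacent vectors rather than exact strings, so a query like "database connection error" can surface timeout and refused-connection entries that never contain the word "error".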