curllm Examples

📚 Documentation Index ⬅️ Back to Main README

This document contains curated, end-to-end examples for common automation tasks with curllm. You can also generate runnable shell scripts for these examples into the examples/ directory.

Top 10 quick examples

  1. Extract all links

    curllm "https://example.com" -d "extract all links"
    
  2. Extract emails and phones

    curllm "https://example.com/contact" -d "extract all emails and phone numbers"
    
  3. Take a screenshot

    curllm "https://example.com" -d "screenshot"
    
  4. Products under 150 zł (public proxy rotation)

    export CURLLM_PUBLIC_PROXY_LIST="https://raw.githubusercontent.com/clarketm/proxy-list/master/proxy-list-raw.txt"
    curllm "https://ceneo.pl" -d "Find all products under 150zł and extract names, prices and urls" \
      --stealth --proxy rotate:public --csv -o products.csv
    
  5. Products with registry rotation (after registering proxies)

    # register proxies via curlx or API, then:
    curllm "https://ceneo.pl" -d "Find all products under 150zł and extract names, prices and urls" \
      --stealth --proxy rotate:registry --html -o products.html
    
  6. BQL: Hacker News links (CSS selectors)

    curllm --bql -d 'query { page(url: "https://news.ycombinator.com") { title links: select(css: "a.storylink, a.titlelink") { text url: attr(name: "href") } }}'
    
  7. Visual fill contact form (stealth)

    curllm --visual --stealth "https://www.prototypowanie.pl/kontakt/" \
      -d "Fill contact form: name=John Doe, email=john@example.com, message=Hello"
    
  8. Use a session (persist cookies between runs)

    curllm --session my-site "https://example.com" -d "screenshot"
    # later runs with --session my-site reuse the stored cookies automatically (see the sketch after this list)
    
  9. Export results to XLS (Excel-compatible)

    curllm "https://example.com" -d "extract all links" --xls -o links.xls
    
  10. WordPress: create a post using session

    curllm --session wp-s1 -d '{"wordpress_config":{"url":"https://example.wordpress.com","action":"create_post","title":"Hello","content":"Post body","status":"draft"}}'
    

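Following up on quick example 8, here is a minimal sketch of session reuse across two separate invocations; the URLs and instructions are illustrative, only the --session flag is taken from the examples above:

# first run creates the my-site session and persists its cookies
curllm --session my-site "https://example.com" -d "accept the cookie banner and take a screenshot"

# a later run with the same --session name starts from the stored cookies
curllm --session my-site "https://example.com/account" -d "extract all links"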
Table of Contents

  - Top 10 quick examples
  - Hacker News links (natural language)
  - Extract all links
  - BQL: Hacker News links (CSS selectors)
  - Fill contact form (visual + stealth)
  - Login and download
  - Visual scrape products
  - CAPTCHA demo
  - BQL JSON API
  - Stealth scraping news titles
  - Override model per-command
  - POST with custom headers
  - Export results (CSV/HTML/XML/XLS)
  - Tips

Hacker News links (natural language)

Command:

curllm "https://news.ycombinator.com" -d "Extract the page title and the first 30 news links. Use anchors matching CSS selectors 'a.titlelink' or 'a.storylink'. Return JSON shaped exactly as: {\"page\": {\"title\": string, \"links\": [{\"text\": string, \"url\": string}] } }" -v

Script:
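A minimal hand-written equivalent of such a script, assuming the JSON result is printed to stdout (the output file name is illustrative):

#!/usr/bin/env bash
set -euo pipefail

# Run the natural-language extraction and capture the JSON result from stdout.
curllm "https://news.ycombinator.com" \
  -d "Extract the page title and the first 30 news links. Use anchors matching CSS selectors 'a.titlelink' or 'a.storylink'. Return JSON shaped exactly as: {\"page\": {\"title\": string, \"links\": [{\"text\": string, \"url\": string}] } }" \
  > hn_links.json

echo "Saved result to hn_links.json"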

Notes:


Extract all links

Command:

curllm "https://example.com" -d "extract all links" -v

Script:


BQL: Hacker News links (CSS selectors)

Command:

curllm --bql -d 'query {
  page(url: "https://news.ycombinator.com") {
    title
    links: select(css: "a.storylink, a.titlelink") { text url: attr(name: "href") }
  }
}' -v

Script:
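A sketch of an equivalent script that keeps the BQL query in a shell variable for readability (the output file name and the stdout assumption are illustrative):

#!/usr/bin/env bash
set -euo pipefail

# Keep the BQL query readable in a single-quoted variable, then pass it via -d.
QUERY='query {
  page(url: "https://news.ycombinator.com") {
    title
    links: select(css: "a.storylink, a.titlelink") { text url: attr(name: "href") }
  }
}'

curllm --bql -d "$QUERY" > hn_bql.json
echo "Saved result to hn_bql.json"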


Fill contact form (visual + stealth)

Command:

curllm --stealth --visual \
  -d "Fill contact form: name=John Doe, email=john@example.com, message=Hello" \
  https://www.prototypowanie.pl/kontakt/ -v

Script:


Login and download

Command:

curllm -X POST --visual --stealth \
  -d '{"instruction": "Login and download invoice", "credentials": {"user": "john@example.com", "pass": "secret"}}' \
  https://app.example.com -v

Script:
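For a script version it is safer to read the credentials from the environment instead of hard-coding them. A sketch assuming the same -d payload shown above; the APP_USER/APP_PASS variable names are illustrative, and jq (already required for exports) is used only to JSON-escape the values:

#!/usr/bin/env bash
set -euo pipefail

# Fail fast if credentials are not exported in the calling shell.
: "${APP_USER:?export APP_USER first}"
: "${APP_PASS:?export APP_PASS first}"

# Build the -d payload with jq so the credential values are safely JSON-escaped.
PAYLOAD=$(jq -n --arg user "$APP_USER" --arg pass "$APP_PASS" \
  '{instruction: "Login and download invoice", credentials: {user: $user, pass: $pass}}')

curllm -X POST --visual --stealth -d "$PAYLOAD" https://app.example.com -v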


Visual scrape products

Command:

curllm --visual "https://shop.com" -d "extract top 10 products with prices" -v

Script:


CAPTCHA demo

Command:

curllm --visual --captcha "https://example.com/captcha" -d "solve captcha and submit form" -v

Script:

Notes:


BQL JSON API

Command:

curllm --bql -d 'query { page(url: "https://example.com") { title links { text url } }}' -v

Script:


Stealth scraping news titles

Command:

curllm --stealth "https://news.ycombinator.com" -d "extract first 30 titles" -v

Script:


Override model per-command

Command:

curllm --model qwen2.5:3b "https://example.com" -d "extract emails" -v

Script:


POST with custom headers

Command:

curllm -X POST -H 'Authorization: Bearer TOKEN' -H 'X-Trace: 1' \
  -d '{"instruction": "submit form with authenticated session"}' \
  https://httpbin.org/post -v

Script:
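A script version might keep the bearer token out of the command text itself; a sketch, with API_TOKEN as an illustrative variable name:

#!/usr/bin/env bash
set -euo pipefail

# Require the token to be exported in the calling shell.
: "${API_TOKEN:?export API_TOKEN first}"

curllm -X POST \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H 'X-Trace: 1' \
  -d '{"instruction": "submit form with authenticated session"}' \
  https://httpbin.org/post -v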


Export results (CSV/HTML/XML/XLS)

Use the CLI export flags to transform JSON results into tabular formats.

Requires: jq.

# CSV
curllm "https://ceneo.pl" -d "Find all products under 150zł and extract names, prices and urls" \
  --csv -o products.csv

# HTML table
curllm "https://example.com" -d "extract all links" --html -o links.html

# XML
curllm "https://example.com" -d "extract all emails" --xml -o emails.xml

# Excel-compatible (.xls generated as HTML table)
curllm "https://ceneo.pl" -d "Find all products under 150zł and extract names, prices and urls" \
  --xls -o products.xls

If -o is not provided, files are saved as curllm_export_YYYYMMDD-HHMMSS.(csv|html|xml|xls).
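If you need a column layout the export flags do not produce, you can also post-process a raw JSON result with jq. A sketch assuming you redirected the JSON output to products.json and that it holds a top-level products array with name, price and url fields (the actual field names depend on your instruction and the page):

# custom CSV: header row plus one line per product, columns in an explicit order
echo '"name","price","url"' > products_custom.csv
jq -r '.products[] | [.name, .price, .url] | @csv' products.json >> products_custom.csv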

Tips