Open Source -- HPC Research Tool

Your Remote Environment. From Your Phone.

Control AI coding agents on HPC clusters, WSL setups, or lab macOS machines via Telegram -- no VPN, no terminal, just your phone. Let AI help build your scripts, launch jobs, debug, and check results -- simply send a message. Built for researchers and bioinformaticians.


A full video/GIF demo will be added here soon.

How It Works

One authenticated SSH tunnel. Infinite commands.

📱 Telegram (input / output) → 🖥 Relay (Linux / WSL / Mac) → 🏛 Target environment (HPC / WSL / local) → 🤖 Agent (OpenCode / AI) → Results upload (Google Drive)

Built Different

No screen-scraping. No fragile hacks. A clean, deterministic pipeline.

📡

No VPN Needed

The relay runs on an already-authenticated machine. Your phone sends Telegram messages -- no Zscaler, no battery drain, no dropped cellular connections.

Network Bypass
🧠

Stateful AI Sessions

Session IDs are parsed and re-injected. The AI has full conversation memory from cold start -- no daemons, no tmux processes idling on login nodes.

sessionID Memory
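The re-injection step can be sketched as a small parser, assuming the agent emits JSON containing a session identifier (the `sessionID` field name mirrors the badge above, but the actual schema may differ per agent):

```python
import json


def extract_session_id(agent_output: str):
    """Pull the session ID out of an agent's JSON output so the next
    prompt can be re-attached to the same conversation."""
    try:
        data = json.loads(agent_output)
    except json.JSONDecodeError:
        return None
    # Field name is an assumption; adapt it to your agent's schema.
    return data.get("sessionID")


# On the next prompt, the relay passes the stored ID back to the agent,
# giving it full conversation memory without any long-lived daemon.
```

Because the ID is re-read from each response and re-injected on the next call, nothing has to stay resident on the login node between messages.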
📦

File Transfers

Download files from the target machine to Telegram with /send. Upload from your phone with /upload. Wildcards are supported for batch transfers.

Send + Upload + Wildcards
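Wildcard support like `/send Auto*.png` comes down to glob expansion on the target side. A minimal sketch (the function name is illustrative, not the relay's actual code):

```python
import glob
import os


def resolve_send_targets(pattern: str):
    """Expand a user-supplied pattern (~ and * supported) into the
    sorted list of files to ship back over Telegram."""
    expanded = os.path.expanduser(pattern)
    # Only return regular files; directories can't be sent as documents.
    return sorted(p for p in glob.glob(expanded) if os.path.isfile(p))
```

Each resulting path can then be sent as a separate Telegram document.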
🔌

Agent-Agnostic

Works with OpenCode, Claude Code, Aider, or any CLI AI tool with a --format json flag. Same relay, different agent.

Claude Code Compatible
💬

Chat History Viewer

Generate an interactive HTML page from all your AI conversations. Browse sessions, search messages, view tool calls, track token usage.

tools/chat_viewer.py

Commands Reference

Everything you can do from Telegram. Supports per-chat workspace/session isolation via env config, plus optional low-latency local voice transcription with faster-whisper.

AI Prompt
just type naturally
Any text message is sent as a prompt to the AI agent on the target machine. Voice and audio messages are accepted too (see Voice Note / Audio below).
Shell Command
!ls -la ~/results/
Prefix with ! to run raw shell commands directly on HPC.
Switch Model
/model
Open interactive model picker (provider -> model) and switch persistently.
Switch Session
/id ses_abc123
Switch sessions by ID (or run /id to pick from recent sessions).
Kill Process
/kill or /q
Terminate the currently running AI process immediately.
Download File
/send ~/results/plot.png
Fetch a file from HPC and receive it in Telegram.
Wildcard Fetch
/send Auto*.png
Download all matching files. Supports glob patterns.
Voice Note / Audio
send a Telegram voice note
Receive Telegram voice/audio messages. For best latency, enable optional local faster-whisper transcription so the relay converts speech to text before passing it to the agent.
Upload File
/upload ~/data/input.csv
Send a file from your phone to a path on HPC. Use as caption when sending a document.
Scheduled Tasks
/scheduled
Open interactive scheduled task manager (view/edit/delete).
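Per-chat workspace isolation can be as simple as mapping each Telegram chat ID to its own directory. A sketch under that assumption (the base path and function name are illustrative, not the relay's actual layout):

```python
import os


def workspace_for(chat_id: int, base: str = "~/relay_workspaces") -> str:
    """Give each Telegram chat its own working directory so prompts,
    sessions, and transferred files from different chats never collide."""
    path = os.path.join(os.path.expanduser(base), str(chat_id))
    os.makedirs(path, exist_ok=True)
    return path
```

The relay would then run the agent with this directory as its working directory, keeping sessions from different chats fully separated.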

Any Model. Any Provider.

30+ model aliases. Switch from Telegram with a single command.

OpenAI
GPT-4o / GPT-5 / Codex
GitHub Copilot Pro
Anthropic
Claude Opus 4.6 / Sonnet 4.6
GitHub Copilot Pro
Google
Gemini 2.5 / 3.1 Pro
GitHub Copilot Pro
xAI / Local
Grok Code Fast / Ollama
Copilot Pro / Free
University students: GitHub Copilot Pro (free with .edu email) gives you GPT-5.3-codex, Claude Opus 4.6, Gemini 3.1 Pro, and more. Pair with this relay for frontier AI on your target machine -- completely free.

Quick Start

From zero to controlling your workstation from Telegram.

1

Configure the Target Environment

Set CONNECTION_MODE in .env to ssh, wsl, or local. If using SSH, add a ControlMaster block to ~/.ssh/config on your relay machine and create the socket directory (mkdir -p ~/.ssh/sockets). Authenticate once with MFA -- the socket persists automatically.

Host hpc
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h:%p
    ControlPersist 8h
2

Create a Telegram Bot

Message @BotFather for your bot token. Get your Chat ID from @userinfobot.

Token from @BotFather Chat ID from @userinfobot
3

Clone, Configure, Run

Clone the repo, copy .env.example to .env, and fill in your values -- that's all the config you need.

git clone https://github.com/MichaelG0501/OpencodeClaw.git
cd OpencodeClaw

# Create a separate Python environment (recommended)
# python3 -m venv OpencodeClaw && source OpencodeClaw/bin/activate

pip install -r requirements.txt
cp .env.example .env   # Edit .env with your values
python relay_bot.py
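A minimal .env might look like the following. CONNECTION_MODE comes from step 1; the exact variable names for the bot token and chat ID are assumptions here -- check .env.example for the real keys:

```
# Values from step 2 (key names may differ; see .env.example)
TELEGRAM_BOT_TOKEN=123456:ABC-your-token
TELEGRAM_CHAT_ID=123456789

# Target environment: ssh, wsl, or local (step 1)
CONNECTION_MODE=ssh
```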
4

Send Your First Command

Open Telegram, message your bot. The AI executes on your workstation and streams the response back -- formatted, chunked, readable.

debug my RNA-seq pipeline in ~/analysis/pipeline.R

HPC Best Practices

Stay compliant with your cluster's acceptable use policy.

Recommended

  • Use AI to generate SLURM job scripts
  • Submit jobs via sbatch
  • Monitor with squeue
  • Edit scripts, manage files
  • Lightweight rclone sync
  • Short AI agent queries

Avoid on Login Nodes

  • Running pipelines directly
  • Heavy computation or multi-core jobs
  • Large data processing
  • Long-running processes
  • Anything your HPC policy prohibits

Example Workflow

  • You: "Write a SLURM script for my Nextflow pipeline, 8 cores, 32GB, 4h"
  • AI generates the script and saves it
  • You: !sbatch ~/jobs/run_nf.sh
  • You: !squeue -u $USER
  • You: /send ~/results/output.pdf
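The script the AI produces for the request above might look roughly like this; the module name and pipeline entry point are placeholders to adapt to your cluster:

```bash
#!/bin/bash
#SBATCH --job-name=run_nf
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=04:00:00
#SBATCH --output=%x_%j.log

# Placeholder module/command; adapt to your site's setup
module load nextflow
nextflow run main.nf -profile slurm
```

The heavy work then runs on a compute node via sbatch, keeping the login node free for the lightweight relay.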

Results in Google Drive. Instantly.

Couple the relay with rclone to sync pipeline outputs -- plots, PDFs, slides -- to Google Drive. Open them on your phone seconds after the HPC finishes.

🤖 AI writes analysis, saves output/
🔄 rclone syncs to Google Drive
📊 PDF / PNG / slides appear instantly
📱 View on phone, share with supervisor
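One low-footprint way to wire this up is a cron entry on the login node. The `gdrive:` remote name and output path are assumptions -- configure your own remote first with `rclone config`:

```
# Sync new outputs to Google Drive every 5 minutes
*/5 * * * * rclone sync ~/analysis/output gdrive:results --include "*.pdf" --include "*.png"
```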

Chat History Viewer

Generate an interactive HTML visualization of all your AI conversations. Browse sessions, search messages, inspect tool calls, and track token usage -- all in a dark-themed, responsive interface.

💾 Run chat_viewer.py on HPC
🌐 Generates interactive HTML page
🔍 Search, filter, browse sessions
📋 Track tokens, costs, tool calls

Ready to control your cluster?

Clone the repo, edit your config, and send your first command from Telegram.