How to Run OpenClaw, n8n & Ollama on Google Colab for FREE: Complete Self-Hosted AI Automation Guide

Stop paying for AI APIs. This complete guide shows you how to run OpenClaw, n8n, and Ollama on Google Colab's free tier. Self-hosted automation that works 24/7 with local LLMs.

Istiyaq Khan


Stop burning money on AI APIs. Here's the complete free setup that actually works.

If you're a student in Bangladesh (or anywhere), you know the pain: AI automation is powerful, but OpenAI API bills will eat your lunch money. Here's how to run OpenClaw, n8n, and Ollama completely FREE using Google Colab's GPU runtime—even while you sleep.

This is your zero-cost ticket to 24/7 self-hosted AI automation.


What You'll Build

  • OpenClaw: Your personal AI assistant running 100% free
  • n8n: No-code automation workflows without API costs
  • Ollama: Local LLMs (Qwen, Mistral, Gemma) running on Colab's free tier
  • ngrok: Public URLs to connect everything from anywhere

Total cost: $0. Forever.


Prerequisites

  • Google account (for Colab)
  • ngrok account (free tier works fine)
  • Basic understanding of terminal commands
  • Patience for ~15 minutes of setup (once)

Step-by-Step Setup Guide

First, open a second terminal in Google Colab. Click the terminal icon in the bottom toolbar to open it.

Step 1: Install Required Dependencies ( 2nd terminal )

bash
sudo apt-get install zstd

This installs the compression library needed for Ollama.


Step 2: Install Ollama ( 2nd terminal )

bash
curl -fsSL https://ollama.com/install.sh | sh

This downloads and installs Ollama—the engine that runs local LLMs.


Step 3: Install ngrok Tunnel Software ( 2nd terminal )

bash
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list && sudo apt update && sudo apt install ngrok

ngrok creates a public URL so OpenClaw and n8n can connect to your Ollama instance.


Step 4: Configure ngrok with Your Auth Token ( 2nd terminal )

bash
ngrok config add-authtoken YOUR_NGROK_TOKEN_HERE

Replace YOUR_NGROK_TOKEN_HERE with your actual ngrok token from your ngrok dashboard.


Step 5: Start Ollama Server

Run these in Colab's first terminal:

bash
# Kill any existing instances first
pkill ollama

# Start the server and send logs to a file in the background
nohup ollama serve > ollama.log 2>&1 &

Your Ollama server is now running locally.
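Before moving on, it helps to confirm the server actually responds. Ollama listens on port 11434 by default and exposes a `GET /api/tags` endpoint that lists installed models. Here's a small sketch using only the standard library (the helper names `ollama_url` and `list_models` are mine, not part of Ollama's tooling):

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # Ollama's default listen address

def ollama_url(path, host=OLLAMA_HOST):
    """Join the Ollama base URL with an API path."""
    return host.rstrip("/") + "/" + path.lstrip("/")

def list_models(host=OLLAMA_HOST):
    """Return names of locally installed models via GET /api/tags."""
    with urllib.request.urlopen(ollama_url("/api/tags", host)) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

# list_models() will return an empty list until you pull models in Step 6.
```

If `list_models()` raises a connection error, the server from Step 5 isn't up yet.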


Step 6: Download Recommended Models

In your second terminal, pull one or two for n8n and OpenClaw:

bash
ollama pull qwen2.5:14b
ollama pull qwen2.5:32b
ollama pull gpt-oss:20b
ollama pull mistral-nemo:latest
ollama pull nemotron-cascade-2:30b
ollama pull gemma4:26b

Pro tip: These models handle automation tasks, coding, and reasoning without the $20/month OpenAI tax.
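Once a model is pulled, you can smoke-test it from a Colab cell with Ollama's `POST /api/generate` endpoint (set `"stream": false` to get one complete JSON response). A minimal standard-library sketch, assuming the qwen2.5:14b pull above (the helper names are mine):

```python
import json
import urllib.request

def build_generate_request(prompt, model="qwen2.5:14b"):
    """Request body for Ollama's POST /api/generate (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="qwen2.5:14b", host="http://localhost:11434"):
    """One-shot completion against the local Ollama server."""
    body = json.dumps(build_generate_request(prompt, model)).encode()
    req = urllib.request.Request(
        host + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# generate("Reply with one word: ready?")  # needs the Step 5 server running
```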


Step 7: Create Your Public Tunnel ( 2nd terminal )

bash
1ngrok http 11434

This creates a public URL like https://abc123.ngrok.io. Save this URL; you'll need it for OpenClaw and n8n.
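Rather than copying the URL from the terminal each time, you can read it from the ngrok agent's local inspection API, which serves a JSON list of active tunnels at http://127.0.0.1:4040/api/tunnels. A sketch (the helper names are mine, and the 4040 port is ngrok's default inspection address):

```python
import json
import urllib.request

def public_url_from_tunnels(tunnels_json):
    """Pick the https public_url out of ngrok's /api/tunnels response."""
    for tunnel in tunnels_json.get("tunnels", []):
        url = tunnel.get("public_url", "")
        if url.startswith("https://"):
            return url
    return None

def current_ngrok_url():
    """Ask the local ngrok agent for the current public URL."""
    with urllib.request.urlopen("http://127.0.0.1:4040/api/tunnels") as resp:
        return public_url_from_tunnels(json.load(resp))

# print(current_ngrok_url())  # run after Step 7's `ngrok http 11434`
```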


Step 8: Fix CORS Permissions (CRITICAL)

Colab has strict CORS rules. Stop Ollama and restart with open permissions.

Run this Python code in Colab:

python
import os
import subprocess
import time

# 1. Kill the existing restricted Ollama server
os.system("pkill ollama")
time.sleep(2)

# 2. Restart Ollama with completely open CORS permissions
env = os.environ.copy()
env["OLLAMA_HOST"] = "0.0.0.0"
env["OLLAMA_ORIGINS"] = "*"

subprocess.Popen(
    ["ollama", "serve"],
    env=env,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL
)
time.sleep(3)  # Wait for the daemon to start

print("Ollama successfully restarted with open permissions.")

Without this step, OpenClaw can't connect.
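To confirm the new permissions took effect, you can mimic a cross-origin browser request and check for the Access-Control-Allow-Origin header in the reply; with OLLAMA_ORIGINS set to `*`, it should come back as `*`. A standard-library sketch (the helper name and example origin are mine):

```python
import urllib.request

def cors_check_request(host="http://localhost:11434", origin="https://example.com"):
    """Build a GET that mimics a cross-origin browser call to Ollama."""
    return urllib.request.Request(host + "/api/tags", headers={"Origin": origin})

# With the restarted server running:
# with urllib.request.urlopen(cors_check_request()) as resp:
#     print(resp.headers.get("Access-Control-Allow-Origin"))  # "*" means open CORS
```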


Step 9: Keep Your Session Alive (Python Loop) Optional

Colab disconnects idle sessions. Run this to keep it alive:

python
import time
from datetime import datetime
from IPython.display import clear_output

print("Starting Keep-Alive Loop. Press the Stop button to end.")

while True:
    # Clear the previous output so the cell doesn't get infinitely long
    clear_output(wait=True)

    # Get and format the current time
    current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    # Print the status
    print(f"[{current_time}] Session is active. Next ping in 5 minutes...")

    # Pause execution for 300 seconds (5 minutes)
    time.sleep(300)

This loops every 5 minutes to prevent timeout.


Step 10: Keep Colab Alive in Browser (DevTools) Optional

For long-running sessions, open Developer Tools Console (F12 → Console tab) and paste:

javascript
function keepAlive() {
    console.log("Simulating click to keep Colab alive...");
    // Clicks the connect button at the top right to simulate user activity
    const connectButton = document.querySelector("colab-connect-button");
    if (connectButton) {
        connectButton.click();
    }
}

// Run the function every 5 minutes (300,000 milliseconds)
setInterval(keepAlive, 300000);

Leave this browser tab open. It simulates clicks to keep your session alive indefinitely.


Connect OpenClaw to Your Ollama Instance

Now that Ollama is running with a public ngrok URL:

  1. Open your OpenClaw config (usually config.yaml)
  2. Add your Ollama provider, pointing to your ngrok URL:
yaml
providers:
  - name: colab-ollama
    type: ollama
    baseUrl: https://your-ngrok-url.ngrok.io  # Replace with your actual URL
    defaultModel: qwen2.5:14b
  3. Restart OpenClaw and select your Colab Ollama model

You're now running AI automation 100% free.


Connect n8n to Your Ollama Instance

For n8n workflows:

  1. Install the Ollama community node in n8n
  2. Configure credentials, using your ngrok URL as the base URL
  3. Build workflows using local LLMs instead of OpenAI

Cost per API call: $0.00
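If an n8n node or other tool only speaks the OpenAI API rather than Ollama's native one, note that Ollama also exposes an OpenAI-compatible endpoint under /v1. A minimal sketch of the chat body you would POST to `/v1/chat/completions` (the model name assumes the Step 6 pulls; the helper name is mine):

```python
def build_chat_request(user_message, model="qwen2.5:14b"):
    """OpenAI-style chat body accepted by Ollama's /v1/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

# POST this as JSON to https://your-ngrok-url.ngrok.io/v1/chat/completions
# with the header "Content-Type: application/json".
```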


Troubleshooting

Problem                          Solution
ngrok URL changed on reconnect   Use ngrok's paid tier for static URLs, or update your config each time
Colab session ended              Restart from Step 5; models are cached
Connection refused               Make sure you ran Step 8 (CORS fix)
Out of memory                    Use smaller models (qwen2.5:7b instead of 32b)
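For the out-of-memory row, a rough rule of thumb: a 4-bit quantized model needs about half a gigabyte of VRAM per billion parameters, plus some overhead for the context window. This back-of-the-envelope estimate is my own approximation, not an official sizing formula:

```python
def approx_vram_gb(billions_of_params, bytes_per_param=0.5, overhead_gb=1.5):
    """Very rough VRAM estimate for a quantized model (q4 is ~0.5 bytes/param)."""
    return billions_of_params * bytes_per_param + overhead_gb

# approx_vram_gb(7)   -> 5.0 GB: fits comfortably on Colab's free T4 (16 GB)
# approx_vram_gb(32)  -> 17.5 GB: too big for a free T4; pick a smaller model
```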

Why This Matters

If you're building in Bangladesh or any country where $20/month is real money, this setup is liberating:

  • No API bills
  • No rate limits
  • No credit card required
  • Runs on free Google infrastructure
  • Models are yours

This is the democratization of AI infrastructure. You just need the knowledge.


Next Steps

  1. Set up your first OpenClaw workflow using local models
  2. Build n8n automations that don't cost per execution
  3. Document what you build—share it with others fighting the same battle
  4. Scale up when you have revenue, not before

Who This Guide Is For

  • Students learning AI without burning through savings
  • Solopreneurs building before they have revenue
  • Builders in emerging markets where $20/month is prohibitive
  • Anyone who believes AI infrastructure should be accessible

Questions? Drop them in the comments. I built this because I needed it. Now it's yours.


Built with rage against the API bill. Share this with someone who needs it.


About the Author: Istiyaq Khan Razin is the founder of IKK Studio, building AI workflows and content systems for creators and small businesses. Documenting the journey of building real skills and automation from Bangladesh with limited resources.


Published: April 3, 2026
