Claude Code

Install

```bash
curl -fsSL https://claude.ai/install.sh | bash
```

https://claude.com/product/claude-code

After setup, run the `claude` command from the command line.

Ollama & Claude Code

The video below shows how to use a local Ollama instance with Claude Code:

https://www.youtube.com/watch?v=FPUsn0ZK000

Can I use Claude Code with other LLMs?

The short answer is yes, but it requires a bit of a "workaround" because Claude Code is built specifically to talk to Anthropic's API.

While Anthropic doesn't officially support other models, you can use API proxies or local gateways to "trick" Claude Code into using GPT-4, Gemini, or even local models like Qwen or Llama.

How it Works: The "Base URL" Trick

Claude Code uses an environment variable called ANTHROPIC_BASE_URL to know where to send its requests. By default, this points to Anthropic. By changing this, you can route the traffic through a translator that converts Claude's requests into a format other models understand.
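The routing logic above can be sketched in a few lines of shell. This is illustrative only: the default endpoint shown is an assumption for illustration, since Claude Code resolves it internally.

```bash
# Sketch of the override: if ANTHROPIC_BASE_URL is unset, the official
# endpoint is used; setting it reroutes every request.
unset ANTHROPIC_BASE_URL
BASE_URL="${ANTHROPIC_BASE_URL:-https://api.anthropic.com}"
echo "default target: $BASE_URL"

export ANTHROPIC_BASE_URL="http://localhost:11434"
BASE_URL="${ANTHROPIC_BASE_URL:-https://api.anthropic.com}"
echo "override target: $BASE_URL"
```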
 
1. Using Local Models (Ollama)
This is the most popular way to run Claude Code for free using your own hardware.
  • The Setup: Ollama now provides an Anthropic-compatible endpoint specifically for this.
  • The Command:

```bash
export ANTHROPIC_BASE_URL="http://localhost:11434"
export ANTHROPIC_AUTH_TOKEN="ollama"
claude --model qwen3-coder:32b
```

  • Best Local Models: As of 2026, Qwen 3.5 Coder and GLM-4.7 are the top choices because they handle Claude Code's complex "tool-calling" (reading files, running shell commands) better than most.
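With those variables set, Claude Code speaks to Ollama in Anthropic's Messages API format. A hedged sketch of what such a request body looks like — the `/v1/messages` path and headers follow Anthropic's API; whether your Ollama build exposes exactly that path is an assumption to verify:

```bash
# Build an Anthropic-style Messages API request body (illustrative only).
MODEL="qwen3-coder:32b"
BODY=$(printf '{"model":"%s","max_tokens":256,"messages":[{"role":"user","content":"Say hello"}]}' "$MODEL")
echo "$BODY"

# To actually send it against a running Ollama instance:
#   curl -s "$ANTHROPIC_BASE_URL/v1/messages" \
#        -H "x-api-key: $ANTHROPIC_AUTH_TOKEN" \
#        -H "anthropic-version: 2023-06-01" \
#        -H "content-type: application/json" \
#        -d "$BODY"
```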

2. Using Other Cloud Models (LiteLLM or OpenRouter)
If you want to use GPT-4o or Gemini 2.0/3.0, you can use a proxy like LiteLLM or a service like OpenRouter.

  • LiteLLM: You run a small server on your machine that acts as a middleman.
  • OpenRouter: You simply point Claude Code to OpenRouter's API:
```bash
export ANTHROPIC_BASE_URL="https://openrouter.ai/api"
export ANTHROPIC_AUTH_TOKEN="your-openrouter-key"
claude --model openai/gpt-4o
```
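The LiteLLM route looks similar, except the "middleman" runs on your own machine. A hedged sketch, assuming port 4000 (LiteLLM's usual proxy default) and placeholder model/key names:

```bash
# 1) In another terminal, start the LiteLLM proxy as the middleman
#    (requires: pip install 'litellm[proxy]'):
#      litellm --model gpt-4o --port 4000
# 2) Point Claude Code at the local proxy:
export ANTHROPIC_BASE_URL="http://localhost:4000"
export ANTHROPIC_AUTH_TOKEN="sk-placeholder"  # placeholder; real auth is configured in LiteLLM
echo "routing Claude Code via: $ANTHROPIC_BASE_URL"
```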


The Catch: Why Claude is still "Best"
Even though you can use other models, Claude Code was designed for Claude 3.5/3.7 Sonnet. You might run into these issues with other LLMs:
  • Diff Failures: Claude is exceptionally good at producing "diffs" (specific instructions on which lines of code to change). Other models often hallucinate line numbers or fail to provide the exact formatting Claude Code needs to apply the edit automatically. 
  • Tool-Calling Loops: Claude Code is an "agent"—it thinks, then reads a file, then thinks, then runs a test. Other models can get "confused" in these long loops and stop acting or start repeating themselves. 
  • Context Length: Claude Code sends a massive amount of context (it reads your whole repo). Local models need at least a 64k context window to work effectively without "forgetting" the task halfway through.
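For the context-length point, one knob worth knowing: recent Ollama releases read an OLLAMA_CONTEXT_LENGTH environment variable at startup. Whether your version supports it is an assumption to verify against the Ollama docs.

```bash
# Raise the serving context window before starting Ollama (illustrative).
export OLLAMA_CONTEXT_LENGTH=65536   # ~64k tokens, the rough floor noted above
echo "OLLAMA_CONTEXT_LENGTH=$OLLAMA_CONTEXT_LENGTH"
# then restart the server so it picks this up:
#   ollama serve
```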

Plans

different types