
Customize GitHub Copilot CLI Status Line


Over the past few months, I have switched most of my development workflows to GitHub Copilot CLI. Whether I'm exploring a codebase, writing code, debugging, or even building a new application, I find myself in GitHub Copilot CLI most of the time. It's fast, it keeps me in the flow, and the agentic capabilities make it incredibly productive.

Many users have asked me how to track context usage in the CLI.

First of all, you can always use the /context and /usage commands to get a snapshot of your current context window usage.

Beyond those commands, you can also have a live status line that shows your token consumption and context usage percentage in real time, without typing anything. You just need the experimental statusLine.command feature, which runs a custom script of yours to generate the status line content.

Experimental Feature

As of May 1, 2026, statusLine.command is an experimental feature and is subject to change. The payload format, configuration keys, and behavior described here may evolve in future releases. I'll do my best to keep this post updated, but always check the official docs for the latest.

This post walks you through setting it up from scratch. I also document the full JSON payload that Copilot sends to your script — I captured it directly from a live session to build the reference table below.


Prerequisites

| Tool | Why |
|---|---|
| GitHub Copilot CLI | The host that calls your script |
| jq | Parses the JSON payload piped to stdin (stedolan.github.io/jq) |
| bc | Used by the sample script below to format token counts (preinstalled on most systems) |
| A POSIX-compatible shell (bash, zsh, dash, …) | Runs the script |

Windows users — see the Windows section at the end. (I am not a Windows user, so I welcome contributions to that section!)
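
If you want to sanity-check that jq is on your PATH before wiring anything up, a one-liner with any small JSON is enough:

echo '{"ok":true}' | jq -r '.ok'
# should print: true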


1 — Enable the Experimental Feature

statusLine is behind an experimental gate. Pick one of the following approaches, then restart Copilot CLI:

# Option A: turn on all experimental features
copilot --experimental
# (or type /experimental on inside a running session)

# Option B: enable only STATUS_LINE in config
# Edit ~/.copilot/settings.json and add:
# "feature_flags": { "enabled": ["STATUS_LINE"] }

After restarting, the CLI will look for statusLine in your config.

tip

If you toggle experimental mode, restart Copilot CLI before testing: the feature flags are only read at startup. The quickest way is to run the /restart command.


2 — Create the Script

Save the following as ~/.copilot/statusline.sh (or any path you like):

#!/usr/bin/env bash

set -eu

payload=$(cat)

current_context_tokens=$(echo "$payload" | jq -r '.context_window.current_context_tokens // empty')
displayed_context_limit=$(echo "$payload" | jq -r '.context_window.displayed_context_limit // empty')
used_percentage=$(echo "$payload" | jq -r '.context_window.used_percentage // empty')

total_duration_ms=$(echo "$payload" | jq -r '.cost.total_duration_ms // 0')

# ── Format token counts (e.g. 31900 → 31.9k) ────────────────────────────────
current_context_tokens_formatted="$current_context_tokens"
if [ -n "$current_context_tokens" ] && [ "$current_context_tokens" -ge 1000 ]; then
  # bc handles the one-decimal division (bash arithmetic is integer-only)
  current_context_tokens_formatted="$(printf '%.1fk' "$(echo "scale=1; $current_context_tokens / 1000" | bc)")"
fi

displayed_context_limit_formatted="$displayed_context_limit"
if [ -n "$displayed_context_limit" ] && [ "$displayed_context_limit" -ge 1000 ]; then
  displayed_context_limit_formatted="$((displayed_context_limit / 1000))k"
fi

# ── Format total duration as HH:MM:SS if available ──────────────────────────
total_duration="00:00:00"
if [ "$total_duration_ms" -gt 0 ]; then
  # Convert ms to seconds, then format as HH:MM:SS (bash has no built-in for this)
  total_seconds=$((total_duration_ms / 1000))
  hours=$((total_seconds / 3600))
  minutes=$(((total_seconds % 3600) / 60))
  seconds=$((total_seconds % 60))
  total_duration=$(printf "%02d:%02d:%02d" "$hours" "$minutes" "$seconds")
fi

# ── Build the gauge ──────────────────────────────────────────────────────────
# 10-cell bar: each cell = 10%. Filled = █, empty = ░
gauge=""
if [ -n "$used_percentage" ]; then
  # Round to nearest integer (jq gives a float like 42.7)
  pct_int=$(printf '%.0f' "$used_percentage")

  filled=$(( pct_int / 10 ))
  empty=$(( 10 - filled ))

  i=0; while [ $i -lt $filled ]; do gauge="${gauge}█"; i=$((i+1)); done
  i=0; while [ $i -lt $empty ]; do gauge="${gauge}░"; i=$((i+1)); done

  gauge="${gauge} ${pct_int}%"
fi

# ── Print the final status line ──────────────────────────────────────────────
printf "🧠 Context %s/%s - %s - " "$current_context_tokens_formatted" "$displayed_context_limit_formatted" "$gauge"
printf "\t⏱️ %s" "$total_duration"

Make it executable:

chmod +x ~/.copilot/statusline.sh
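
You can also dry-run the script outside Copilot by piping a hand-made payload into it. The field names below match the payload reference later in this post; the numbers are made up:

echo '{"context_window":{"current_context_tokens":31900,"displayed_context_limit":168000,"used_percentage":19.2},"cost":{"total_duration_ms":1054000}}' | ~/.copilot/statusline.sh
# 🧠 Context 31.9k/168k - █░░░░░░░░░ 19% -   ⏱️ 00:17:34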

Use Any Language

The statusLine.command is not limited to Bash — you can use any language that supports a shebang (#!). As long as your script reads JSON from stdin and prints the status line to stdout, it will work.

Here are equivalent examples in Shell (sh), Python, and JavaScript (Node.js):

#!/bin/sh

input=$(cat)
model=$(echo "$input" | jq -r '.model.display_name')
displayed_context_limit=$(echo "$input" | jq -r '.context_window.displayed_context_limit')
printf "%s - Context Limit %s (sh)" "$model" "$displayed_context_limit"
tip

Don't forget to make your script executable (chmod +x) and ensure the interpreter is available on your PATH. For Python and Node.js, you don't need jq — the JSON parsing is handled natively.


3 — Wire It Up in Config

Open ~/.copilot/settings.json and add the statusLine key:

{
  "statusLine": {
    "type": "command",
    "command": "~/.copilot/statusline.sh",
    "padding": 1
  }
}

Tip: The command value supports ~ expansion and environment variables, so $HOME/.copilot/statusline.sh works too.
If you use COPILOT_HOME or --config-dir, your config lives in that directory instead of ~/.copilot/.
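
If you went the settings.json route in step 1, both pieces live in the same file. A combined sketch (keep any other keys you already have):

{
  "feature_flags": { "enabled": ["STATUS_LINE"] },
  "statusLine": {
    "type": "command",
    "command": "~/.copilot/statusline.sh",
    "padding": 1
  }
}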

Restart Copilot CLI. After the first model call you should see something like:

 🧠 Context 0/168k - █░░░░░░░░░ 19% -   ⏱️ 00:17:34

Statusline Command JSON Payload

All fields in the JSON payload that Copilot CLI pipes to your statusLine.command script.
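
Before the field-by-field tables, here is a trimmed, illustrative sketch of the payload shape. The values are made up and many fields are omitted; the tables below are the authoritative list:

{
  "cwd": "/home/you/projects/demo",
  "version": "1.0.40",
  "model": { "id": "gpt-5.4", "display_name": "gpt-5.4 (medium)" },
  "workspace": { "current_dir": "/home/you/projects/demo" },
  "cost": { "total_duration_ms": 1054000, "total_lines_added": 120, "total_lines_removed": 30 },
  "context_window": {
    "current_context_tokens": 31900,
    "displayed_context_limit": 168000,
    "current_context_used_percentage": 19.0,
    "used_percentage": 8.0,
    "context_window_size": 400000
  },
  "remote": { "connected": false }
}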

Full JSON Payload Reference

Top-Level Fields

| Field | Type | Nullable | Description |
|---|---|---|---|
| cwd | string | No | Current working directory of the CLI process |
| session_id | string | Yes | Unique identifier for the current session |
| session_name | string | Yes | Human-readable session title (e.g. "Configure Copilot Integration") |
| transcript_path | string | Yes | File system path to the session state directory |
| version | string | No | CLI version (e.g. "1.0.40") |

model

Information about the currently selected model.

| Field | Type | Nullable | Description |
|---|---|---|---|
| model.id | string | Yes | Model identifier (e.g. "gpt-5.4", "claude-sonnet-4.5") |
| model.display_name | string | Yes | Human-readable model name (e.g. "gpt-5.4 (medium)") |

workspace

| Field | Type | Nullable | Description |
|---|---|---|---|
| workspace.current_dir | string | No | Current working directory (same value as top-level cwd) |

cost

Session-level cost and activity metrics (cumulative for the entire session).

| Field | Type | Nullable | Description |
|---|---|---|---|
| cost.total_api_duration_ms | number | No | Total time spent in API calls (milliseconds) |
| cost.total_lines_added | number | No | Total lines added by edits in this session |
| cost.total_lines_removed | number | No | Total lines removed by edits in this session |
| cost.total_duration_ms | number | No | Wall-clock time since session start (milliseconds) |
| cost.total_premium_requests | number | No | Number of premium model requests made |

context_window

Token usage and context window metrics. This section contains two different perspectives — see the Notes after the tables.

Cumulative Token Counts (across all API calls in the session)

| Field | Type | Nullable | Description | Used by |
|---|---|---|---|---|
| context_window.total_input_tokens | number | No | Sum of input tokens from every API call | /usage |
| context_window.total_output_tokens | number | No | Sum of output tokens from every API call | /usage |
| context_window.total_cache_read_tokens | number | No | Total tokens served from prompt cache | /usage (cached) |
| context_window.total_cache_write_tokens | number | No | Total tokens written to prompt cache | |
| context_window.total_reasoning_tokens | number | No | Total tokens used for chain-of-thought reasoning | /usage (reasoning) |
| context_window.total_tokens | number | No | total_input_tokens + total_output_tokens | |

Raw Model View (based on last API call vs full model window)

These fields use the full model context window (max_context_window_tokens) as their reference. They reflect the most recent API call only, not accumulated usage.

| Field | Type | Nullable | Description | Formula |
|---|---|---|---|---|
| context_window.context_window_size | number | Yes | Raw maximum context window from model capabilities | model.capabilities.limits.max_context_window_tokens |
| context_window.last_call_input_tokens | number | No | Input tokens from the most recent main-agent API call | |
| context_window.last_call_output_tokens | number | No | Output tokens from the most recent main-agent API call | |
| context_window.used_percentage | number | Yes | Percentage of full context window used by last call | (last_call_input + last_call_output) / context_window_size × 100 |
| context_window.remaining_percentage | number | Yes | Percentage of full context window remaining after last call | 100 - used_percentage |
| context_window.remaining_tokens | number | Yes | Tokens remaining after last call | context_window_size - (last_call_input + last_call_output) |

Display View (matches /context command and model badge)

These fields use the displayed context limit — a smaller, more practical number that accounts for output token reservations. Use these fields to match the /context UI.

| Field | Type | Nullable | Description | Formula |
|---|---|---|---|---|
| context_window.current_context_tokens | number | Yes | Locally-counted accumulated prompt tokens (system + tools + messages). Updated live via session events. | Computed by calculateTokenBreakdown() |
| context_window.displayed_context_limit | number | Yes | Practical context limit shown in the UI. Smaller than context_window_size because it reserves space for output. | promptTokenLimit + min(32k, outputTokenLimit) (64k for 1M models) |
| context_window.current_context_used_percentage | number | Yes | Percentage of displayed limit used. Matches the model badge. | current_context_tokens / displayed_context_limit × 100 |

Current Model Usage (per-model breakdown)

| Field | Type | Nullable | Description |
|---|---|---|---|
| context_window.current_usage | object | Yes | Token usage breakdown for the currently selected model only |
| context_window.current_usage.input_tokens | number | No* | Input tokens for current model |
| context_window.current_usage.output_tokens | number | No* | Output tokens for current model |
| context_window.current_usage.cache_creation_input_tokens | number | No* | Cache write tokens for current model |
| context_window.current_usage.cache_read_input_tokens | number | No* | Cache read tokens for current model |

Fields marked No* are non-nullable but only appear when their parent object is present (the same applies in the remote table below).

remote

Information about remote/cloud session state (e.g. Copilot Cloud Agent tasks).

| Field | Type | Nullable | Description |
|---|---|---|---|
| remote.connected | boolean | No | Whether a remote session is active |
| remote.indicator | string | Yes | Display indicator character (e.g. "☁") — only present when connected |
| remote.task_id | string | Yes | Remote task identifier |
| remote.task_name | string | Yes | Remote task display name |
| remote.task_url | string | Yes | URL to the remote task |
| remote.task_type | string | Yes | Type of remote task |
| remote.repository | object | Yes | Repository info for the remote session (see sub-fields below) |
| remote.repository.owner | string | No* | Repository owner (e.g. "github") |
| remote.repository.name | string | No* | Repository name (e.g. "copilot-agent-runtime") |
| remote.repository.branch | string | No* | Branch name |
| remote.pull_request_number | number | Yes | Associated PR number, if any |
| remote.context | object | Yes | Working directory context from the remote environment (see sub-fields below) |
| remote.context.cwd | string | No* | Remote current working directory |
| remote.context.gitRoot | string | Yes | Remote git repository root |
| remote.context.repository | string | Yes | Repository identifier (e.g. "owner/repo") |
| remote.context.hostType | string | Yes | Hosting platform type (e.g. "github", "ado") |
| remote.context.repositoryHost | string | Yes | Raw host string (e.g. "github.com") |
| remote.context.branch | string | Yes | Current git branch |
| remote.context.headCommit | string | Yes | Current HEAD commit SHA |
| remote.context.baseCommit | string | Yes | Merge-base commit SHA (fork point from remote default branch) |

Notes

Why context_window_size ≠ displayed_context_limit

  • context_window_size is the raw model maximum (e.g. 400,000 for gpt-5.4).
  • displayed_context_limit is the practical limit shown to users = promptTokenLimit + min(32k, outputTokenLimit). It's smaller because part of the window is reserved for output tokens.
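
To make the formula concrete with purely hypothetical numbers: a model with promptTokenLimit = 100k and outputTokenLimit = 50k would report a displayed_context_limit of 100k + min(32k, 50k) = 132k, even if its raw context_window_size is much larger.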

Why used_percentage ≠ current_context_used_percentage

They differ in both numerator and denominator:

| Metric | Numerator | Denominator |
|---|---|---|
| used_percentage | last_call_input + last_call_output (single API call) | context_window_size (full model limit) |
| current_context_used_percentage | current_context_tokens (accumulated, locally counted) | displayed_context_limit (practical limit) |

Recommendation: Use current_context_used_percentage and displayed_context_limit for UI parity with the CLI.
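
If you want your own status line to follow that recommendation, the lookups mirror the ones in the step 2 script; a sketch, with arbitrary variable names:

ui_pct=$(echo "$payload" | jq -r '.context_window.current_context_used_percentage // empty')
ui_limit=$(echo "$payload" | jq -r '.context_window.displayed_context_limit // empty')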


Windows: Git Bash & PowerShell

Option A — Git Bash

If you have Git for Windows installed, you already have Bash and can install jq via scoop or by dropping the jq.exe binary on your PATH.

Set the command in config to invoke the script through bash:

{
  "statusLine": {
    "type": "command",
    "command": "bash C:/Users/you/.copilot/statusline.sh"
  }
}

Use forward slashes or escaped backslashes in the path.
~ expansion may not work on Windows; use the absolute path instead.

Option B — PowerShell (no jq needed)

If you prefer a native Windows approach, save a .ps1 script:

# statusline.ps1
# Read the JSON payload from stdin and parse it
$json = $input | Out-String | ConvertFrom-Json
$t = $json.context_window.total_tokens
# Round the percentage and build a 10-cell gauge (█ = used, ░ = free)
$p = [math]::Round($json.context_window.used_percentage)
$filled = [int][math]::Floor($p / 10)
$bar = ('█' * $filled) + ('░' * (10 - $filled))
Write-Host -NoNewline "tokens: $t $bar ${p}%"

Config:

{
  "statusLine": {
    "type": "command",
    "command": "powershell -NoProfile -File C:\\Users\\you\\.copilot\\statusline.ps1"
  }
}

Caveat: PowerShell startup adds latency. If the 10-second timeout is tight on your machine, prefer the Git Bash route.


Recap

  1. Enable the experimental STATUS_LINE feature and restart.
  2. Create ~/.copilot/statusline.sh with the gauge script.
  3. Configure statusLine.command in ~/.copilot/settings.json.

And remember: you can always type /context and /usage inside a Copilot CLI session to get a one-off snapshot of your context window usage. The status line just makes it always visible without interrupting your flow.

Happy coding!