pyTigerGraph is a Python client for TigerGraph databases. It wraps the REST++ and GSQL APIs and provides both a synchronous and an asynchronous interface.
Full documentation: https://docs.tigergraph.com/pytigergraph/current/intro/
```shell
pip install pyTigerGraph
```

| Extra | What it adds | Install command |
|---|---|---|
| `gds` | Graph Data Science — data loaders for PyTorch Geometric, DGL, and Pandas | `pip install 'pyTigerGraph[gds]'` |
| `mcp` | Model Context Protocol server — installs pyTigerGraph-mcp (convenience alias) | `pip install 'pyTigerGraph[mcp]'` |
| `fast` | orjson JSON backend — 2–10× faster parsing, releases the GIL under concurrent load | `pip install 'pyTigerGraph[fast]'` |
Extras can be combined:

```shell
pip install 'pyTigerGraph[fast,gds,mcp]'
```

Install torch before installing the gds extra:

- Install Torch
- Optionally install PyTorch Geometric or DGL

```shell
pip install 'pyTigerGraph[gds]'
```
orjson is a Rust-backed JSON library that is detected and used automatically when installed. No code changes are required. It improves throughput in two ways:
- Faster parsing — 2–10× vs stdlib `json`
- GIL release — threads parse responses concurrently instead of serialising on the GIL

If orjson is not installed, the library falls back to stdlib `json` transparently.
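The transparent-fallback behaviour described above can be sketched with the common try/except import pattern (a minimal illustration only; pyTigerGraph's actual detection code may differ):

```python
# Minimal sketch of an orjson-with-stdlib-fallback pattern.
# Illustrative only — not pyTigerGraph's real internals.
try:
    import orjson

    def json_loads(data):
        # orjson accepts bytes or str and parses in native Rust code
        return orjson.loads(data)
except ImportError:
    import json

    def json_loads(data):
        # stdlib fallback: same result, slower
        return json.loads(data)

print(json_loads('{"ok": true}'))
```

Callers use `json_loads` without knowing which backend is active, which is what makes the optimisation a drop-in upgrade.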
```python
from pyTigerGraph import TigerGraphConnection

conn = TigerGraphConnection(
    host="http://localhost",
    graphname="my_graph",
    username="tigergraph",
    password="tigergraph",
)
print(conn.echo())
```

Use as a context manager to ensure the underlying HTTP session is closed:
```python
with TigerGraphConnection(host="http://localhost", graphname="my_graph") as conn:
    result = conn.runInstalledQuery("my_query", {"param": "value"})
```

AsyncTigerGraphConnection exposes the same API as TigerGraphConnection but with async/await syntax. It uses aiohttp internally and shares a single connection pool across all concurrent tasks, making it significantly more efficient than threaded sync code at high concurrency.
```python
import asyncio

from pyTigerGraph import AsyncTigerGraphConnection

async def main():
    async with AsyncTigerGraphConnection(
        host="http://localhost",
        graphname="my_graph",
        username="tigergraph",
        password="tigergraph",
    ) as conn:
        result = await conn.runInstalledQuery("my_query", {"param": "value"})
        print(result)

asyncio.run(main())
```

Authenticate with a GSQL secret instead of a username and password:

```python
conn = TigerGraphConnection(
    host="http://localhost",
    graphname="my_graph",
    gsqlSecret="my_secret",  # generates a session token automatically
)
```

Connect to a TigerGraph Cloud instance by setting `tgCloud=True`:

```python
conn = TigerGraphConnection(
    host="https://my-instance.i.tgcloud.io",
    graphname="my_graph",
    username="tigergraph",
    password="tigergraph",
    tgCloud=True,
)
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `host` | str | `"http://127.0.0.1"` | Server URL including scheme (http:// or https://) |
| `graphname` | str | `""` | Target graph name |
| `username` | str | `"tigergraph"` | Database username |
| `password` | str | `"tigergraph"` | Database password |
| `gsqlSecret` | str | `""` | GSQL secret for token-based auth (preferred over username/password) |
| `apiToken` | str | `""` | Pre-obtained REST++ API token |
| `jwtToken` | str | `""` | JWT token for customer-managed authentication |
| `restppPort` | int\|str | `"9000"` | REST++ port (auto-fails over to 14240/restpp for TigerGraph 4.x) |
| `gsPort` | int\|str | `"14240"` | GSQL server port |
| `certPath` | str | `None` | Path to CA certificate for HTTPS |
| `tgCloud` | bool | `False` | Set to `True` for TigerGraph Cloud instances |
- Each thread gets its own dedicated HTTP session and connection pool, so concurrent threads never block each other.
- Install `pyTigerGraph[fast]` to activate the `orjson` backend and reduce JSON parsing overhead under concurrent load.
- Use `ThreadPoolExecutor` to run queries in parallel:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

with TigerGraphConnection(...) as conn:
    with ThreadPoolExecutor(max_workers=16) as executor:
        futures = [executor.submit(conn.runInstalledQuery, "q", {"p": v}) for v in values]
        for f in as_completed(futures):
            print(f.result())
```

- Uses a single `aiohttp.ClientSession` with an unbounded connection pool shared across all concurrent coroutines — no GIL, no thread-scheduling overhead.
- Typically achieves higher QPS and lower tail latency than the threaded sync mode for I/O-bound workloads.
```python
import asyncio

from pyTigerGraph import AsyncTigerGraphConnection

async def main():
    async with AsyncTigerGraphConnection(...) as conn:
        tasks = [conn.runInstalledQuery("q", {"p": v}) for v in values]
        results = await asyncio.gather(*tasks)

asyncio.run(main())
```

The gds sub-module provides data loaders that stream vertex and edge data from TigerGraph directly into PyTorch Geometric, DGL, or Pandas DataFrames for machine learning workflows.
Install requirements, then access the loaders via `conn.gds`:

```python
conn = TigerGraphConnection(host="...", graphname="...")
loader = conn.gds.vertexLoader(attributes=["feat", "label"], batch_size=1024)
for batch in loader:
    train(batch)
```

See the GDS documentation for full details.
The TigerGraph MCP server is now a standalone package: pyTigerGraph-mcp. It exposes TigerGraph operations as tools for AI agents and LLM applications (Claude Desktop, Cursor, Copilot, etc.).
```shell
# Recommended — install the standalone package directly
pip install pyTigerGraph-mcp

# Or via the pyTigerGraph convenience alias (installs pyTigerGraph-mcp automatically)
pip install 'pyTigerGraph[mcp]'

# Start the server (reads connection config from environment variables)
tigergraph-mcp
```

For full setup instructions, available tools, configuration examples, and multi-profile support, see the pyTigerGraph-mcp README.
Migrating from `pyTigerGraph.mcp`? Update your imports:

```python
# Old
from pyTigerGraph.mcp import serve, ConnectionManager

# New
from tigergraph_mcp import serve, ConnectionManager
```
Companion notebook: Google Colab
