diff --git a/docs.json b/docs.json
index a0c5d230..f65331ef 100644
--- a/docs.json
+++ b/docs.json
@@ -870,6 +870,14 @@
"x-api/webhooks/get-stream-links"
]
},
+ {
+ "group": "Powerstream",
+ "pages": [
+ "x-api/powerstream/introduction",
+ "x-api/powerstream/handling-disconnections",
+ "x-api/powerstream/recovery-and-redundancy"
+ ]
+ },
{
"group": "Volume Streams",
"pages": [
diff --git a/x-api/powerstream/handling-disconnections.mdx b/x-api/powerstream/handling-disconnections.mdx
new file mode 100644
index 00000000..f1cae328
--- /dev/null
+++ b/x-api/powerstream/handling-disconnections.mdx
@@ -0,0 +1,74 @@
+---
+title: Handling disconnections
+sidebarTitle: Handling disconnections
+---
+
+## What is a disconnection?
+
+Establishing a connection to the streaming APIs means making a very long-lived HTTPS request and parsing the response incrementally. When connecting to the Powerstream endpoint, you should form an HTTPS request and consume the resulting stream for as long as is practical. Our servers will hold the connection open indefinitely, barring server-side errors, excessive client-side lag, network issues, routine server maintenance, or duplicate logins. Disconnections are likely and should be expected, so build reconnection logic into your client.
+
+## Why a streaming connection might be disconnected
+
+Your stream can disconnect for a number of reasons. Inspect the error message returned by the stream to understand the reason for the failure. Possible reasons for disconnections are as follows:
+
+* An authentication error (such as an invalid token or the wrong authentication method being used).
+* A streaming server is restarted on the X side. This is usually related to a code deploy and should be generally expected and designed around.
+* Your client is not keeping up with the volume of Posts the stream is delivering or is reading data too slowly. Every streaming connection is backed by a queue of messages to be sent to the client. If this queue grows too large over time, the connection will be closed.
+* Your account exceeded your daily/monthly quota of Posts.
+* You have too many active redundant connections.
+* A client stops reading data suddenly. If the rate of Posts being read off of the stream drops suddenly, the connection will be closed.
+* Networking issues between the server and the client.
+* A temporary server-side issue or scheduled maintenance and updates (check the [status page](/status)).
+
+## Anticipating disconnects and reconnecting
+
+When streaming Posts, the goal is to stay connected for as long as possible, recognizing that disconnects may occur. The endpoint provides a keep-alive heartbeat every 20 seconds (it will look like a newline character). Use this signal to detect whether you're being disconnected.
+
+1. Your code should detect when fresh content and the heartbeat stop arriving.
+2. If that happens, your code should trigger reconnection logic. Some clients and languages allow you to specify a read timeout, which you can set to 20 seconds, as shown in the sketch below.
+3. Your service should detect these disconnections and reconnect as soon as possible.
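+
+As a concrete illustration, here is a minimal stall-detection sketch using the `requests` read timeout; the `stream_url` and `headers` values are placeholders (see the [introduction](/x-api/powerstream/introduction) for real setup). Because the read timeout measures time between received bytes, the 20-second heartbeat keeps a healthy connection from tripping it:
+
+```python
+import requests
+
+stream_url = "https://api.x.com/2/powerstream"  # placeholder
+headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}  # placeholder
+
+def consume_once():
+    # (connect timeout, read timeout): the read timeout trips when no bytes
+    # (Posts or heartbeats) arrive for 20 seconds
+    with requests.get(stream_url, headers=headers, stream=True,
+                      timeout=(10, 20)) as response:
+        response.raise_for_status()
+        for line in response.iter_lines():
+            if line:
+                print(line.decode("utf-8"))
+
+try:
+    consume_once()
+except (requests.exceptions.Timeout, requests.exceptions.ConnectionError):
+    # requests surfaces a mid-stream read timeout as a ConnectionError;
+    # either way, treat it as a disconnection and trigger reconnection logic
+    print("Stream stalled; reconnecting...")
+```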
+
+
+Once an established connection drops, attempt to reconnect immediately. If the reconnect fails, slow down your reconnect attempts according to the type of error experienced, as sketched after this list:
+
+* Back off linearly for TCP/IP level network errors. These problems are generally temporary and tend to clear quickly. Increase the delay in reconnects by 250ms each attempt, up to 16 seconds.
+* Back off exponentially for HTTP errors for which reconnecting would be appropriate. Start with a 5 second wait, doubling each attempt, up to 320 seconds.
+* Back off exponentially for HTTP 429 (rate limit exceeded) errors. Start with a 1 minute wait and double each attempt. Note that every HTTP 429 received increases the time you must wait until rate limiting will no longer be in effect for your account.
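+
+As a sketch, those three backoff tiers might look like the following; the error classification is illustrative, and the delays and caps come straight from the guidance above:
+
+```python
+import time
+
+def backoff_delays(error_type):
+    """Yield successive reconnect delays (in seconds) for an error class."""
+    if error_type == "network":
+        delay = 0.0
+        while True:  # linear: +250ms per attempt, capped at 16s
+            delay = min(delay + 0.25, 16.0)
+            yield delay
+    elif error_type == "http":
+        delay = 5.0
+        while True:  # exponential: 5s doubling, capped at 320s
+            yield delay
+            delay = min(delay * 2, 320.0)
+    elif error_type == "rate_limited":
+        delay = 60.0
+        while True:  # exponential: 1 minute doubling, no cap
+            yield delay
+            delay *= 2
+
+# Example: wait out the first three network-error delays before retrying
+for _, delay in zip(range(3), backoff_delays("network")):
+    time.sleep(delay)  # reconnect attempt would go here
+```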
+
+
+## Recovering lost data
+
+If you do experience a disconnect, there are several strategies you can use to ensure that you receive all of the data you might have missed. We've documented the key steps for recovering missed data in our integration guide on [recovering data](/x-api/powerstream/recovery-and-redundancy).
+
+## Rate limits and usage
+
+To check connection limits, inspect the three rate-limit headers returned in the response. They indicate how many times you can use the rules endpoint and how many reconnection attempts are allowed for the streaming endpoint.
+
+* `x-rate-limit-limit` indicates the number of allotted requests your client is allowed to make during the 15-minute window.
+
+* `x-rate-limit-remaining` indicates the number of requests you have left in the 15-minute window.
+
+* `x-rate-limit-reset` is a UNIX timestamp indicating when the 15-minute window restarts, resetting `x-rate-limit-remaining` to the full allotment.
+
+
+The filter stream endpoint does not currently report usage data. To check how many Posts have been delivered, your code can implement metering logic so that consumption can be measured and paused if needed, as sketched below.
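+
+A minimal metering sketch might count delivered Posts per window; the cap and window length below are illustrative, not API limits:
+
+```python
+import time
+
+class PostMeter:
+    """Counts Posts per window so consumption can be measured or paused."""
+
+    def __init__(self, max_posts=10_000, window_seconds=900):
+        self.max_posts = max_posts  # illustrative cap
+        self.window_seconds = window_seconds
+        self.count = 0
+        self.window_start = time.monotonic()
+
+    def record(self):
+        now = time.monotonic()
+        if now - self.window_start >= self.window_seconds:
+            self.count = 0  # a new window begins
+            self.window_start = now
+        self.count += 1
+        return self.count <= self.max_posts  # False => consider pausing
+
+meter = PostMeter()
+# In your stream loop: if not meter.record(): pause or disconnect
+```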
+
+Your code that hosts the client side of the stream should simply insert incoming Posts into a first-in, first-out (FIFO) queue or a similar memory structure; a separate process or thread should consume Posts from that queue to parse and prepare content for storage. With this design, you can implement a service that scales efficiently if incoming Post volume changes dramatically. Conceptually, you can think of it as downloading an infinitely long file over HTTP.
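+
+A sketch of that producer/consumer split using Python's standard library follows; `process_post` is a stand-in for your own parse-and-store logic:
+
+```python
+import json
+import queue
+import threading
+
+post_queue = queue.Queue()  # FIFO buffer between reader and processor
+
+def process_post(post):
+    """Stand-in for your parsing and storage logic."""
+    print(post.get("id_str"))
+
+def reader(response):
+    """Producer: push raw stream lines onto the queue as they arrive."""
+    for line in response.iter_lines():
+        if line:
+            post_queue.put(line)
+
+def processor():
+    """Consumer: parse and store Posts without slowing the reader down."""
+    while True:
+        line = post_queue.get()
+        process_post(json.loads(line))
+        post_queue.task_done()
+
+threading.Thread(target=processor, daemon=True).start()
+# Then call reader(response) with your open streaming response
+```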
+
+## Reconnection best practices
+
+### Test backoff strategies
+
+A good way to test a backoff implementation is to use invalid authorization credentials and examine the reconnect attempts. A good implementation will not get any 429 responses.
+
+### Issue alerts for multiple reconnects
+
+If a client reaches the upper threshold of its time between reconnects, it should send you notifications so you can triage the issues affecting your connection.
+
+### Handle DNS changes
+
+Test that your client process honors the DNS Time To Live (TTL). Some stacks will cache a resolved address for the lifetime of the process and will not pick up DNS changes within the prescribed TTL. Such aggressive caching will lead to service disruptions on your client as X shifts load between IP addresses.
+
+### User Agent
+
+Ensure your `User-Agent` HTTP header includes the client's version. This will be critical in diagnosing issues on X's end. If your environment precludes setting the `User-Agent` field, set an `x-user-agent` header instead.
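+
+For instance, the `requests` headers dict from the [introduction](/x-api/powerstream/introduction) Quick Start might be extended like this (the client name and version string are hypothetical):
+
+```python
+headers = {
+    "Authorization": "Bearer YOUR_BEARER_TOKEN",
+    "User-Agent": "AcmeStreamClient/2.4.1",  # hypothetical client + version
+    # If User-Agent cannot be set in your environment, use instead:
+    # "x-user-agent": "AcmeStreamClient/2.4.1",
+}
+```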
diff --git a/x-api/powerstream/introduction.mdx b/x-api/powerstream/introduction.mdx
new file mode 100644
index 00000000..535c1295
--- /dev/null
+++ b/x-api/powerstream/introduction.mdx
@@ -0,0 +1,340 @@
+---
+title: Introduction
+sidebarTitle: Introduction
+---
+
+Powerstream is our fastest real-time streaming API for accessing public X data. Like the legacy GNIP PowerTrack API, it uses rules to filter Posts based on keywords, operators, and metadata. Once a persistent HTTP connection is made to the Powerstream endpoint, you start receiving matching Posts in near-real time.
+
+Currently, Powerstream supports up to 1,000 rules, and each rule can be up to 2,048 characters long.
+
+## Key Features
+- **Real-time data delivery**: Get data matching your rules in near-real time.
+- **Precise filtering**: Filter for exactly the data you are looking for using Boolean queries with operators.
+- **Delivery**: JSON response over HTTP/1.1 chunked transfer encoding.
+- **Local datacenter support**: Fetch Posts only from the local datacenter to reduce latency by avoiding replication lag.
+
+
+The Powerstream API is a premium offering available under select Enterprise plans.
+
+If you're interested in accessing Powerstream or learning more about our Enterprise offerings, please reach out to our Sales team by submitting the [Enterprise Request Form](/forms/enterprise-api-interest).
+We'll be happy to discuss how Powerstream can support your needs.
+
+
+## Authentication
+
+Powerstream API endpoints use OAuth 2.0 Bearer Token authentication. Include your token in the `Authorization: Bearer <YOUR_BEARER_TOKEN>` header of every request.
+
+## Quick Start
+
+This section showcases how to quickly get started with the Powerstream endpoints using Python with the `requests` library. Install it via `pip install requests`. All examples use OAuth 2.0 Bearer Token authentication. Replace `YOUR_BEARER_TOKEN` with your actual token (store it securely, e.g., via `os.getenv('BEARER_TOKEN')`).
+
+We'll cover each endpoint with code snippets. Assume these imports at the top:
+
+```python
+import requests
+import json
+import time
+import sys
+import os # For env vars
+```
+
+### Setup
+```python
+bearer_token = os.getenv('BEARER_TOKEN') or "YOUR_BEARER_TOKEN" # Use env var for security
+base_url = "https://api.x.com/2/powerstream"
+rules_url = f"{base_url}/rules" # For rule management
+headers = {
+ "Authorization": f"Bearer {bearer_token}",
+ "Content-Type": "application/json"
+}
+```
+
+### 1. Create Rules (POST /rules)
+Add rules to filter your stream.
+
+```python
+data = {
+ "rules": [
+ {
+ "value": "(cat OR dog) lang:en -is:retweet",
+ "tag": "pet-monitor"
+ },
+ # Add more rules as needed (up to 100)
+ ]
+}
+
+response = requests.post(rules_url, headers=headers, json=data)
+if response.status_code == 201:
+ rules_added = response.json().get("data", {}).get("rules", [])
+ print("Rules added:")
+ for rule in rules_added:
+ print(f"ID: {rule['id']}, Value: {rule['value']}, Tag: {rule.get('tag', 'N/A')}")
+else:
+ print(f"Error {response.status_code}: {response.text}")
+```
+
+### 2. Delete Rules (DELETE /rules)
+Remove rules by ID (recommended) or by value. This example removes rules that match by value:
+
+```python
+data = {
+    "rules": [
+        {
+            "value": "(cat OR dog) lang:en -is:retweet",
+            "tag": "pet-monitor"
+        },
+        # Add more rules to delete as needed
+    ]
+}
+
+response = requests.delete(rules_url, headers=headers, json=data)
+if response.status_code == 200:
+ deleted = response.json().get("data", {})
+ print(f"Deleted count: {deleted.get('deleted', 'N/A')}")
+ if 'not_deleted' in deleted:
+ print("Not deleted:", deleted['not_deleted'])
+else:
+ print(f"Error {response.status_code}: {response.text}")
+```
+
+**Tip**: To delete all rules, first GET them, extract IDs, then delete in bulk.
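+
+A sketch of that flow, assuming the delete endpoint accepts rule IDs in the same `rules` array (an assumption; check the API reference for the exact payload shape):
+
+```python
+# Fetch all active rules, then delete them by ID in one request
+response = requests.get(rules_url, headers=headers)
+response.raise_for_status()
+rules = response.json().get("data", {}).get("rules", [])
+
+if rules:
+    ids = [{"id": rule["id"]} for rule in rules]  # assumed delete-by-ID shape
+    response = requests.delete(rules_url, headers=headers, json={"rules": ids})
+    print(f"Delete status: {response.status_code}")
+else:
+    print("No rules to delete.")
+```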
+
+### 3. Get Rules (GET /rules)
+Fetch all active rules.
+
+```python
+response = requests.get(rules_url, headers=headers)
+if response.status_code == 200:
+ rules = response.json().get("data", {}).get("rules", [])
+ if rules:
+ print("Active rules:")
+ for rule in rules:
+ print(f"ID: {rule['id']}, Value: {rule['value']}, Tag: {rule.get('tag', 'N/A')}")
+ else:
+ print("No active rules.")
+else:
+ print(f"Error {response.status_code}: {response.text}")
+```
+
+### 4. Stream Posts (GET /powerstream)
+Connect to the stream for real-time Posts. Use `stream=True` for line-by-line reading, and implement reconnect logic for robustness.
+
+```python
+stream_url = base_url
+
+def main():
+    while True:
+        # stream=True keeps the connection open so it can be read incrementally
+        response = requests.get(stream_url, headers=headers, stream=True)
+        print(response.status_code)
+        if response.status_code != 200:
+            print(response.headers)
+            raise Exception(
+                "Request returned an error: {} {}".format(
+                    response.status_code, response.text
+                )
+            )
+        for response_line in response.iter_lines():
+            if response_line:
+                json_response = json.loads(response_line)
+                print(json.dumps(json_response, indent=4, sort_keys=True))
+
+if __name__ == "__main__":
+    main()
+```
+
+#### Local Datacenter Support
+
+For latency optimization, Powerstream provides an option to fetch only Posts created in the local datacenter where your connection is established. This avoids replication lag, resulting in faster delivery compared to Posts from other datacenters. To enable this, append the query parameter `?localDcOnly=true` to the stream endpoint (e.g., `/2/powerstream?localDcOnly=true`). The datacenter you are connected to is indicated both in the initial data payload of the stream and as an HTTP header in the response.
+
+To use in code:
+
+```python
+# For local datacenter only:
+stream_url = "https://api.x.com/2/powerstream?localDcOnly=true"
+```
+
+If the `localDcOnly` parameter is enabled, when the stream first connects, it will include the following response headers indicating which local datacenter is being used:
+
+```
+x-powerstream-datacenter: atla
+x-powerstream-localdconly: true
+```
+
+In addition, the stream sends an initial payload specifying the datacenter:
+
+```json
+{
+ "type": "connection_metadata",
+ "datacenter": "atla",
+ "timestamp": 1762557264155
+}
+```
+
+
+**Tip:** To optimize latency, set up connections from different geographic locations (e.g., one near Atlanta on the US East Coast and another near Portland on the US West Coast), enabling `localDcOnly=true` for each. This provides faster access to posts from each respective datacenter. Aggregate the streams on your end to combine cross-datacenter data.
+
+
+## Operators
+
+To set rules for filtering, you can use keywords and operators. Check out the list of available operators below.
+
+### Field-Based Operators
+
+#### User Operators
+| Operator | Summary | Example |
+|----------|---------|---------|
+| `from:` | Matches posts from a specific user | `from:xdevelopers` or `from:123456` |
+| `to:` | Matches posts directed to a specific user | `to:jvaleski` |
+| `retweets_of:` | Matches reposts of a specific user | `retweets_of:xdevelopers` |
+
+#### Content Operators
+| Operator | Summary | Example |
+|----------|---------|---------|
+| `contains:` | Matches posts containing specific text/keywords | `contains:hello` or `contains:-2345.432` |
+| `url_contains:` | Matches posts with URLs containing specific text | `url_contains:"com/willplayforfood"` |
+| `lang:` | Matches posts in specific languages | `lang:en` |
+
+#### Entity Operators
+| Operator | Summary | Example |
+|----------|---------|---------|
+| `has:` | Matches posts containing specific entities (Options: mentions, geo, links, media, lang, symbols, images, videos) | `has:images`, `has:geo`, `has:mentions` |
+| `is:` | Matches posts of specific types or with specific properties (Options: retweet, reply) | `is:retweet`, `is:reply` |
+
+#### Location Operators
+| Operator | Summary | Example |
+|----------|---------|---------|
+| `place:` | Matches posts from specific places/locations | `place:"Belmont Central"`, `place:02763fa2a7611cf3` |
+| `bounding_box:` | Matches posts within a geographic bounding box | `bounding_box:[-112.424083 42.355283 -112.409111 42.792311]` |
+| `point_radius:` | Matches posts within a radius of a point | `point_radius:[-111.464973 46.371179 25mi]`, `point_radius:[-111.464973 46.371179 15km]` |
+
+#### Advanced/Content Operators
+| Operator | Summary | Example |
+|----------|---------|---------|
+| `bio:` | Matches posts from users with specific bio content (Uses phrase matching) | N/A |
+| `bio_name:` | Matches posts from users with specific name in bio (Uses phrase matching) | N/A |
+
+#### Additional Operators
+| Operator | Summary | Example |
+|----------|---------|---------|
+| `retweets_of_status_id:` | Matches reposts of specific posts | `retweets_of_status_id:1234567890123456789` |
+| `in_reply_to_status_id:` | Matches replies to specific posts | `in_reply_to_status_id:1234567890123456789` |
+
+### Non-Field Operators
+
+#### Special Syntax Operators
+| Operator | Summary | Example |
+|----------|---------|---------|
+| `@` | Mention operator | `@username` |
+| Phrase matching | Matches exact phrases | `"exact phrase"` |
+
+#### Logical Operators
+| Operator | Summary | Example |
+|----------|---------|---------|
+| `OR` | Logical OR between expressions | `x OR facebook` |
+| Space/AND | Logical AND between expressions | `x facebook` (both terms must be present) |
+| `()` | Grouping for complex expressions | `(x OR facebook) iphone` |
+| `-` | Negation/exclusion | `x -facebook` (x but not facebook) |
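+
+For example, a single rule can combine grouping, negation, and entity operators. Using the Quick Start setup, the payload below (tag illustrative) matches English-language Posts with images mentioning cats or dogs, excluding retweets:
+
+```python
+data = {
+    "rules": [
+        {
+            "value": "(cat OR dog) has:images lang:en -is:retweet",
+            "tag": "english-pet-photos"  # illustrative tag
+        }
+    ]
+}
+# POST this to the rules endpoint as shown in the Quick Start
+```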
+
+## Responses
+
+The Powerstream API payload uses the same format as the legacy GNIP PowerTrack API. A sample JSON response looks like this:
+
+```json
+[
+ {
+ "created_at": "Tue Mar 21 20:50:14 +0000 2006",
+ "id": 20,
+ "id_str": "20",
+ "text": "just setting up my twttr",
+ "truncated": false,
+ "entities": {
+ "hashtags": [],
+ "symbols": [],
+ "user_mentions": [],
+ "urls": []
+ },
+ "source": "X Web Client",
+ "in_reply_to_status_id": null,
+ "in_reply_to_status_id_str": null,
+ "in_reply_to_user_id": null,
+ "in_reply_to_user_id_str": null,
+ "in_reply_to_screen_name": null,
+ "user": {
+ "id": 12,
+ "id_str": "12",
+ "name": "jack",
+ "screen_name": "jack",
+ "location": "",
+ "description": "no state is the best state",
+ "url": "https://t.co/ZEpOg6rn5L",
+ "entities": {
+ "url": {
+ "urls": [
+ {
+ "url": "https://t.co/ZEpOg6rn5L",
+ "expanded_url": "http://primal.net/jack",
+ "display_url": "primal.net/jack",
+ "indices": [
+ 0,
+ 23
+ ]
+ }
+ ]
+ },
+ "description": {
+ "urls": []
+ }
+ },
+ "protected": false,
+ "followers_count": 6427829,
+ "friends_count": 3,
+ "listed_count": 32968,
+ "created_at": "Tue Mar 21 20:50:14 +0000 2006",
+ "favourites_count": 36306,
+ "utc_offset": null,
+ "time_zone": null,
+ "geo_enabled": true,
+ "verified": false,
+ "statuses_count": 30134,
+ "lang": null,
+ "contributors_enabled": false,
+ "is_translator": false,
+ "is_translation_enabled": false,
+ "profile_background_color": "EBEBEB",
+ "profile_background_image_url": "http://abs.twimg.com/images/themes/theme7/bg.gif",
+ "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme7/bg.gif",
+ "profile_background_tile": false,
+ "profile_image_url": "http://pbs.twimg.com/profile_images/1661201415899951105/azNjKOSH_normal.jpg",
+ "profile_image_url_https": "https://pbs.twimg.com/profile_images/1661201415899951105/azNjKOSH_normal.jpg",
+ "profile_banner_url": "https://pbs.twimg.com/profile_banners/12/1742427520",
+ "profile_link_color": "990000",
+ "profile_sidebar_border_color": "DFDFDF",
+ "profile_sidebar_fill_color": "F3F3F3",
+ "profile_text_color": "333333",
+ "profile_use_background_image": true,
+ "has_extended_profile": true,
+ "default_profile": false,
+ "default_profile_image": false,
+ "following": null,
+ "follow_request_sent": null,
+ "notifications": null,
+ "translator_type": "regular",
+ "withheld_in_countries": []
+ },
+ "geo": null,
+ "coordinates": null,
+ "place": null,
+ "contributors": null,
+ "is_quote_status": false,
+ "retweet_count": 122086,
+ "favorite_count": 263321,
+ "favorited": false,
+ "retweeted": false,
+ "lang": "en"
+ }
+]
+```
+
+## Limits & Best Practices
+
+- Rate Limits: 50 requests/24h for rule management; no limit on streams (but connection limits apply).
+- Reconnection: Use exponential backoff on disconnects (see [Handling disconnections](/x-api/powerstream/handling-disconnections)).
+- Monitoring: Use `Connection: keep-alive` headers.
+
diff --git a/x-api/powerstream/recovery-and-redundancy.mdx b/x-api/powerstream/recovery-and-redundancy.mdx
new file mode 100644
index 00000000..0b6e9d20
--- /dev/null
+++ b/x-api/powerstream/recovery-and-redundancy.mdx
@@ -0,0 +1,41 @@
+---
+title: Recovery and redundancy
+sidebarTitle: Recovery and redundancy
+---
+
+When consuming streaming data, maximizing your connection time and receiving all matched data is a fundamental goal. This means it is important to take advantage of redundant connections, to automatically detect disconnections, to reconnect quickly, and to have a plan for recovering lost data.
+
+In this integration guide, we will discuss different recovery and redundancy features: redundant connections, backfill, and recovery.
+
+
+## Redundant connections
+
+A redundant connection simply allows you to establish more than one simultaneous connection to the stream. This provides redundancy by allowing you to connect to the same stream with two separate consumers, receiving the same data through both connections. Your app thus has a hot failover for situations such as one stream being disconnected or your application's primary server failing.
+
+To use a redundant stream, simply connect to the same URL used for your primary connection. The data for your stream will be sent through both connections.
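+
+A sketch of a redundant pair of consumers in Python; the URL, token, and the naive Post-ID de-duplication are illustrative:
+
+```python
+import json
+import threading
+import requests
+
+stream_url = "https://api.x.com/2/powerstream"  # same URL for both consumers
+headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}
+seen_ids = set()  # naive de-duplication across the two connections
+lock = threading.Lock()
+
+def consume(name):
+    response = requests.get(stream_url, headers=headers, stream=True)
+    for line in response.iter_lines():
+        if not line:
+            continue  # keep-alive heartbeat
+        post = json.loads(line)
+        with lock:
+            if post.get("id_str") in seen_ids:
+                continue  # already delivered via the other connection
+            seen_ids.add(post.get("id_str"))
+        print(f"[{name}] {post.get('text', '')}")
+
+for name in ("primary", "redundant"):
+    threading.Thread(target=consume, args=(name,), daemon=True).start()
+threading.Event().wait()  # keep the main thread alive
+```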
+
+## Backfill
+
+After you've detected a disconnection, your system should be smart enough to reconnect to the stream. If possible, your system should take note of how long the disconnection lasted so that you can use the proper recovery feature to backfill the data.
+
+If you identified that the disconnection lasted five minutes or less, you can use the backfill parameter, `backfillMinutes`. If you pass this parameter with your `GET /powerstream` request, you will receive the Posts that match your rules within the past one to five minutes. We generally deliver these older Posts first before any newly matched Posts, and also do not deduplicate Posts. This means that if you were disconnected for 90 seconds, but request two minutes worth of backfill data, you will receive 30 seconds worth of duplicate Posts, which your system should be tolerant of. Here is an example of what a request might look like with the backfill parameter:
+
+`curl 'https://api.x.com/2/powerstream?backfillMinutes=5' -H "Authorization: Bearer $ACCESS_TOKEN"`
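+
+As a sketch, reconnect logic can measure the outage and request just enough backfill; the clamping to the one-to-five-minute range follows the rule above, and the helper is illustrative:
+
+```python
+import math
+import time
+import requests
+
+headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}  # placeholder
+
+def reconnect_with_backfill(disconnected_at):
+    """Reconnect, requesting backfill for outages of five minutes or less."""
+    outage_minutes = (time.monotonic() - disconnected_at) / 60
+    url = "https://api.x.com/2/powerstream"
+    if outage_minutes <= 5:
+        # Round up so the window covers the whole outage; expect some
+        # duplicate Posts, which your system should tolerate
+        url += f"?backfillMinutes={max(1, math.ceil(outage_minutes))}"
+    # For outages longer than five minutes, fall back to recent search
+    # or the Recovery feature instead
+    return requests.get(url, headers=headers, stream=True)
+
+# Usage: record time.monotonic() when the stream drops, then:
+# response = reconnect_with_backfill(disconnected_at)
+```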
+
+
+If you identified that the disconnection time lasted for longer than five minutes, you can utilize the [recent search endpoint](/x-api/posts/search/introduction) or the recovery feature to request missed data.
+
+## Recovery
+
+You can use the Recovery feature to recover missed data from the last 24 hours if you are unable to reconnect within the five-minute backfill window.
+
+The streaming recovery feature allows you to have an extended backfill window of 24 hours. Recovery enables you to 'replay' the time period of missed data. A recovery stream is started when you make a connection request using `startTime` and `endTime` request parameters. Once connected, Recovery will re-stream the time period indicated, then disconnect.
+
+| Name | Type | Description |
+| :--- | :--- | :--- |
+| `startTime` | date (ISO 8601) | `YYYY-MM-DDTHH:mm:ssZ` (ISO 8601/RFC 3339). Date in UTC signifying the start time to recover from. |
+| `endTime` | date (ISO 8601) | `YYYY-MM-DDTHH:mm:ssZ` (ISO 8601/RFC 3339). Date in UTC signifying the end time to recover to. |
+
+
+Example request URL: `https://api.x.com/2/powerstream?startTime=2022-07-12T15:10:00Z&endTime=2022-07-12T15:20:00Z`
\ No newline at end of file