@@ -216,6 +216,12 @@ public class CommonParameter {
public int maxMessageSize;
@Getter
@Setter
public long httpMaxMessageSize;
@Getter
@Setter
public long jsonRpcMaxMessageSize;
@Getter
@Setter
public int maxHeaderListSize;
@Getter
@Setter
@@ -19,6 +19,7 @@
import lombok.extern.slf4j.Slf4j;
import org.eclipse.jetty.server.ConnectionLimit;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.SizeLimitHandler;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.tron.core.config.args.Args;

@@ -29,6 +30,8 @@ public abstract class HttpService extends AbstractService {

protected String contextPath;

protected long maxRequestSize;

@Override
public void innerStart() throws Exception {
if (this.apiServer != null) {
@@ -63,7 +66,9 @@ protected void initServer() {
protected ServletContextHandler initContextHandler() {
ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
context.setContextPath(this.contextPath);
this.apiServer.setHandler(context);
SizeLimitHandler sizeLimitHandler = new SizeLimitHandler(this.maxRequestSize, -1);
Collaborator:
This moves oversized-request handling in front of every servlet, so those requests never reach RateLimiterServlet.service() / Util.processError(). Today the HTTP APIs consistently set application/json and serialize failures through Util.printErrorMsg(...); after this change an over-limit body gets Jetty's default 413 response instead. That is a client-visible behavior change for existing callers, and the new test only checks status codes so it would not catch the response-format regression.

Collaborator Author:
Thanks for flagging the response-format difference. I did a detailed comparison:

Before (checkBodySize rejection):

  • HTTP status: 200 OK (processError() does not set status code)
  • Body: {"Error":"java.lang.Exception : body size is too big, the limit is 4194304"}

After (SizeLimitHandler rejection):

  • HTTP status: 413 Payload Too Large
  • Body: Jetty default error page (non-JSON)

The format is indeed not fully compatible, but I believe this is acceptable:

  1. The existing behavior is itself incorrect — returning 200 OK with an error JSON body violates HTTP semantics. Clients that only check status codes would incorrectly treat the oversized request as successful. The new 413 is the proper HTTP response for this scenario.

  2. The trigger threshold is very high — default is 4 MB. Normal API requests are far below this. Only abnormal or malicious payloads hit this limit, so the impact surface is negligible for legitimate clients.

  3. 413 is a standard HTTP status code — all HTTP client libraries handle it correctly. Clients already need to handle non-JSON infrastructure errors (e.g., Jetty 503, proxy 502/504).

  4. The new layer provides a real security benefit: SizeLimitHandler rejects during streaming, before the body is fully buffered into memory, which is the core OOM protection. Falling back to application-layer formatting would defeat this purpose.

Collaborator:
Oversized requests will now be rejected before they enter the existing servlet/filter pipeline. That means they will no longer go through the current application-side interceptor / rate-limit / metrics path. This may be perfectly fine, but it seems worth documenting or at least acknowledging so everyone is aware of the observability and control-path change.

Collaborator Author:
Good point. Acknowledged — oversized requests rejected by SizeLimitHandler (413 before servlet dispatch) will bypass RateLimiterServlet and application-level metrics/logging. This is the expected trade-off: rejecting bad traffic early saves resources, but those requests won't appear in application-layer observability.

For chunked requests specifically, the broad catch(Exception) in servlets absorbs the BadMessageException, so those DO go through the servlet pipeline and are visible in the existing error path.

If Jetty-level 413 observability becomes important, a future enhancement could add a Jetty Handler or access log filter to count rejected requests — but that's outside this PR's scope.

Collaborator:
SizeLimitHandler only guarantees a pre-servlet 413 when Content-Length is known. For chunked / unknown-length requests, enforcement happens while downstream code is reading the request body. In the current codebase many HTTP handlers catch broad Exception around request.getReader(), and RateLimiterServlet also catches unexpected Exception, so the streaming over-limit exception can be absorbed before Jetty turns it into a 413. That means the PR still does not prove the “uniform 413” behavior described in the issue for requests without Content-Length.

Collaborator Author:
Good point. I've verified the behavior with tests and here's what actually happens:

With Content-Length: SizeLimitHandler rejects before servlet dispatch → clean 413, servlet is never invoked. This covers the vast majority of real-world HTTP clients.

Chunked (no Content-Length): SizeLimitHandler installs a LimitInterceptor on HttpInput. When the accumulated bytes exceed the limit during getReader()/getInputStream(), it throws BadMessageException. However, as you identified, the broad catch (Exception) in RateLimiterServlet (line 119) and individual servlets absorbs this, so the client sees 200 + error JSON instead of 413.

The OOM protection goal is still met: the body read is truncated at the limit — the full oversized payload is never buffered into memory, which is the core security objective of this PR.

The 413-vs-200 discrepancy for chunked requests is a consequence of the existing broad catch (Exception) pattern in the servlet chain, not a SizeLimitHandler deficiency. Tightening those catch blocks is a worthwhile follow-up but belongs in a separate PR to limit blast radius.

I'll add chunked transfer tests to document and assert this behavior.

Collaborator:
@bladehan1 Thanks for validating the streaming path. I think we should separate plain HTTP servlets from JSON-RPC here, because the real control flow is different.

For the plain HTTP servlet endpoints, the current inconsistency looks fixable in this PR with limited blast radius.
The over-limit exception is still coming from Jetty's SizeLimitHandler; it is only being normalized into 200 by the broad catch chain. A concrete fix would be to special-case request-size exceptions in Util.processError(...) and/or RateLimiterServlet: if the exception (or any cause) is a BadMessageException with code 413, bypass the generic application-error formatting and rethrow / sendError(413) instead.

For JSON-RPC, we need to clarify the contract first. Fixed-length oversized requests are already rejected before JsonRpcServlet / jsonrpc4j is invoked, so they currently come back as transport-level 413 rather than HTTP 200. For chunked requests, the limit is still enforced by Jetty while jsonrpc4j reads req.getInputStream(), but jsonrpc4j catches Throwable internally and this project's HttpStatusCodeProvider hardcodes HTTP 200. So the current synthetic BroadCatchServlet test does not prove the real JSON-RPC behavior.

If the intended contract is uniform 413 for oversized requests, I think HTTP should be fixed in this PR, and JSON-RPC needs an explicit integration fix in the jsonrpc4j layer (for example a custom JsonRpcServer.handle(...) that special-cases request-size exceptions and sends/rethrows 413 instead of letting them fall into the generic jsonrpc4j error path). If that is intentionally out of scope, then the PR description and issue contract should be narrowed accordingly.
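A concrete shape for that special-case can be sketched in plain Java. All names here are hypothetical: the real check would target Jetty's `org.eclipse.jetty.http.BadMessageException` with code 413, but a status-carrying stand-in keeps the snippet self-contained and runnable without Jetty on the classpath.

```java
// Hedged sketch (hypothetical names): detect a 413-style "request body too
// large" failure anywhere in an exception's cause chain, so a broad
// catch (Exception) can surface it as a transport-level error instead of
// normalizing it into 200 + JSON application error.
public class RequestSizeErrorsDemo {
    public static void main(String[] args) {
        Throwable wrapped = new RuntimeException("read failed",
                new StatusException(413, "request body too large"));
        System.out.println(RequestSizeErrors.isRequestTooLarge(wrapped));                   // true
        System.out.println(RequestSizeErrors.isRequestTooLarge(new RuntimeException("x"))); // false
    }
}

// Stand-in for Jetty's BadMessageException, which carries an HTTP status code.
class StatusException extends RuntimeException {
    final int status;
    StatusException(int status, String message) {
        super(message);
        this.status = status;
    }
}

final class RequestSizeErrors {
    private RequestSizeErrors() {}

    /** True if t, or any transitive cause, carries HTTP status 413. */
    static boolean isRequestTooLarge(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof StatusException && ((StatusException) c).status == 413) {
                return true;
            }
        }
        return false;
    }
}
```

In a servlet's broad `catch (Exception e)`, a guard like `if (RequestSizeErrors.isRequestTooLarge(e)) { resp.sendError(413); return; }` placed before the generic `Util.processError(...)` call would restore the transport-level status for the streaming path.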

Collaborator:
Can we fail fast here if maxRequestSize was never initialized? This field currently relies on every HttpService subclass remembering to assign it, and the Java default of 0 turns into an accidental zero-byte limit. An explicit validation here would be safer than silently inheriting a bad default.

Collaborator Author:
Thanks for the suggestion. After consideration, I don't think a fail-fast check is needed here:

  1. 0 is a valid value — after commit c64d3b2, zero is explicitly allowed for all three size parameters. SizeLimitHandler(0, -1) rejects all non-empty request bodies, which is a legitimate operator choice (e.g., disabling an HTTP endpoint while keeping the port open for health checks).

  2. All 7 subclasses already assign the field — every HttpService subclass sets maxRequestSize from Args.getInstance().getHttpMaxMessageSize() or getJsonRpcMaxMessageSize() before initContextHandler() runs. The config-level validation (non-negative check in Args.java) ensures the value is always sane.

  3. Operator responsibility — the size limit is an operational config. Adding a runtime guard for an uninitialized field would be defending against a developer coding error (forgetting to set the field in a future subclass), not a misconfiguration. That's better caught by code review than runtime checks.

So the current design relies on config validation + operator intent, which I think is the right layer for this.

Collaborator:
I don't think allowing operator-configured 0 removes the need for an explicit initialization guard here. The concern is not that 0 is invalid; it's that an intentional zero-byte limit and an uninitialized field are now indistinguishable, and the failure mode is a silently broken API.

Since the code now explicitly treats 0 as "reject every non-empty body", relying on Java's default zero is even riskier than before. I would still prefer making the initialization contract explicit here: constructor/abstract getter, or at least a sentinel value that makes "forgot to set maxRequestSize in a future subclass" fail fast rather than inheriting 0-byte behavior.
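The sentinel variant suggested above could look roughly like this — a minimal, self-contained sketch with hypothetical class names standing in for HttpService and its subclasses:

```java
// Hedged sketch (hypothetical names): a negative sentinel distinguishes
// "subclass never assigned maxRequestSize" from an operator-configured
// 0-byte limit (which legitimately means "reject every non-empty body").
public class SizeLimitInitDemo {
    public static void main(String[] args) {
        // Explicitly configured zero: legal, returned as-is.
        SizeLimitedService ok = new SizeLimitedService() {{ maxRequestSize = 0L; }};
        System.out.println(ok.requireMaxRequestSize());   // 0

        // Forgotten assignment: fails fast instead of silently limiting to 0 bytes.
        SizeLimitedService forgotten = new SizeLimitedService() {};
        try {
            forgotten.requireMaxRequestSize();
        } catch (IllegalStateException e) {
            System.out.println("fail-fast: " + e.getMessage());
        }
    }
}

abstract class SizeLimitedService {
    static final long UNSET = -1L;          // sentinel, never a valid limit
    protected long maxRequestSize = UNSET;

    // Called while wiring the handler chain; rejects the sentinel explicitly.
    long requireMaxRequestSize() {
        if (maxRequestSize == UNSET) {
            throw new IllegalStateException(
                    "maxRequestSize was never initialized; assign it in the subclass constructor");
        }
        return maxRequestSize;
    }
}
```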

sizeLimitHandler.setHandler(context);
this.apiServer.setHandler(sizeLimitHandler);
return context;
}

26 changes: 24 additions & 2 deletions framework/src/main/java/org/tron/core/config/args/Args.java
@@ -471,8 +471,30 @@ public static void applyConfigParams(
? config.getLong(ConfigKey.NODE_RPC_MAX_CONNECTION_AGE_IN_MILLIS)
: Long.MAX_VALUE;

PARAMETER.maxMessageSize = config.hasPath(ConfigKey.NODE_RPC_MAX_MESSAGE_SIZE)
? config.getInt(ConfigKey.NODE_RPC_MAX_MESSAGE_SIZE) : GrpcUtil.DEFAULT_MAX_MESSAGE_SIZE;
long rpcMaxMessageSize = config.hasPath(ConfigKey.NODE_RPC_MAX_MESSAGE_SIZE)
Collaborator:
Please add targeted Args coverage for the new parsing semantics here: human-readable sizes (4m, 128MB), zero-valued limits, and startup failure when node.rpc.maxMessageSize > Integer.MAX_VALUE. This logic changed materially and should have direct tests.
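As a self-contained illustration of the boundary cases such tests would pin down, the new range check reduces to the following (TronError is replaced by IllegalArgumentException so the sketch runs without project dependencies):

```java
// Hedged sketch: pure-Java mirror of the new node.rpc.maxMessageSize range
// check. A long-valued config entry must be non-negative and fit an int
// before being assigned to the existing int field maxMessageSize.
public class MessageSizeRangeDemo {
    public static void main(String[] args) {
        System.out.println(toIntMessageSize(4_194_304L));  // 4194304, the 4m default
        try {
            toIntMessageSize((long) Integer.MAX_VALUE + 1); // overflow case
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }

    static int toIntMessageSize(long bytes) {
        if (bytes < 0 || bytes > Integer.MAX_VALUE) {
            // The real code throws TronError(..., PARAMETER_INIT).
            throw new IllegalArgumentException(
                    "node.rpc.maxMessageSize must be non-negative and <= "
                    + Integer.MAX_VALUE + ", got: " + bytes);
        }
        return (int) bytes;
    }
}
```

The human-readable-size cases (`4m`, `128MB`) additionally exercise `config.getMemorySize(...).toBytes()`, which requires the Typesafe Config library and is therefore not reproduced in this dependency-free sketch.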

? config.getMemorySize(ConfigKey.NODE_RPC_MAX_MESSAGE_SIZE).toBytes()
: GrpcUtil.DEFAULT_MAX_MESSAGE_SIZE;
if (rpcMaxMessageSize < 0 || rpcMaxMessageSize > Integer.MAX_VALUE) {
throw new TronError("node.rpc.maxMessageSize must be non-negative and <= "
+ Integer.MAX_VALUE + ", got: " + rpcMaxMessageSize, PARAMETER_INIT);
}
PARAMETER.maxMessageSize = (int) rpcMaxMessageSize;

long defaultHttpMaxMessageSize = GrpcUtil.DEFAULT_MAX_MESSAGE_SIZE;
PARAMETER.httpMaxMessageSize = config.hasPath(ConfigKey.NODE_HTTP_MAX_MESSAGE_SIZE)
? config.getMemorySize(ConfigKey.NODE_HTTP_MAX_MESSAGE_SIZE).toBytes()
: defaultHttpMaxMessageSize;
if (PARAMETER.httpMaxMessageSize < 0) {
throw new TronError("node.http.maxMessageSize must be non-negative, got: "
+ PARAMETER.httpMaxMessageSize, PARAMETER_INIT);
}
PARAMETER.jsonRpcMaxMessageSize = config.hasPath(ConfigKey.NODE_JSONRPC_MAX_MESSAGE_SIZE)
? config.getMemorySize(ConfigKey.NODE_JSONRPC_MAX_MESSAGE_SIZE).toBytes()
: defaultHttpMaxMessageSize;
if (PARAMETER.jsonRpcMaxMessageSize < 0) {
throw new TronError("node.jsonrpc.maxMessageSize must be non-negative, got: "
+ PARAMETER.jsonRpcMaxMessageSize, PARAMETER_INIT);
}

PARAMETER.maxHeaderListSize = config.hasPath(ConfigKey.NODE_RPC_MAX_HEADER_LIST_SIZE)
? config.getInt(ConfigKey.NODE_RPC_MAX_HEADER_LIST_SIZE)
@@ -136,6 +136,7 @@ private ConfigKey() {
public static final String NODE_HTTP_SOLIDITY_ENABLE = "node.http.solidityEnable";
public static final String NODE_HTTP_PBFT_ENABLE = "node.http.PBFTEnable";
public static final String NODE_HTTP_PBFT_PORT = "node.http.PBFTPort";
public static final String NODE_HTTP_MAX_MESSAGE_SIZE = "node.http.maxMessageSize";

// node - jsonrpc
public static final String NODE_JSONRPC_HTTP_FULLNODE_ENABLE =
@@ -150,6 +151,7 @@
public static final String NODE_JSONRPC_MAX_SUB_TOPICS = "node.jsonrpc.maxSubTopics";
public static final String NODE_JSONRPC_MAX_BLOCK_FILTER_NUM =
"node.jsonrpc.maxBlockFilterNum";
public static final String NODE_JSONRPC_MAX_MESSAGE_SIZE = "node.jsonrpc.maxMessageSize";

// node - dns
public static final String NODE_DNS_TREE_URLS = "node.dns.treeUrls";
@@ -297,6 +297,7 @@ public FullNodeHttpApiService() {
port = Args.getInstance().getFullNodeHttpPort();
enable = isFullNode() && Args.getInstance().isFullNodeHttpEnable();
contextPath = "/";
maxRequestSize = Args.getInstance().getHttpMaxMessageSize();
}

@Override
6 changes: 4 additions & 2 deletions framework/src/main/java/org/tron/core/services/http/Util.java
@@ -327,10 +327,12 @@ public static Transaction packTransaction(String strTransaction, boolean selfTyp
}
}

@Deprecated
Collaborator:
Marking this helper as deprecated does not make the new HTTP limit effective yet. PostParams.getPostParams() and many servlets still call Util.checkBodySize(), and that method is still enforcing parameter.getMaxMessageSize() (the gRPC limit), not httpMaxMessageSize. So any request whose body is > node.rpc.maxMessageSize but <= node.http.maxMessageSize will pass Jetty and then still be rejected in the servlet layer, which means the new independent HTTP setting is not actually honored for a large part of the API surface.

Collaborator Author:
Good catch, thanks. checkBodySize() has been updated to use parameter.getHttpMaxMessageSize() instead of parameter.getMaxMessageSize(), so the servlet-layer fallback now honors the independent HTTP limit. See the latest push.

Collaborator:
Can SizeLimitHandler reliably cover all request-body paths? If the answer is yes, I think it would be better to remove the remaining Util.checkBodySize() calls entirely instead of keeping two different enforcement layers. Right now the two checks are not measuring exactly the same thing: SizeLimitHandler enforces on the raw HTTP request-body bytes seen by Jetty, while Util.checkBodySize() runs after the body has already been decoded into a String and then measures body.getBytes().length. In edge cases such as line-ending normalization or non-UTF-8 request encodings, that value may not be strictly identical to the byte count used by SizeLimitHandler, which means we could end up with small discrepancies where Jetty accepts a request but the servlet-layer fallback still rejects it. If SizeLimitHandler is the intended authoritative mechanism, having only one source of truth seems safer and easier to reason about.

Collaborator Author:
Good analysis. I agree that long-term, a single enforcement layer is cleaner. Here's a detailed breakdown of the two layers:

Are they actually measuring the same thing?

I added tests in commit 5e3eed0 to verify this empirically:

| Scenario | SizeLimitHandler (wire bytes) | checkBodySize (`body.getBytes().length`) | Result |
| --- | --- | --- | --- |
| ASCII JSON | N | N | Identical (`testWireBytesMatchCheckBodySizeForAsciiJson`) |
| UTF-8 JSON with CJK | N | N | Identical (`testWireBytesMatchCheckBodySizeForUtf8Json`) |
| Body with `\r\n` line endings | N | N − k (k = number of `\r` stripped) | checkBodySize ≤ wire (`testCheckBodySizeSafeDirectionWithNewlines`) |

For TRON's JSON API (UTF-8, mostly ASCII), the two measurements are strongly consistent. The only divergence comes from line-ending normalization via lines().collect(), and it's in a safe direction: checkBodySize sees fewer bytes, so it never rejects what SizeLimitHandler accepts for the same limit value.

Even in the unlikely case of minor discrepancies, operators can adjust the threshold up or down to achieve their desired limit — the behavior is predictable, not erratic.
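The safe-direction claim can be reproduced with plain string arithmetic. This is a self-contained sketch; the `lines().collect(joining("\n"))` call mirrors how the servlet layer is described above as reassembling the body, and is an assumption about that code path rather than a copy of it:

```java
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

// Hedged sketch: the wire body arrives with CRLF line endings; rebuilding it
// via lines().collect(joining("\n")) drops every '\r' and any trailing
// newline, so the servlet-layer measurement can only shrink relative to the
// raw byte count Jetty enforces against.
public class BodySizeDivergenceDemo {
    public static void main(String[] args) {
        String wire = "{\"a\":1}\r\n{\"b\":2}\r\n";
        int wireBytes = wire.getBytes(StandardCharsets.UTF_8).length;           // 18

        String normalized = wire.lines().collect(Collectors.joining("\n"));
        int measuredBytes = normalized.getBytes(StandardCharsets.UTF_8).length; // 15

        // Safe direction: the checkBodySize-style count never exceeds wire size.
        System.out.println(measuredBytes <= wireBytes);   // true
        System.out.println(wireBytes - measuredBytes);    // 3 (two '\r' + trailing '\n')
    }
}
```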

Why keep both for now?

Removing checkBodySize() in this PR would touch 100+ servlet files (direct calls + PostParams.getPostParams()). I'd prefer to keep this PR focused on adding SizeLimitHandler and handle the removal in a follow-up:

  1. This PR — add SizeLimitHandler as the authoritative first layer, mark checkBodySize() as @Deprecated
  2. Follow-up PR — remove checkBodySize() and all call sites

This follows the standard deprecate-then-remove pattern and keeps each PR reviewable.

public static void checkBodySize(String body) throws Exception {
CommonParameter parameter = Args.getInstance();
if (body.getBytes().length > parameter.getMaxMessageSize()) {
throw new Exception("body size is too big, the limit is " + parameter.getMaxMessageSize());
if (body.getBytes().length > parameter.getHttpMaxMessageSize()) {
throw new Exception("body size is too big, the limit is "
+ parameter.getHttpMaxMessageSize());
}
}

@@ -170,6 +170,7 @@ public SolidityNodeHttpApiService() {
port = Args.getInstance().getSolidityHttpPort();
enable = !isFullNode() && Args.getInstance().isSolidityNodeHttpEnable();
contextPath = "/";
maxRequestSize = Args.getInstance().getHttpMaxMessageSize();
}

@Override
@@ -19,6 +19,7 @@ public JsonRpcServiceOnPBFT() {
port = Args.getInstance().getJsonRpcHttpPBFTPort();
enable = isFullNode() && Args.getInstance().isJsonRpcHttpPBFTNodeEnable();
contextPath = "/";
maxRequestSize = Args.getInstance().getJsonRpcMaxMessageSize();
}

@Override
@@ -19,6 +19,7 @@ public JsonRpcServiceOnSolidity() {
port = Args.getInstance().getJsonRpcHttpSolidityPort();
enable = isFullNode() && Args.getInstance().isJsonRpcHttpSolidityNodeEnable();
contextPath = "/";
maxRequestSize = Args.getInstance().getJsonRpcMaxMessageSize();
}

@Override
@@ -173,6 +173,7 @@ public HttpApiOnPBFTService() {
port = Args.getInstance().getPBFTHttpPort();
enable = isFullNode() && Args.getInstance().isPBFTHttpEnable();
contextPath = "/walletpbft";
maxRequestSize = Args.getInstance().getHttpMaxMessageSize();
}

@Override
@@ -181,6 +181,7 @@ public HttpApiOnSolidityService() {
port = Args.getInstance().getSolidityHttpPort();
enable = isFullNode() && Args.getInstance().isSolidityNodeHttpEnable();
contextPath = "/";
maxRequestSize = Args.getInstance().getHttpMaxMessageSize();
}

@Override
@@ -24,6 +24,7 @@ public FullNodeJsonRpcHttpService() {
port = Args.getInstance().getJsonRpcHttpFullNodePort();
enable = isFullNode() && Args.getInstance().isJsonRpcHttpFullNodeEnable();
contextPath = "/";
maxRequestSize = Args.getInstance().getJsonRpcMaxMessageSize();
}

@Override
13 changes: 11 additions & 2 deletions framework/src/main/resources/config.conf
@@ -223,6 +223,10 @@ node {
solidityPort = 8091
PBFTEnable = true
PBFTPort = 8092

# The maximum request body size for HTTP API, default 4M (4194304 bytes).
Collaborator:
Now that zero is intentionally allowed, please document its concrete behavior here. “Must be non-negative” is not enough: operators may reasonably read 0 as “unlimited” or “disabled”, while SizeLimitHandler(0, -1) actually means “reject any non-empty request body”. That semantic needs to be explicit in the sample config.

# Supports human-readable sizes: 4m, 4MB, 4194304. Must be non-negative.
# maxMessageSize = 4m
}

rpc {
@@ -248,8 +252,9 @@ node {
# Connection lasting longer than which will be gracefully terminated
# maxConnectionAgeInMillis =

# The maximum message size allowed to be received on the server, default 4MB
# maxMessageSize =
# The maximum message size allowed to be received on the server, default 4M (4194304 bytes).
# Supports human-readable sizes: 4m, 4MB, 4194304. Must be non-negative.
# maxMessageSize = 4m

# The maximum size of header list allowed to be received, default 8192
# maxHeaderListSize =
@@ -357,6 +362,10 @@ node {
# openHistoryQueryWhenLiteFN = false

jsonrpc {
# The maximum request body size for JSON-RPC API, default 4M (4194304 bytes).
# Supports human-readable sizes: 4m, 4MB, 4194304. Must be non-negative.
# maxMessageSize = 4m

# Note: Before release_4.8.1, if you turn on jsonrpc and run it for a while and then turn it off,
# you will not be able to get the data from eth_getLogs for that period of time. Default: false
# httpFullNodeEnable = false