
fix: Pre-create S3A event log dir before SparkContext init #6317

Merged
ntkathole merged 5 commits into feast-dev:master from
abhijeet-dhumal:fix/spark-s3a-event-log-init
Apr 27, 2026

Conversation

@abhijeet-dhumal
Contributor

@abhijeet-dhumal abhijeet-dhumal commented Apr 22, 2026

What this PR does / why we need it:

When spark.eventLog.enabled: "true" and spark.eventLog.dir points to an S3A path, feast materialize-incremental silently writes nothing to the online store and exits with code 0.
The failure chain:

SparkContext.__init__
  └─ EventLoggingListener.start()
       └─ EventLogFileWriter.requireLogBaseDirAsDirectory()
            └─ S3A 404 (prefix doesn't exist) → raises RuntimeException
                 └─ caught by _materialize_one(except Exception) → ERROR job
                      └─ CLI exits 0 — no data written, no visible error

S3 has no real directories. An empty prefix is indistinguishable from "does not exist", so Spark's pre-flight check always fails on a fresh bucket.
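The indistinguishability can be demonstrated with a plain LIST call (a minimal sketch; `prefix_has_objects` is an illustrative name, and `s3` stands for any boto3-style S3 client):

```python
def prefix_has_objects(s3, bucket: str, prefix: str) -> bool:
    """True only if at least one object key starts with `prefix`.

    On S3 a "directory" is just a key prefix, so an empty prefix and a
    prefix that was never created both come back with KeyCount == 0 --
    which is why Spark's pre-flight check fails on a fresh bucket.
    """
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)
    return resp.get("KeyCount", 0) > 0
```

For a fresh bucket, `prefix_has_objects(s3, "my-bucket", "spark-events/")` is False whether or not anyone ever "created" the spark-events/ directory.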

How this PR fixes it:

In get_or_create_new_spark_session() (compute_engines/spark/utils.py), before building the SparkSession, call _ensure_s3a_event_log_dir() which:

  1. Checks if the S3A prefix already contains objects (no-op if it does)
  2. Writes a zero-byte .keep placeholder if empty
  3. Uses boto3 — already a Feast dependency via the S3 offline store
  4. Is fully non-fatal: swallows errors and lets Spark surface its own message if the write fails

No-ops for non-S3A paths (hdfs://, file://, etc.) and when event logging is disabled.
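The steps above can be sketched roughly as follows (illustrative only, not the merged implementation; the function name follows the PR description and the config keys are standard Spark properties):

```python
import logging
from urllib.parse import urlparse

logger = logging.getLogger(__name__)


def ensure_s3a_event_log_dir(spark_config: dict) -> None:
    """Pre-create the S3A event log prefix so SparkContext init won't fail.

    Non-fatal by design: any failure is logged and Spark is left to
    surface its own error message.
    """
    if spark_config.get("spark.eventLog.enabled", "false").lower() != "true":
        return  # event logging disabled: nothing to do
    log_dir = spark_config.get("spark.eventLog.dir", "")
    if not log_dir.startswith("s3a://"):
        return  # no-op for hdfs://, file://, etc.

    parsed = urlparse(log_dir)
    bucket = parsed.netloc
    prefix = parsed.path.lstrip("/")
    if prefix and not prefix.endswith("/"):
        prefix += "/"

    try:
        import boto3  # already a Feast dependency via the S3 offline store

        s3 = boto3.client("s3")
        resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)
        if resp.get("KeyCount", 0) == 0:
            # Zero-byte placeholder makes the "directory" visible to Spark.
            s3.put_object(Bucket=bucket, Key=f"{prefix}.keep", Body=b"")
    except Exception as exc:  # fully non-fatal: swallow and warn
        logger.warning("Could not pre-create event log dir %s: %s", log_dir, exc)
```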

Checks

  • I've made sure the tests are passing.
  • My commits are signed off (git commit -s)
  • My PR title follows conventional commits format

Testing Strategy

  • Unit tests
  • Integration tests
  • Manual tests
  • Testing is not required for this change


@abhijeet-dhumal abhijeet-dhumal requested a review from a team as a code owner April 22, 2026 15:21
devin-ai-integration[bot]

This comment was marked as resolved.

@abhijeet-dhumal abhijeet-dhumal force-pushed the fix/spark-s3a-event-log-init branch from c8351c5 to 448212d Compare April 22, 2026 15:40
@ntkathole ntkathole changed the title fix(spark): pre-create S3A event log dir before SparkContext init fix: Pre-create S3A event log dir before SparkContext init Apr 22, 2026

@R-behera R-behera left a comment


This looks like a useful guard for the S3A event log edge case, and the focused tests help. One follow-up worth considering is whether some Feast users rely on credentials or endpoint details only through Spark/Hadoop config rather than environment variables. If so, a short note or test around that path could prevent surprises when the pre-create step runs before Spark fully applies the config.

"spark.hadoop.fs.s3a.endpoint",
os.environ.get("FEAST_S3A_ENDPOINT", ""),
)
access_key = os.environ.get("AWS_ACCESS_KEY_ID", "")
Member

access_key = spark_config.get(
    "spark.hadoop.fs.s3a.access.key",
    os.environ.get("AWS_ACCESS_KEY_ID", ""),
)
secret_key = spark_config.get(
    "spark.hadoop.fs.s3a.secret.key",
    os.environ.get("AWS_SECRET_ACCESS_KEY", ""),
)
session_token = spark_config.get(
    "spark.hadoop.fs.s3a.session.token",
    os.environ.get("AWS_SESSION_TOKEN", ""),
) or None

@ntkathole
Member

@abhijeet-dhumal Let's handle both the comment from devin and @R-behera's suggestion

@abhijeet-dhumal abhijeet-dhumal force-pushed the fix/spark-s3a-event-log-init branch from b60d47c to 19bdd11 Compare April 24, 2026 08:15
@abhijeet-dhumal
Contributor Author

@abhijeet-dhumal Let's handle both the comment from devin and @R-behera's suggestion

@ntkathole Addressed both your comments ✅
Credentials (access.key, secret.key, session.token) are now read from the Spark config first with an env var fallback, and the Devin-flagged bucket-root path bug is fixed.

@abhijeet-dhumal
Contributor Author

This looks like a useful guard for the S3A event log edge case, and the focused tests help. One follow-up worth considering is whether some Feast users rely on credentials or endpoint details only through Spark/Hadoop config rather than environment variables. If so, a short note or test around that path could prevent surprises when the pre-create step runs before Spark fully applies the config.

@R-behera Good catch on the Spark/Hadoop config credentials path ✅
_ensure_s3a_event_log_dir now reads spark.hadoop.fs.s3a.access.key, secret.key, and session.token from the spark config before falling back to environment variables. Added tests verifying both the spark-config-takes-precedence and env-var-fallback paths.


endpoint = spark_config.get(
    "spark.hadoop.fs.s3a.endpoint",
    os.environ.get("FEAST_S3A_ENDPOINT", ""),
Member

Wondering if this can be AWS_ENDPOINT_URL instead, or at least we need to document this new env var in our docs?

Contributor Author

Good call — switched to AWS_ENDPOINT_URL. No custom env vars to document now. Spark config (spark.hadoop.fs.s3a.endpoint) still takes precedence when set.
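The agreed precedence can be sketched as a small resolver (illustrative; `resolve_s3a_endpoint` is not a name from the PR, and returning None lets boto3 fall back to its AWS default endpoint):

```python
import os
from typing import Optional


def resolve_s3a_endpoint(spark_config: dict) -> Optional[str]:
    """Resolve the S3 endpoint: Spark config wins, AWS_ENDPOINT_URL is the
    env var fallback, and None means "use the AWS default"."""
    endpoint = spark_config.get(
        "spark.hadoop.fs.s3a.endpoint",
        os.environ.get("AWS_ENDPOINT_URL", ""),
    )
    return endpoint or None  # boto3 treats endpoint_url=None as the default
```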

@ntkathole
Member

@abhijeet-dhumal let's fix the linting

aws_access_key_id=access_key or None,
aws_secret_access_key=secret_key or None,
aws_session_token=session_token,
config=BotoConfig(signature_version="s3v4"),
Member

Also, consider supporting MinIO or other path-style addressing:

addressing_style = (
    "path"
    if spark_config.get("spark.hadoop.fs.s3a.path.style.access", "false").lower() == "true"
    else "auto"
)

config=BotoConfig(
    signature_version="s3v4",
    s3={"addressing_style": addressing_style},
)

Contributor Author

Added ✅ — _ensure_s3a_event_log_dir now reads spark.hadoop.fs.s3a.path.style.access and passes addressing_style: "path" to BotoConfig when it's "true", otherwise defaults to "auto". Tests cover both paths.
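The flag-to-style mapping is small enough to isolate (a sketch; `resolve_addressing_style` is an illustrative name, not the merged helper):

```python
def resolve_addressing_style(spark_config: dict) -> str:
    """Map Spark's S3A path-style flag onto boto3's addressing_style.

    MinIO and similar S3-compatible stores typically require "path";
    "auto" lets boto3 pick virtual-hosted style for real AWS endpoints.
    """
    path_style = (
        spark_config.get("spark.hadoop.fs.s3a.path.style.access", "false").lower()
        == "true"
    )
    return "path" if path_style else "auto"
```

The result would then feed `BotoConfig(signature_version="s3v4", s3={"addressing_style": ...})` as in the suggestion above.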

…prevent silent materialize failure

Spark's EventLogFileWriter.requireLogBaseDirAsDirectory() is called
inside SparkContext.__init__. When spark.eventLog.dir points to an S3A
path that doesn't exist yet (S3 has no real directories), SparkContext
fails to initialise — silently from Feast's perspective because
_materialize_one() catches the exception and returns an ERROR job.

Add _ensure_s3a_event_log_dir() to utils.py: before building the
SparkSession, check if the S3A prefix exists and write a zero-byte
placeholder if it doesn't. Uses boto3 (already a Feast dep via S3 offline
store). Non-fatal: logs a warning and lets Spark surface its own error
if the write fails.

Signed-off-by: abhijeet-dhumal <abhijeetdhumal652@gmail.com>
… config, add session token support

Signed-off-by: abhijeet-dhumal <abhijeetdhumal652@gmail.com>
…linting

Signed-off-by: abhijeet-dhumal <abhijeetdhumal652@gmail.com>
Signed-off-by: abhijeet-dhumal <abhijeetdhumal652@gmail.com>
Signed-off-by: abhijeet-dhumal <abhijeetdhumal652@gmail.com>
@ntkathole ntkathole force-pushed the fix/spark-s3a-event-log-init branch from 22b7e8e to 70215e2 Compare April 27, 2026 11:54
@ntkathole ntkathole merged commit 9feca77 into feast-dev:master Apr 27, 2026
21 of 24 checks passed

3 participants