AlibabaResearch/DecoupledTemporalEncoding

Decoupled Temporal Encoding (DTE)

This repository provides a minimal TensorFlow 1.12 implementation of Decoupled Temporal Encoding (DTE) for generative recommendation.

DTE contains two temporal components:

  • Macro-temporal module: injects temporal context into input item embeddings.
  • Micro-sequential module: adds a time-gated order bias to masked self-attention.

This code is intended as a reference implementation and reproducible demo for the paper, based on the KuaiRand 1K setting; it is not a full production system.

Environment

This code is developed and tested with:

  • Framework: TensorFlow 1.12

Note: this project is implemented in TensorFlow 1.x graph mode.

Model Setting

The demo follows the paper setting:

  • decoder-only Transformer backbone
  • single decoder block
  • 8 attention heads
  • embedding size = 64
  • MLP tower = 16 -> 8 -> 1
  • optimizer = Adam
  • learning rate = 1e-4
  • batch size = 1024
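The hyperparameters above could be collected in a config.py-style module. A minimal sketch follows; the key names are illustrative assumptions, not necessarily the identifiers used in the repository's config.py:

```python
# Minimal config sketch mirroring the paper setting listed above.
# All key names are illustrative; the actual config.py may differ.
CONFIG = {
    "num_blocks": 1,          # single decoder block
    "num_heads": 8,           # attention heads
    "embedding_size": 64,
    "mlp_tower": [16, 8, 1],  # prediction tower layer sizes
    "optimizer": "adam",
    "learning_rate": 1e-4,
    "batch_size": 1024,
}
```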

For the temporal modules:

Macro module

  • learnable decay coefficient lambda
  • adaptive fusion MLP: 16 -> 8 -> 4
  • user context from average pooling over historical item embeddings
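As a rough illustration of the macro module's ingredients, the NumPy sketch below applies an exponential decay exp(-lambda * Δt) to historical item embeddings and pools them into a user context vector. The decay form, shapes, and the normalized-weight pooling are assumptions made for illustration; in the actual model lambda is learnable and the adaptive fusion is a 16 -> 8 -> 4 MLP rather than a fixed weighting:

```python
import numpy as np

def macro_context(item_embs, time_deltas, lam=0.1):
    """Decay-weighted average pooling over historical item embeddings.

    item_embs:   (T, D) historical item embeddings
    time_deltas: (T,)   elapsed time from each item to now
    lam:         decay coefficient (learnable in the actual model)
    """
    weights = np.exp(-lam * time_deltas)   # older items contribute less
    weights = weights / weights.sum()      # normalize to a convex combination
    return weights @ item_embs             # (D,) user context vector

rng = np.random.default_rng(0)
ctx = macro_context(rng.normal(size=(5, 64)),
                    np.array([0.0, 1.0, 2.0, 3.0, 4.0]))
```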

Micro module

  • pairwise time gaps are computed in seconds
  • learnable threshold tau
  • fixed temperature gamma = 10
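One plausible reading of the micro module's time gate is a smooth threshold on pairwise gaps: a sigmoid that stays near 1 for gaps below tau and falls toward 0 for larger gaps. The exact gate form below (including dividing by tau to make the temperature scale-free) is an assumption for illustration; only the ingredients — gaps in seconds, learnable tau, fixed gamma = 10 — come from the description above:

```python
import numpy as np

def time_gate(timestamps, tau=60.0, gamma=10.0):
    """Time-gated mask over pairwise gaps (seconds).

    timestamps: (T,) event times in seconds
    tau:        gap threshold (learnable in the actual model)
    gamma:      fixed temperature (10 in the paper setting)
    Returns a (T, T) gate in (0, 1): ~1 for gaps well below tau,
    ~0 for gaps well above it. Gate form is an assumed sigmoid.
    """
    gaps = np.abs(timestamps[:, None] - timestamps[None, :])
    return 1.0 / (1.0 + np.exp(-gamma * (tau - gaps) / tau))

gate = time_gate(np.array([0.0, 10.0, 30.0, 600.0]))
```

In the attention layer, a gate like this would multiply (or mask) the order bias before it is added to the masked self-attention logits.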

File Structure

  • config.py
  • feature_maps.py
  • layers.py: macro and micro temporal modules
  • data_loader.py: local txt data reader
  • model.py: model graph
  • train.py: training entry

How to Run

Install dependencies:

pip install tensorflow==1.12.0 numpy

Prepare your local data and training file.

Run:

python train.py

Notes

  • This is a minimal demo, not a full industrial system.
  • Only a subset of user/item side features is retained.
  • The code focuses on the two main ideas of the paper:
    • macro temporal encoding
    • micro time-gated attention bias

Citation

If you find this code useful, please cite:

@inproceedings{dte2026,
  title={Decoupled Temporal Encoding for Generative Recommendation},
  author={Pengfei Jia and Jingjian Wang and Jingmao Li and Ge Zhang and Feng Shi},
  year={2026}
}
