I work at the intersection of Linux systems, GPU computing, and AI/ML workflows.
My focus is building systems that are correct, reproducible, and debuggable, especially where performance, GPUs, and machine learning infrastructure meet.
Rather than treating the OS or runtime as a black box, I care about understanding how systems behave under real workloads.
- Linux system performance & power management
- NVIDIA GPUs, CUDA, and GPU debugging
- Containerized ML workflows (Docker + GPU)
- Python environments for AI/ML (PEP 668–compliant)
- Reproducible development and infrastructure setups
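On PEP 668 "externally managed" distros (recent Ubuntu among them), the system `pip` refuses global installs, so per-project virtual environments are the compliant route. A minimal sketch using only the standard library (the target path is illustrative):

```python
import pathlib
import tempfile
import venv

# PEP 668 marks the system interpreter "externally managed";
# an isolated virtual environment sidesteps that restriction.
target = pathlib.Path(tempfile.mkdtemp()) / "ml-env"  # hypothetical location

# with_pip=False skips ensurepip, which keeps creation fast and avoids
# distros that ship ensurepip in a separate package.
venv.EnvBuilder(with_pip=False).create(target)

print(target / "bin" / "python")
```

In day-to-day use the same thing is `python3 -m venv <path>` followed by activating the environment before any `pip install`.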
Host-level system tuning for Linux + GPU workloads
- CPU power profiles & performance tuning
- NVIDIA PRIME + CUDA validation
- Memory & swap optimization
- PyTorch GPU verification
- Benchmarks, design decisions, and lessons learned
👉 https://github.com/vikram2327/ubuntu-performance-ml-setup
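The PyTorch GPU verification step above can be sketched as a small check that degrades gracefully when `torch` or a GPU is absent (the function name is illustrative):

```python
import importlib.util

def cuda_status() -> str:
    """Report GPU visibility without assuming torch or a GPU is present."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if not torch.cuda.is_available():
        return "CUDA not available"
    # Name of the first visible CUDA device, e.g. an RTX-series card.
    return f"CUDA ok: {torch.cuda.get_device_name(0)}"

print(cuda_status())
```

Running this on the host and again inside a container is a quick way to confirm the driver, CUDA runtime, and framework all agree.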
Containerized GPU workflows with explicit verification
- Docker Engine + NVIDIA Container Toolkit
- GPU passthrough into containers
- CUDA & PyTorch GPU validation inside Docker
- Clear separation between host and container concerns
- Troubleshooting and design rationale included
👉 https://github.com/vikram2327/docker-nvidia-gpu-ml
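With the NVIDIA Container Toolkit installed, GPU passthrough can be requested declaratively; a minimal Docker Compose sketch (service and image names are illustrative):

```yaml
services:
  train:
    image: pytorch/pytorch:latest
    command: python -c "import torch; print(torch.cuda.is_available())"
    deploy:
      resources:
        reservations:
          devices:
            # Ask the nvidia driver for one GPU with the "gpu" capability.
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

The equivalent one-off check is `docker run --gpus all <image> nvidia-smi`, which keeps the host/container boundary explicit: the driver lives on the host, the CUDA userspace in the image.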
- Prefer correctness over shortcuts
- Make system behavior observable and verifiable
- Document trade-offs, not just steps
- Keep setups reproducible and explainable
- Avoid fragile hacks that break over time
- GitHub: https://github.com/vikram2327
- LinkedIn: https://www.linkedin.com/in/vikrampratapsingh2
I’m always interested in conversations around Linux systems, GPU computing, ML infrastructure, and performance engineering.