
Conversation


zayac commented on Nov 12, 2025

Summary

Implements the LOG operation for the Vulkan backend, with F32 and F16 support.

Part of #14909.

Testing

./build/bin/test-backend-ops -o LOG
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 5070 (NVIDIA) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2
Testing 2 devices

Backend 1/2: Vulkan0
  Device description: NVIDIA GeForce RTX 5070
  Device memory: 12227 MB (10998 MB free)

  LOG(type=f16,ne=[10,5,4,3]): OK
  LOG(type=f16,ne=[7,1,5,3]): OK
  LOG(type=f32,ne=[10,5,4,3]): OK
  LOG(type=f32,ne=[7,1,5,3]): OK
  4/4 tests passed
  Backend Vulkan0: OK
Backend 2/2: CPU
  Skipping CPU backend
2/2 backends passed
OK

zayac requested a review from 0cc4m as a code owner on November 12, 2025 at 00:48
github-actions bot added the documentation, Vulkan, and ggml labels on Nov 12, 2025
}
return nullptr;
case GGML_OP_LOG:
if ((src0->type == GGML_TYPE_F32 || src0->type == GGML_TYPE_F16) &&

Collaborator review comment:

This should check that src and dst types match.
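
A minimal sketch of the suggested check, following the pattern of the neighboring cases (the surrounding pipeline-selection code is assumed, since it is not shown in this hunk):

    case GGML_OP_LOG:
        // Hypothetical fix per the review: also require that src0 and dst
        // have the same type before selecting a pipeline.
        if ((src0->type == GGML_TYPE_F32 || src0->type == GGML_TYPE_F16) &&
            src0->type == dst->type) {
            // ... return the f32/f16 LOG pipeline as in the original patch ...
        }
        return nullptr;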

@@ -0,0 +1,19 @@
#version 450

#extension GL_EXT_shader_explicit_arithmetic_types_float16 : enable

Collaborator review comment:

Remove this line, not all devices support it and it shouldn't be needed.
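
For illustration, a sketch of the shader preamble without the extension line, assuming (as in the repository's other element-wise shaders such as cos.comp) that the shared headers provide the needed type support:

    #version 450

    // A_TYPE, D_TYPE and FLOAT_TYPE are injected at compile time by
    // string_to_spv; no explicit fp16-arithmetic extension is declared here.
    #include "types.comp"
    #include "generic_unary_head.comp"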

return;
}

const float val = float(data_a[get_aoffset() + src0_idx(idx)]);

Collaborator review comment:

Suggest using FLOAT_TYPE rather than float.
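
That is, a sketch of the revised lines (the store line follows the pattern of cos.comp; FLOAT_TYPE is the compile-time define passed in from string_to_spv):

    const FLOAT_TYPE val = FLOAT_TYPE(data_a[get_aoffset() + src0_idx(idx)]);
    data_d[get_doffset() + dst_idx(idx)] = D_TYPE(log(val));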

string_to_spv("cos_f32", "cos.comp", {{"A_TYPE", "float"}, {"D_TYPE", "float"}, {"FLOAT_TYPE", "float"}});

string_to_spv("log_f32", "log.comp", {{"A_TYPE", "float"}, {"D_TYPE", "float"}, {"FLOAT_TYPE", "float"}});
string_to_spv("log_f16", "log.comp", {{"A_TYPE", "float16_t"}, {"D_TYPE", "float16_t"}, {"FLOAT_TYPE", "float16_t"}});

Collaborator review comment:

Suggested change:

-string_to_spv("log_f16", "log.comp", {{"A_TYPE", "float16_t"}, {"D_TYPE", "float16_t"}, {"FLOAT_TYPE", "float16_t"}});
+string_to_spv("log_f16", "log.comp", {{"A_TYPE", "float16_t"}, {"D_TYPE", "float16_t"}, {"FLOAT_TYPE", "float16"}});
