
Conversation

Contributor

@yucai-intel commented Nov 21, 2025

#2219
To temporarily work around the issue where FP16's -0.0 is erroneously converted to NaN during certain fusion passes (fp16 -> fp32 -> fp8), we are currently avoiding the sycl::half data type in the intermediate conversion steps.
This bypass prevents the problematic fusion from occurring, ensuring that negative zero is handled correctly until the underlying error is fixed.
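
The changed cast path follows one pattern throughout (the actual specializations are quoted in the review comments below). A minimal sketch of that pattern, with a hypothetical helper name and assuming the c10 Half/Float8 headers:

#include <c10/util/Float8_e4m3fn.h>
#include <c10/util/Half.h>

// Sketch only: decode the raw half bits (src_val.x) with the bit-exact
// c10::detail::fp16_ieee_to_fp32_value helper, so the value never passes
// through sycl::half and the sign of -0.0 survives the fp16 -> fp32 -> fp8 chain.
// half_to_float8_via_fp32 is a hypothetical name, not part of the PR.
template <typename Float8T>
C10_HOST_DEVICE Float8T half_to_float8_via_fp32(c10::Half src_val) {
  return Float8T(c10::detail::fp16_ieee_to_fp32_value(src_val.x));
}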

@yucai-intel changed the title from "Float8 Conversion: Forced Correction for -0.0" to "Temporary Fix for FP16 -> FP8 conversion failure on -0.0" on Nov 27, 2025
@yucai-intel marked this pull request as ready for review on November 27, 2025 08:44
Contributor

@guangyey left a comment

One question: how did you identify this as a compiler issue? Was any reproducer found or a regression compiler version detected?

Contributor

CuiYifeng commented Nov 28, 2025

> One question: how did you identify this as a compiler issue? Was any reproducer found or a regression compiler version detected?

@guangyey Thanks for the question. We found that this issue does not occur with the following explicit fp16->fp32->fp8 conversion:

import torch
x = torch.tensor(-0.0, dtype=torch.float16).xpu()
y = x.to(torch.float32)          # explicit intermediate fp32 step
z = y.to(torch.float8_e4m3fn)
print(z)                         # -0.0, the sign is preserved

However, we get NaN in the following usage, where the fp16 -> fp32 conversion is implicit:

import torch
x = torch.tensor(-0.0, dtype=torch.float16).xpu()
z = x.to(torch.float8_e4m3fn)    # implicit fp16 -> fp32 -> fp8 in one step
print(z)                         # nan instead of -0.0

The key difference between the two cases is that the first conversion is submitted as two kernels, while the second is submitted as a single kernel in which additional optimizations are applied. This conjecture has been confirmed by a local reproducer.
Furthermore, we are currently not sure whether the problem is caused by the compiler or by IGC, so I have updated the PR description.
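
As a note on the expected result (independent of the kernel-fusion details): -0.0 is a sign-bit-only encoding in every format in this chain, so a correct conversion can never yield a NaN payload. A small host-only C++ check of the relevant bit patterns, for illustration:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  // fp16 -0.0: sign bit set, exponent and mantissa all zero.
  const uint16_t fp16_neg_zero = 0x8000;
  // fp32 -0.0: same shape, just wider.
  float fp32_neg_zero = -0.0f;
  uint32_t fp32_bits;
  std::memcpy(&fp32_bits, &fp32_neg_zero, sizeof(fp32_bits));
  std::printf("fp16 -0.0 bits: 0x%04x\n", (unsigned)fp16_neg_zero); // 0x8000
  std::printf("fp32 -0.0 bits: 0x%08x\n", (unsigned)fp32_bits);     // 0x80000000
  // fp8 e4m3fn -0.0 should therefore be 0x80 (sign bit only); NaN in
  // e4m3fn is the all-ones payload 0x7f/0xff, which is what the nan
  // printed by the fused path corresponds to.
  return 0;
}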

Contributor

Copilot AI left a comment

Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

@CuiYifeng requested a review from Copilot on November 28, 2025 13:13
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.


@CuiYifeng self-requested a review on December 1, 2025 02:59
@CuiYifeng requested a review from Copilot on December 1, 2025 06:08
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.


Comment on lines 31 to 58
template <>
struct CastScalarFunc<Half, Float8_e4m3fn> {
  C10_HOST_DEVICE Float8_e4m3fn operator()(Half src_val) const {
    return Float8_e4m3fn(c10::detail::fp16_ieee_to_fp32_value(src_val.x));
  }
};

template <>
struct CastScalarFunc<Half, Float8_e4m3fnuz> {
  C10_HOST_DEVICE Float8_e4m3fnuz operator()(Half src_val) const {
    return Float8_e4m3fnuz(c10::detail::fp16_ieee_to_fp32_value(src_val.x));
  }
};

template <>
struct CastScalarFunc<Half, Float8_e5m2> {
  C10_HOST_DEVICE Float8_e5m2 operator()(Half src_val) const {
    return Float8_e5m2(c10::detail::fp16_ieee_to_fp32_value(src_val.x));
  }
};

template <>
struct CastScalarFunc<Half, Float8_e5m2fnuz> {
  C10_HOST_DEVICE Float8_e5m2fnuz operator()(Half src_val) const {
    return Float8_e5m2fnuz(c10::detail::fp16_ieee_to_fp32_value(src_val.x));
  }
};


Copilot AI Dec 1, 2025

The four template specializations contain duplicated logic (identical implementation pattern with only the return type differing). Consider extracting this into a helper function template or macro to reduce code duplication and improve maintainability. For example, a helper template could be: template<typename Float8Type> Float8Type half_to_float8(Half src_val) { return Float8Type(c10::detail::fp16_ieee_to_fp32_value(src_val.x)); }

Suggested change
-template <>
-struct CastScalarFunc<Half, Float8_e4m3fn> {
-  C10_HOST_DEVICE Float8_e4m3fn operator()(Half src_val) const {
-    return Float8_e4m3fn(c10::detail::fp16_ieee_to_fp32_value(src_val.x));
-  }
-};
-template <>
-struct CastScalarFunc<Half, Float8_e4m3fnuz> {
-  C10_HOST_DEVICE Float8_e4m3fnuz operator()(Half src_val) const {
-    return Float8_e4m3fnuz(c10::detail::fp16_ieee_to_fp32_value(src_val.x));
-  }
-};
-template <>
-struct CastScalarFunc<Half, Float8_e5m2> {
-  C10_HOST_DEVICE Float8_e5m2 operator()(Half src_val) const {
-    return Float8_e5m2(c10::detail::fp16_ieee_to_fp32_value(src_val.x));
-  }
-};
-template <>
-struct CastScalarFunc<Half, Float8_e5m2fnuz> {
-  C10_HOST_DEVICE Float8_e5m2fnuz operator()(Half src_val) const {
-    return Float8_e5m2fnuz(c10::detail::fp16_ieee_to_fp32_value(src_val.x));
-  }
-};
+// Helper function template for Half to Float8_* conversion
+template <typename Float8Type>
+C10_HOST_DEVICE Float8Type half_to_float8(Half src_val) {
+  return Float8Type(c10::detail::fp16_ieee_to_fp32_value(src_val.x));
+}
+// Partial specialization for CastScalarFunc<Half, Float8Type>
+template <typename Float8Type>
+struct CastScalarFunc<Half, Float8Type> {
+  C10_HOST_DEVICE Float8Type operator()(Half src_val) const {
+    return half_to_float8<Float8Type>(src_val);
+  }
+};
