Merged
49 commits
1709e15
fix compatibility mi_theap_calloc macro
daanx Apr 20, 2026
4bb24ca
clear committed bits on commit failure
daanx Apr 20, 2026
d716fcf
check for NULL in guarded pointer setup
daanx Apr 20, 2026
386971f
check for NULL on guarded aligned allocation
daanx Apr 20, 2026
c92b468
fix clearing the weak field of the random context in chacha
daanx Apr 20, 2026
e863098
use atomic variable for the deferred free function
daanx Apr 20, 2026
c6e8d11
use atomic once initialization of auto_thread_done
daanx Apr 20, 2026
d0cf283
check for NULL subproc in subproc_delete
daanx Apr 20, 2026
291ad74
check for overflow _mi_os_alloc_aligned_at_offset
daanx Apr 20, 2026
be2ddd0
Merge branch 'dev' into dev2
daanx Apr 20, 2026
e14c522
propagate weak field on random context split
daanx Apr 20, 2026
fe3e26c
Merge branch 'dev' into dev2
daanx Apr 20, 2026
b642693
Emscripten: add missing include for `getentropy()`
kleisauke Apr 21, 2026
f45a090
Merge pull request #1280 from kleisauke/dev-emscripten-missing-include
daanx Apr 21, 2026
50780e7
add stale labeling workflow
daanx Apr 21, 2026
dc3572c
rename stale workflow
daanx Apr 21, 2026
5021a39
add push trigger to activate stale workflow once
daanx Apr 21, 2026
57cdd0d
remove push trigger from stale workflow
daanx Apr 21, 2026
e240131
clarify use of deferred_free (issue #1271, issue 3.6)
daanx Apr 22, 2026
f1b98b2
let unix_madvise always return an error code (issue #1271, issues 3.8)
daanx Apr 22, 2026
52d5661
fix comparison in mi_os_alloc_aligned_at_offset to allow decommitting…
daanx Apr 22, 2026
3ef9c77
use _mi_is_aligned instead of modulo (issue #1271, issues 3.11)
daanx Apr 22, 2026
58b36a3
on emscripten backend, delete the tls key on shutdown (issue #1271, i…
daanx Apr 22, 2026
68b7a80
use _mi_is_aligned instead of %
daanx Apr 23, 2026
7865180
use lock for initial output buffer (issue #1271, issue 3.12)
daanx Apr 24, 2026
f437fb8
change out_default to be atomic
daanx Apr 24, 2026
e49fb94
fix page used count in heap visitor to match all used blocks in a pag…
daanx Apr 24, 2026
ed6fe72
add assertions that the bottom 2 bits of the threadid are zero'd
daanx Apr 24, 2026
e146ce0
more accurate memory accounting for aligned os memory (issue #1271, i…
daanx Apr 24, 2026
eeb3ff6
Merge branch 'dev' into dev2
daanx Apr 24, 2026
60a1f3b
update MSVC C atomics wrapper to implement loads as readonly and use …
daanx Apr 24, 2026
5540daa
Merge branch 'dev' into dev2
daanx Apr 24, 2026
8c8eb3c
only count decommit if needs_recommit is true (issue #1281)
daanx Apr 27, 2026
1287ce7
Merge branch 'dev' into dev2
daanx Apr 27, 2026
cf6ba6b
fix unused variable warning (issue #1279)
daanx Apr 27, 2026
1dbd470
Merge branch 'dev' into dev2
daanx Apr 27, 2026
50a711f
align on large OS page boundary for larger allocations
daanx Apr 27, 2026
3b7c8fb
also use eager arena commit if large OS pages are allowed
daanx Apr 27, 2026
e701d0a
Merge branch 'dev' of https://github.com/microsoft/mimalloc into dev
daanx Apr 27, 2026
688dbaf
remove unneeded try_alignment adjustment as that is done in os.c now
daanx Apr 27, 2026
5c667af
Merge branch 'dev' into dev2
daanx Apr 27, 2026
5dfa174
use mi_segment_is_abandoned instead of direct check
daanx Apr 29, 2026
2aee53b
add guard in _mi_page_ptr_unalign to prevent division by zero
daanx Apr 29, 2026
c9b9c8c
always perform a cookie check when using _mi_segment_of
daanx Apr 29, 2026
e610a09
add SpecBot invariant checks
daanx Apr 1, 2026
8046d48
check segment cookie already at security level 3 (versus 4) as docume…
daanx Apr 29, 2026
24ef7bd
bump version to v1.9.10
daanx Apr 29, 2026
02a2f5d
bump version to v2.3.2
daanx Apr 29, 2026
da23e1a
Merge branch 'dev2' into dev-main
daanx Apr 29, 2026
4 changes: 2 additions & 2 deletions .github/workflows/release.yaml
@@ -9,7 +9,7 @@ permissions:
contents: write

env:
RELEASE: Release v3.3.1
RELEASE: Release v3.3.2
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: true

name: Release
@@ -19,7 +19,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
branch: [v1.9.9,v2.3.1,v3.3.1] # [dev,dev2,dev3]
branch: [v1.9.10,v2.3.2,v3.3.2] # [dev,dev2,dev3]
# we build on the oldest ubuntu version for better binary compatibility.
os: [windows-latest, macOS-latest, macos-15-intel, ubuntu-22.04, ubuntu-22.04-arm]

27 changes: 27 additions & 0 deletions .github/workflows/stale.yaml
@@ -0,0 +1,27 @@
on:
workflow_dispatch: # allow running the workflow manually
schedule:
- cron: "15 21 * * *" # minute, hour, day (1-31), month (1-12), day of the week (0 - 6 or SUN-SAT)

name: Close inactive issues
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v10
with:
days-before-issue-stale: 360
days-before-issue-close: 14
stale-issue-label: "stale"
stale-issue-message: "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs in the next 14 days. Thank you for your contributions!"
close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale. Please feel free to reopen if this is still an active issue."
days-before-pr-stale: -1
days-before-pr-close: -1
stale-pr-label: "stale"
stale-pr-message: "This PR has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs in the next 14 days. Thank you for your contributions!"
close-pr-message: "This PR was closed because it has been inactive for 14 days since being marked as stale. Please feel free to reopen if you think this PR should still be considered. Thank you again for your help."
operations-per-run: 32
repo-token: ${{ secrets.GITHUB_TOKEN }}
2 changes: 1 addition & 1 deletion cmake/mimalloc-config-version.cmake
@@ -1,6 +1,6 @@
set(mi_version_major 2)
set(mi_version_minor 3)
set(mi_version_patch 1)
set(mi_version_patch 2)
set(mi_version ${mi_version_major}.${mi_version_minor})

set(PACKAGE_VERSION ${mi_version})
2 changes: 1 addition & 1 deletion contrib/vcpkg/portfile.cmake
@@ -3,7 +3,7 @@ vcpkg_from_github(
REPO microsoft/mimalloc
HEAD_REF master

# The "REF" can be a commit hash, branch name (dev3), or a version (v3.3.1).
# The "REF" can be a commit hash, branch name (dev3), or a version (v3.3.2).
REF "v${VERSION}"
# REF e2db21e9ba9fb9172b7b0aa0fe9b8742525e8774

2 changes: 1 addition & 1 deletion contrib/vcpkg/vcpkg.json
@@ -1,6 +1,6 @@
{
"name": "mimalloc",
"version": "3.3.0",
"version": "3.3.2",
"port-version": 0,
"description": "Compact general purpose allocator with excellent performance",
"homepage": "https://github.com/microsoft/mimalloc",
2 changes: 1 addition & 1 deletion doc/release-notes.md
@@ -10,6 +10,6 @@ Notes:
- Generally it is recommended to download sources (or use `vcpkg` etc.) and build mimalloc as
part of your project.
- Source releases can also be downloaded directly from github by the tag.
For example <https://github.com/microsoft/mimalloc/archive/v3.3.0.tar.gz>.
For example <https://github.com/microsoft/mimalloc/archive/v3.3.2.tar.gz>.
- Binary releases include a release-, debug-, and secure build.
- Linux binaries are built on Ubuntu 22.
4 changes: 2 additions & 2 deletions include/mimalloc.h
@@ -8,7 +8,7 @@ terms of the MIT license. A copy of the license can be found in the file
#ifndef MIMALLOC_H
#define MIMALLOC_H

#define MI_MALLOC_VERSION 20301 // major + 2 digits minor + 2 digits patch
#define MI_MALLOC_VERSION 20302 // major + 2 digits minor + 2 digits patch

// ------------------------------------------------------
// Compiler specific attributes
@@ -380,7 +380,7 @@ typedef mi_heap_t mi_theap_t;
#define mi_theap_collect(hp,force) mi_heap_collect(hp,force)
#define mi_theap_malloc(hp,sz) mi_heap_malloc(hp,sz)
#define mi_theap_zalloc(hp,sz) mi_heap_zalloc(hp,sz)
#define mi_theap_calloc(hp,cnt,sz) mi_heap_malloc(hp,cnt,sz)
#define mi_theap_calloc(hp,cnt,sz) mi_heap_calloc(hp,cnt,sz)
#define mi_theap_malloc_small(hp,sz) mi_heap_malloc_small(hp,sz)
#define mi_theap_malloc_aligned(hp,sz,a) mi_heap_malloc_aligned(hp,sz,a)
#define mi_theap_realloc(hp,p,newsz) mi_heap_realloc(hp,p,newsz)
157 changes: 105 additions & 52 deletions include/mimalloc/atomic.h
@@ -155,15 +155,17 @@ static inline void mi_atomic_maxi64_relaxed(volatile int64_t* p, int64_t x) {
#elif defined(_MSC_VER)

// Deprecated: MSVC plain C compilation wrapper that uses Interlocked operations to model C11 atomics.
// It is recommended to always compile as C++ when using MSVC
// It is recommended to always compile as C++ when using MSVC.

#include <intrin.h>
#ifdef _WIN64
typedef LONG64 msc_intptr_t;
#define MI_64(f) f##64
typedef LONG64 msc_intptr_t;
#define MI_MSC_64(f) f##64
#define MI_MSC_XX(f) f##64
#else
typedef LONG msc_intptr_t;
#define MI_64(f) f
typedef LONG msc_intptr_t;
#define MI_MSC_64(f) f
#define MI_MSC_XX(f) f##32
#endif

typedef enum mi_memory_order_e {
@@ -177,23 +179,23 @@ typedef enum mi_memory_order_e {

static inline uintptr_t mi_atomic_fetch_add_explicit(_Atomic(uintptr_t)*p, uintptr_t add, mi_memory_order mo) {
(void)(mo);
return (uintptr_t)MI_64(_InterlockedExchangeAdd)((volatile msc_intptr_t*)p, (msc_intptr_t)add);
return (uintptr_t)MI_MSC_64(_InterlockedExchangeAdd)((volatile msc_intptr_t*)p, (msc_intptr_t)add);
}
static inline uintptr_t mi_atomic_fetch_sub_explicit(_Atomic(uintptr_t)*p, uintptr_t sub, mi_memory_order mo) {
(void)(mo);
return (uintptr_t)MI_64(_InterlockedExchangeAdd)((volatile msc_intptr_t*)p, -((msc_intptr_t)sub));
return (uintptr_t)MI_MSC_64(_InterlockedExchangeAdd)((volatile msc_intptr_t*)p, -((msc_intptr_t)sub));
}
static inline uintptr_t mi_atomic_fetch_and_explicit(_Atomic(uintptr_t)*p, uintptr_t x, mi_memory_order mo) {
(void)(mo);
return (uintptr_t)MI_64(_InterlockedAnd)((volatile msc_intptr_t*)p, (msc_intptr_t)x);
return (uintptr_t)MI_MSC_64(_InterlockedAnd)((volatile msc_intptr_t*)p, (msc_intptr_t)x);
}
static inline uintptr_t mi_atomic_fetch_or_explicit(_Atomic(uintptr_t)*p, uintptr_t x, mi_memory_order mo) {
(void)(mo);
return (uintptr_t)MI_64(_InterlockedOr)((volatile msc_intptr_t*)p, (msc_intptr_t)x);
return (uintptr_t)MI_MSC_64(_InterlockedOr)((volatile msc_intptr_t*)p, (msc_intptr_t)x);
}
static inline bool mi_atomic_compare_exchange_strong_explicit(_Atomic(uintptr_t)*p, uintptr_t* expected, uintptr_t desired, mi_memory_order mo1, mi_memory_order mo2) {
(void)(mo1); (void)(mo2);
uintptr_t read = (uintptr_t)MI_64(_InterlockedCompareExchange)((volatile msc_intptr_t*)p, (msc_intptr_t)desired, (msc_intptr_t)(*expected));
const uintptr_t read = (uintptr_t)MI_MSC_64(_InterlockedCompareExchange)((volatile msc_intptr_t*)p, (msc_intptr_t)desired, (msc_intptr_t)(*expected));
if (read == *expected) {
return true;
}
@@ -207,68 +209,119 @@ static inline bool mi_atomic_compare_exchange_weak_explicit(_Atomic(uintptr_t)*p
}
static inline uintptr_t mi_atomic_exchange_explicit(_Atomic(uintptr_t)*p, uintptr_t exchange, mi_memory_order mo) {
(void)(mo);
return (uintptr_t)MI_64(_InterlockedExchange)((volatile msc_intptr_t*)p, (msc_intptr_t)exchange);
return (uintptr_t)MI_MSC_64(_InterlockedExchange)((volatile msc_intptr_t*)p, (msc_intptr_t)exchange);
}
static inline void mi_atomic_thread_fence(mi_memory_order mo) {
(void)(mo);
_Atomic(uintptr_t) x = 0;
mi_atomic_exchange_explicit(&x, 1, mo);
}

static inline uintptr_t mi_atomic_load_explicit(_Atomic(uintptr_t) const* p, mi_memory_order mo) {
(void)(mo);
#if defined(_M_IX86) || defined(_M_X64)
return *p;
#else
uintptr_t x = *p;
if (mo > mi_memory_order_relaxed) {
while (!mi_atomic_compare_exchange_weak_explicit((_Atomic(uintptr_t)*)p, &x, x, mo, mi_memory_order_relaxed)) { /* nothing */ };
}
return x;
#endif
// assert(mo<=mi_memory_order_acquire); // others are not used by mimalloc
#if defined(_M_IX86) || defined(_M_X64)
return (uintptr_t)MI_MSC_XX(__iso_volatile_load)((volatile const intptr_t*)p);
#elif defined(_M_ARM) || defined(_M_ARM64)
if (mo == mi_memory_order_relaxed) {
return (uintptr_t)MI_MSC_XX(__iso_volatile_load)((volatile const intptr_t*)p);
}
else if (mo <= mi_memory_order_acquire) {
return MI_MSC_XX(__ldar)((volatile const uintptr_t*)p);
}
else {
const uintptr_t u = (uintptr_t)MI_MSC_XX(__iso_volatile_load)((volatile const intptr_t*)p);
__dmb(15); // _ARM(64)_BARRIER_SY
return u;
}
#else
#warning "define mi_atomic_load_explicit for MSVC C compilation on this platform (which should be readonly, see issue #1277)"
return MI_MSC_XX(__iso_volatile_load)((volatile const intptr_t*)p);
#endif
}
static inline void mi_atomic_store_explicit(_Atomic(uintptr_t)*p, uintptr_t x, mi_memory_order mo) {
(void)(mo);
#if defined(_M_IX86) || defined(_M_X64)
*p = x;
#else
mi_atomic_exchange_explicit(p, x, mo);
#endif
// assert(mo<=mi_memory_order_release); // others are not used by mimalloc
#if defined(_M_IX86) || defined(_M_X64)
MI_MSC_XX(__iso_volatile_store)((volatile intptr_t*)p, x);
#elif defined(_M_ARM) || defined(_M_ARM64)
if (mo == mi_memory_order_relaxed) {
MI_MSC_XX(__iso_volatile_store)((volatile intptr_t*)p, x);
}
else if (mo <= mi_memory_order_release) {
MI_MSC_XX(__stlr)((volatile uintptr_t*)p,x);
}
else {
mi_atomic_exchange_explicit(p, x, mo);
}
#else
mi_atomic_exchange_explicit(p, x, mo);
#endif
}

static inline int64_t mi_atomic_loadi64_explicit(_Atomic(int64_t)*p, mi_memory_order mo) {
(void)(mo);
#if defined(_M_X64)
return *p;
#else
int64_t old = *p;
int64_t x = old;
while ((old = InterlockedCompareExchange64(p, x, old)) != x) {
x = old;
}
return x;
#endif
// assert(mo<=mi_memory_order_acquire); // others are not used by mimalloc
#if defined(_M_IX86) || defined(_M_X64)
return __iso_volatile_load64((volatile const int64_t*)p);
#elif defined(_M_ARM) || defined(_M_ARM64)
if (mo == mi_memory_order_relaxed) {
return __iso_volatile_load64((volatile const int64_t*)p);
}
#if defined(_M_ARM64)
else if (mo <= mi_memory_order_acquire) {
return __ldar64((volatile const uintptr_t*)p);
}
#endif
else {
const int64_t i = __iso_volatile_load64((volatile const int64_t*)p);
__dmb(15); // _ARM(64)_BARRIER_SY
return i;
}
#else
#warning "define mi_atomic_loadi64_explicit for MSVC C compilation on this platform (which should be readonly, see issue #1277)"
return __iso_volatile_load64((volatile const int64_t*)p);
#endif
}

static inline void mi_atomic_storei64_explicit(_Atomic(int64_t)*p, int64_t x, mi_memory_order mo) {
(void)(mo);
#if defined(_M_X64)
*p = x;
#else
InterlockedExchange64(p, x);
#endif
// assert(mo<=mi_memory_order_release); // others are not used by mimalloc
#if defined(_M_IX86) || defined(_M_X64)
__iso_volatile_store64((volatile int64_t*)p,x);
#elif defined(_M_ARM) || defined(_M_ARM64)
if (mo == mi_memory_order_relaxed) {
__iso_volatile_store64((volatile int64_t*)p,x);
}
#if defined(_M_ARM64)
else if (mo == mi_memory_order_release) {
__stlr64((volatile uint64_t*)p, (uint64_t)x);
}
#endif
else {
InterlockedExchange64(p, x);
}
#else
InterlockedExchange64(p, x);
#endif
}

// These are used by the statistics
static inline int64_t mi_atomic_addi64_relaxed(volatile _Atomic(int64_t)*p, int64_t add) {
#ifdef _WIN64
return (int64_t)mi_atomic_addi((int64_t*)p, add);
#else
int64_t current;
int64_t sum;
do {
current = *p;
sum = current + add;
} while (_InterlockedCompareExchange64(p, sum, current) != current);
return current;
#endif
#ifdef _WIN64
return (int64_t)mi_atomic_addi((int64_t*)p, add);
#elif defined(_M_ARM)
return _InterlockedExchangeAdd64(p, add);
#else
// x86
int64_t current;
int64_t sum;
do {
current = __iso_volatile_load64((volatile const int64_t*)p);
sum = current + add;
} while (_InterlockedCompareExchange64(p, sum, current) != current);
return current;
#endif
}
static inline void mi_atomic_void_addi64_relaxed(volatile int64_t* p, const volatile int64_t* padd) {
const int64_t add = *padd;
@@ -289,7 +342,7 @@ static inline void mi_atomic_addi64_acq_rel(volatile _Atomic(int64_t*)p, int64_t
}

static inline bool mi_atomic_casi64_strong_acq_rel(volatile _Atomic(int64_t*)p, int64_t* exp, int64_t des) {
int64_t read = _InterlockedCompareExchange64(p, des, *exp);
const int64_t read = _InterlockedCompareExchange64(p, des, *exp);
if (read == *exp) {
return true;
}
5 changes: 2 additions & 3 deletions include/mimalloc/internal.h
@@ -352,7 +352,7 @@ mi_decl_noreturn mi_decl_cold void _mi_assert_fail(const char* assertion, const
Inlined definitions
----------------------------------------------------------- */
#define MI_UNUSED(x) (void)(x)
#if (MI_DEBUG>0)
#if (MI_DEBUG>1)
#define MI_UNUSED_RELEASE(x)
#else
#define MI_UNUSED_RELEASE(x) MI_UNUSED(x)
@@ -378,8 +378,7 @@ static inline bool _mi_is_power_of_two(uintptr_t x) {

// Is a pointer aligned?
static inline bool _mi_is_aligned(void* p, size_t alignment) {
mi_assert_internal(alignment != 0);
return (((uintptr_t)p % alignment) == 0);
return (alignment==0 || ((uintptr_t)p % alignment) == 0);
}

// Align upwards
4 changes: 3 additions & 1 deletion include/mimalloc/prim.h
@@ -282,7 +282,9 @@ static inline mi_threadid_t _mi_prim_thread_id(void) mi_attr_noexcept;
#if defined(MI_PRIM_THREAD_ID)

static inline mi_threadid_t _mi_prim_thread_id(void) mi_attr_noexcept {
return MI_PRIM_THREAD_ID(); // used for example by CPython for a free threaded build (see python/cpython#115488)
const mi_threadid_t tid = MI_PRIM_THREAD_ID(); // used for example by CPython for a free threaded build (see python/cpython#115488)
mi_assert_internal( (tid & 0x03) == 0 ); // mimalloc reserves the bottom 2 bits
return tid;
}

#elif defined(_WIN32)
11 changes: 8 additions & 3 deletions readme.md
@@ -15,9 +15,9 @@ is a general purpose allocator with excellent [performance](#performance) charac
Initially developed by Daan Leijen for the runtime systems of the
[Koka](https://koka-lang.github.io) and [Lean](https://github.com/leanprover/lean) languages.

Latest release : `v3.3.1` (2026-04-20) recommended.
Latest v2 release: `v2.3.1` (2026-04-20) stable.
Latest v1 release: `v1.9.9` (2026-04-20) legacy.
Latest release : `v3.3.2` (2026-04-29) recommended.
Latest v2 release: `v2.3.2` (2026-04-29) stable.
Latest v1 release: `v1.9.10` (2026-04-29) legacy.

mimalloc is a drop-in replacement for `malloc` and can be used in other programs
without code changes, for example, on dynamically linked ELF-based systems (Linux, BSD, etc.) you can use it as:
@@ -88,6 +88,11 @@ New development is mostly on v3, while v1 and v2 are maintained with security an
- __v1__: legacy version: initial design of mimalloc (release tags: `v1.9.x`, development branch `dev`). Send PR's against this version if possible.

### Releases
* 2026-04-29, `v1.9.10`, `v2.3.2`, `v3.3.2`: various bug and security fixes through LLM audit (by @Zoxc).
Only increase minimal purge size automatically if allow_thp is set to 2. Enable large OS alignment
on all platforms (fixing OS large pages on Windows). Fix accounting of committed memory on Linux/macOS.
Update MSVC atomics implementation when using C mode. Upstream Emscripten fixes. Proper atomic do-once
implementation.
* 2026-04-20, `v1.9.9`, `v2.3.1`, `v3.3.1`: various bug and security fixes. Special thanks to
@jinpzhanAMD, @res2k, and @GoldJohnKing for their help in improving Windows finalization, and
@Zoxc for his help in finding various issues.
5 changes: 3 additions & 2 deletions src/alloc-aligned.c
@@ -33,8 +33,9 @@ static mi_decl_noinline mi_decl_restrict void* mi_heap_malloc_guarded_aligned(mi
return NULL;
}
const size_t oversize = size + alignment - 1;
void* base = _mi_heap_malloc_guarded(heap, oversize, zero);
void* p = _mi_align_up_ptr(base, alignment);
void* const base = _mi_heap_malloc_guarded(heap, oversize, zero);
if (base==NULL) return NULL;
void* const p = _mi_align_up_ptr(base, alignment);
mi_track_align(base, p, (uint8_t*)p - (uint8_t*)base, size);
mi_assert_internal(mi_usable_size(p) >= size);
mi_assert_internal(_mi_is_aligned(p, alignment));