
Commit 96fdf1f

committed
- cache locality (temporal & spatial locality, average memory access time, cache hit)
- cache miss (cache write miss)
- cpu cache (cache line)
- cache strategy
- cache eviction

1 parent 78d14b6 commit 96fdf1f

28 files changed: +233 −120 lines

content/Algorithm/Recursion/Sorting.md

Lines changed: 1 addition & 1 deletion
@@ -458,7 +458,7 @@ import java.util.*;
 > And we can perform sorting on a subset of the dataset at a time. For example, suppose our dataset is 10GB and we need to load the data into RAM to perform the sorting, but there is only 2GB of RAM. With merge sort, we are able to load in 2GB at a time to perform sorting, and eventually sort the whole dataset.
 
 >[!caution]- Slow on small arrays!
-> The allocation of the different arrays is scattered in the [[Main Memory]]. Merge sort has a **space complexity of $O(n)$** with different temporary arrays at each merge layer. Working on multiple arrays means we sacrifice the performance gain from [[CPU Cache#Cache Locality]].
+> The allocation of the different arrays is scattered in the [[Main Memory]]. Merge sort has a **space complexity of $O(n)$** with different temporary arrays at each merge layer. Working on multiple arrays means we sacrifice the performance gain from [[Cache Locality]].
 >
 > The [[Recursion]] nature of the algorithm comes with extra overhead too. Recursion is also less predictable, thus impacting [[Branch Prediction]] negatively.
 >

content/C/C Function.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ description:
 
 
 ### Function Prototype
-- In [[C]], a function prototype consists of the **return type**, the **name of the function**, and the **list of parameter [[Datatype|datatypes]]** (names of parameters are **optional**)
+- In [[C/C]], a function prototype consists of the **return type**, the **name of the function**, and the **list of parameter [[Datatype|datatypes]]** (names of parameters are **optional**)
 
 >[!important] Placement of function prototype
 > It's good practice to put function prototypes at the top of the program, before the `main()` function. This informs the [[Language Processors#Compiler|compiler]] of the functions your program may use, along with their return types and parameter types.

content/C/C Structure.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ typedef struct {
 > Using the code snippet above, we can access and modify the `balance` using `a_1.balance;`
 
 >[!important] Dereferencing a pointer to a structure and accessing its attributes
-> We need to make sure we use parentheses like `(*player_ptr).name`, because `.` has a higher [[C#C Operator Precedence|operator precedence]].
+> We need to make sure we use parentheses like `(*player_ptr).name`, because `.` has a higher [[C/C#C Operator Precedence|operator precedence]].
 >
 > Or we can simply use `player_ptr->name` to achieve the same; this is syntactic sugar.
 

content/Computer Organisation/Number System/Character Encoding (字符编码).md

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ description:
 - String in [[Go]] is encoded with [[#UTF-8]] and is treated as an [[Array]] of [[Computer Data Representation#Byte]]. This explains why the index is off and `len(myString)` returns $8$, instead of $6$
 
 ![[utf-8_string_go.png|500]]
-- This behaviour applies to other languages like [[C]]: you get $7$ when you run `printf("%d", strlen("😊家"));`, instead of $2$
+- This behaviour applies to other languages like [[C/C]]: you get $7$ when you run `printf("%d", strlen("😊家"));`, instead of $2$
 
 
 >[!tip] Abstract away this weird behavior

content/Data Structure/Array.md

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@ References:
 
 
 >[!success] Cache Hit
-> Elements of an array are stored in [[Main Memory]] in a **compact manner**, thus making great use of [[CPU Cache#Cache Locality]].
+> Elements of an array are stored in [[Main Memory]] in a **compact manner**, thus making great use of [[Cache Locality]].
 
 >[!important]
 > Array has a **fixed size**. If we want to **expand**, we have to **create another bigger array** & **copy all the elements** to the new array, which is very **time-consuming**.

@@ -59,7 +59,7 @@ References:
 - In most programming languages, `my_array[i]` is a **convenient syntax** that [[Abstraction|abstracts]] the process of accessing the element at index `i` in an array. The underlying mechanism usually involves **calculating the [[Memory Address]]** of the element and then **[[Pointer#Pointer Dereference|dereferencing]] that address** to **obtain the value**
 
 ### Array Versus Linked List
-- When iterating over **all elements** in an [[Array]] and a [[Linked List]], the **array** is typically **much faster** if the elements are present in the [[CPU Cache]]. This is because arrays store elements in [[Data Structure#Continuous Memory|contiguous memory]], allowing them to benefit from [[CPU Cache#Cache Locality|cache locality]]. However, when [[CPU Cache#Cache Miss|cache misses]] occur, an **array** may be **slightly slower** than a **linked list**, as it needs to **calculate** the [[Memory Address|memory address]] of each **subsequent element**, whereas the next **element's address** is **directly stored** within each **node of a linked list**
+- When iterating over **all elements** in an [[Array]] and a [[Linked List]], the **array** is typically **much faster** if the elements are present in the [[CPU Cache]]. This is because arrays store elements in [[Data Structure#Continuous Memory|contiguous memory]], allowing them to benefit from [[Cache Locality|cache locality]]. However, when [[Cache Miss|cache misses]] occur, an **array** may be **slightly slower** than a **linked list**, as it needs to **calculate** the [[Memory Address|memory address]] of each **subsequent element**, whereas the next **element's address** is **directly stored** within each **node of a linked list**

content/Data Structure/Linked List.md

Lines changed: 2 additions & 2 deletions
@@ -23,12 +23,12 @@ References:
 
 
 >[!attention] Cache Miss!!!
-> Since the connection between 2 nodes is via a Pointer, the nodes are scattered around the Main Memory. This means we can't make use of [[CPU Cache#Cache Locality]], which results in a very high rate of [[CPU Cache#Cache Miss]].
+> Since the connection between 2 nodes is via a Pointer, the nodes are scattered around the Main Memory. This means we can't make use of [[Cache Locality]], which results in a very high rate of [[Cache Miss]].
 >
 > ![[linked_list_cache_miss.gif]]
 
 >[!caution] Memory Leak
-> For languages like [[C]] which don't come with a [[Garbage Collector]], we need to manually release deleted nodes from the [[Address Space#Heap Segment]] to prevent [[Address Space#Memory leak]].
+> For languages like [[C/C]] which don't come with a [[Garbage Collector]], we need to manually release deleted nodes from the [[Address Space#Heap Segment]] to prevent [[Address Space#Memory leak]].
 
 

content/Data Structure/Stack.md

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ References:
 Both support Stack Operations without much difference
 
 **Time Efficiency**
-- [[Array]] has [[CPU Cache#Cache Locality]] to take advantage of the [[CPU Cache]] for extremely fast access. However, an array has a fixed size. If there isn't any space left in the array, a new insertion needs to create a new array and transfer all elements to that new array, and the time complexity will be $O(n)$
+- [[Array]] has [[Cache Locality]] to take advantage of the [[CPU Cache]] for extremely fast access. However, an array has a fixed size. If there isn't any space left in the array, a new insertion needs to create a new array and transfer all elements to that new array, and the time complexity will be $O(n)$
 - [[Linked List]] has to use extra time to perform pointer operations
 - Conclusion: Array has slightly better time efficiency, since expansion is a low-frequency operation, while a pointer operation occurs with every insertion. However, [[Linked List]] has more stable performance

content/Data Structure/Tree/Binary Tree.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ description:
 ## Binary Tree Linked List Representation
 ---
 ![[binary_tree_linked_list.png|400]]
-- Compared to the [[#Binary Tree Array Representation|array representation]], the linked list representation of a binary tree is **easier to scale in terms of size**, but it comes with **higher memory usage** due to the overhead of storing pointers for each node. Additionally, it cannot take advantage of [[CPU Cache#Cache Locality|CPU cache locality]] as effectively as the array representation
+- Compared to the [[#Binary Tree Array Representation|array representation]], the linked list representation of a binary tree is **easier to scale in terms of size**, but it comes with **higher memory usage** due to the overhead of storing pointers for each node. Additionally, it cannot take advantage of [[Cache Locality|CPU cache locality]] as effectively as the array representation
 ## Degenerate Binary Tree
 ---
 ![[bst_to_skewed_tree.png|400]]

content/NUS/CS2100 Computer Organisation.md

Lines changed: 9 additions & 2 deletions
@@ -8,7 +8,7 @@ tags:
 - computer_organisation
 - boolean_algebra
 Creation Date: 2024-02-12, 18:18
-Last Date: 2024-10-29T23:38:17+08:00
+Last Date: 2024-11-06T17:19:42+08:00
 References:
 draft:
 description: Find notes and cheat sheets for NUS CS2100 on this website. Get help preparing for your final exam and answers to your questions.

@@ -190,4 +190,11 @@ title: cs2100 nus notes
 >[!seealso] Interesting Related Topics
 > - [[CPU]]
 > - [[GPU]]
-> - [[Specialised Processor]]
+> - [[Specialised Processor]]
+
+## Week 12
+---
+- [ ] [[CPU Cache]]
+- [ ] [[Cache Locality]]
+- [ ] [[Cache Miss]]
+- [ ] [[Cache Strategy]]

content/OOP/Generics.md

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ References:
 > Because generic types are the same regardless of their parameterised types. That means we can swap the same class with different parameterised types, which can result in a mismatched assignment, like assigning an integer returned from the generic to a string. This would work if the parameterised type were string.
 
 >[!caution] Bad for Caching
-> As you can see for the array, the data is actually scattered around the memory. This means we are unable to take advantage of [[CPU Cache#Cache Locality]], and each element indexing takes [[Latency Number|10-100 times longer]]! This is why Java is slower than systems languages! Look at all the jumping around for the [[Linked List]] below.
+> As you can see for the array, the data is actually scattered around the memory. This means we are unable to take advantage of [[Cache Locality]], and each element indexing takes [[Latency Number|10-100 times longer]]! This is why Java is slower than systems languages! Look at all the jumping around for the [[Linked List]] below.
 >
 > ![[java_generics_memory_linked_list.gif]]
