Bio-inspired Metaheuristics for the Multidimensional Knapsack Problem

Research internship project - Université d'Angers
Implementation and benchmarking of bio-inspired metaheuristics on the Multidimensional Knapsack Problem (MKP), with a focus on identifying what actually drives performance differences between algorithms.


Problem

The Multidimensional Knapsack Problem (MKP) is an NP-hard combinatorial optimization problem in which items must be selected to maximize a profit function subject to multiple resource constraints.

Formally: maximize ∑ⱼ cⱼxⱼ subject to ∑ⱼ aᵢⱼxⱼ ≤ bᵢ for every constraint i, with xⱼ ∈ {0, 1}.
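In code, the objective and feasibility check are direct translations of this formulation. The tiny instance below is made up for illustration and is not one of the benchmark instances:

```python
import numpy as np

# Hypothetical toy instance: 4 items, 2 resource constraints.
c = np.array([10, 13, 7, 5])             # profits c_j
A = np.array([[2, 3, 1, 2],              # consumptions a_ij
              [4, 1, 3, 2]])
b = np.array([5, 6])                     # capacities b_i

def objective(x):
    """Total profit of selection x, with x_j in {0, 1}."""
    return int(c @ x)

def is_feasible(x):
    """True iff every constraint sum_j a_ij * x_j <= b_i holds."""
    return bool(np.all(A @ x <= b))

x = np.array([1, 1, 0, 0])
print(objective(x), is_feasible(x))      # prints "23 True"
```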


Motivation

A growing number of bio-inspired algorithms are published each year, each claiming superiority through a novel nature-inspired metaphor, often at the cost of added conceptual complexity. This project investigates whether the reported performance differences stem from the metaphor itself or from shared underlying mechanisms, in particular the pseudo-utility repair operator.


Algorithms Implemented

| Algorithm | Type | Reference |
|---|---|---|
| bGWO — Binary Grey Wolf Optimizer | Swarm intelligence | Luo & Zhao |
| SLMS — Self-Learning Moth Search | Swarm intelligence | Feng & Wang |
| PSO — Particle Swarm Optimization | Swarm intelligence | Kennedy & Eberhart |
| ES — Evolution Strategy | Evolutionary | Rechenberg |
| Population algorithms | Custom variants | This work |

Papers implemented:

  • A binary grey wolf optimizer for the multidimensional knapsack problem - Kaiping Luo, Qiuhong Zhao
  • A binary moth search algorithm based on self-learning for multidimensional knapsack problems - Yanhong Feng, Gai-Ge Wang

Key Results

All experiments run over 30 random seeds on standard MKP benchmark instances (GK and sac94 datasets).

GWO - Validation against paper results

| Instance | Size | Best known | Paper (bGWO) | Reimplementation |
|---|---|---|---|---|
| gk1 | 15×100 | 3766 | 3766 | 3762 |
| gk2 | 25×100 | 3958 | 3948 | 3955 |
| gk3 | 25×150 | 5656 | 5647 | 5647 |
| gk5 | 25×200 | 7560 | 7552 | 7555 |
| gk7 | 25×500 | 19220 | 19213 | 19213 |

The reimplementation closely matches the original paper's results, validating the implementation.

Algorithm comparison on GK benchmark (mean over 30 seeds)

| Instance | Best known | PSO | PSO + repair | GWO | SLMS |
|---|---|---|---|---|---|
| gk1 | 3766 | 3601 | 3757 | 3757 | 3713 |
| gk3 | 5656 | 5412 | 5636 | 5634 | 5547 |
| gk5 | 7560 | 7219 | 7552 | 7552 | 7378 |
| gk7 | 19220 | 18418 | 19206 | 19208 | 19096 |

Main finding - The repair operator is the key variable

A standard PSO without the pseudo-utility repair mechanism performs significantly worse than GWO.
Adding GWO's repair mechanism to PSO closes the performance gap almost entirely.
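A minimal sketch of one common pseudo-utility drop/add repair (in the style of Chu & Beasley). The surrogate weights 1/bᵢ are an assumption here; the weights used in the implemented papers may differ:

```python
import numpy as np

def repair(x, c, A, b):
    """Make x feasible: drop items of low pseudo-utility, then greedily
    re-add items of high pseudo-utility that still fit."""
    x = x.copy()
    # Pseudo-utility u_j = c_j / sum_i (a_ij / b_i), assuming weights 1/b_i.
    u = c / (A / b[:, None]).sum(axis=0)
    order = np.argsort(u)                 # ascending pseudo-utility
    # DROP phase: remove worst items until all constraints hold.
    for j in order:
        if np.all(A @ x <= b):
            break
        if x[j]:
            x[j] = 0
    # ADD phase: re-insert best items if they do not violate any constraint.
    for j in order[::-1]:
        if not x[j]:
            x[j] = 1
            if np.any(A @ x > b):
                x[j] = 0                  # undo: item does not fit
    return x

# Hypothetical toy instance, infeasible starting point (all items selected).
c = np.array([10, 13, 7, 5])
A = np.array([[2, 3, 1, 2], [4, 1, 3, 2]])
b = np.array([5, 6])
repaired = repair(np.array([1, 1, 1, 1]), c, A, b)
print(repaired)                           # a feasible selection
```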

ANOVA (α = 0.05) between GWO and PSO with repair:

  • GK1: p = 0.226 → no significant difference
  • GK6: p = 0.052 → no significant difference
  • GK2: p < 0.001 → significant difference (GWO slightly better)
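This kind of test can be reproduced with `scipy.stats.f_oneway`. The arrays below are hypothetical stand-ins for the 30 per-seed best fitnesses of each algorithm, not the project's actual data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical per-seed results (30 seeds each); replace with real run data.
gwo_runs = rng.normal(loc=3757, scale=5, size=30)
pso_repair_runs = rng.normal(loc=3757, scale=5, size=30)

stat, p = f_oneway(gwo_runs, pso_repair_runs)
# Decision rule at alpha = 0.05: p < 0.05 -> significant difference.
print(f"F = {stat:.3f}, p = {p:.3f}")
```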

Conclusion: The nature-inspired metaphor contributes little to performance. The pseudo-utility repair operator is the dominant factor driving results across algorithms.

Evolution Strategy results

ES with bitflip mutation and uniform crossover underperforms GWO/SLMS regardless of population size (10–25 children). Adding the repair mechanism improves results but does not close the gap with GWO/SLMS.
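The two variation operators named above can be sketched as follows. The per-bit mutation rate 1/n is a common default, assumed here rather than taken from the project's configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

def bitflip(x, p=None):
    """Flip each bit independently with probability p (default 1/n)."""
    p = 1.0 / len(x) if p is None else p
    mask = rng.random(len(x)) < p
    return np.where(mask, 1 - x, x)

def uniform_crossover(a, b):
    """Inherit each gene from either parent with probability 1/2."""
    mask = rng.random(len(a)) < 0.5
    return np.where(mask, a, b)

p1 = rng.integers(0, 2, 10)
p2 = rng.integers(0, 2, 10)
child = bitflip(uniform_crossover(p1, p2))
```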


Project Structure

```
stage/
├── instances/            # Benchmark instances (GK, chubeas, sac94 datasets + format.txt)
├── models/
│   ├── mkp.py            # MKP problem model, fitness function, pseudo-utility
│   ├── gwo.py            # Binary Grey Wolf Optimizer
│   ├── slms.py           # Self-Learning Moth Search
│   ├── pso_base.py       # Particle Swarm Optimization
│   ├── pso_2.py          # PSO with alternative position/velocity update formulas
│   ├── stratevo.py       # Evolution Strategy
│   ├── abc.py            # Discrete improved artificial bee colony (not fully implemented)
│   └── aio.py            # Custom variant of a population algorithm
├── notebooks/            # Jupyter notebooks used for early prototyping
├── main.py               # Run experiments across algorithms and instances
├── requirements.txt
├── seeds.csv             # 30 seeds used across all tests
├── Todolist.md
└── README.md
```

Usage

```
pip install -r requirements.txt
python main.py
```

Results are output as raw comparison tables across algorithms and instances, using fixed random seeds for reproducibility.


Dependencies

  • numpy
  • scipy

References

  • Luo, K., & Zhao, Q. — A binary grey wolf optimizer for the multidimensional knapsack problem
  • Feng, Y., & Wang, G. G. — A binary moth search algorithm based on self-learning for multidimensional knapsack problems
  • Kennedy, J., & Eberhart, R. — Particle swarm optimization

About

Project carried out during my research internship at LERIA as part of my master's degree.
