---
title: Developing
layout: default
nav_order: 3
---
# Developing

There are multiple ways of running FleetOptimiser in development mode.
You can choose to run none, some, or all of the services as containers.

Below is an explanation of how to run the services without containerisation.

## Prerequisites
- [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
- [python ^3.10](https://www.python.org/downloads/) & pip
- [redis](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/)
- [rabbitmq](https://www.rabbitmq.com/docs/download)
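
A quick way to confirm the prerequisites are available locally (a sketch; exact versions and output differ per installation, and the redis/rabbitmq checks only apply if they are installed natively rather than run in docker):

```
node --version && npm --version
python3 --version && pip --version
redis-server --version      # skip if you run redis in docker instead
rabbitmqctl version         # skip if you run rabbitmq in docker instead
```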

### Frontend
Go to `fleetoptimiser-frontend` and run
```
/fleetoptimiser-frontend >> npm install
```

to install all the necessary packages for running the FleetOptimiser frontend.

Once installed, you can run the frontend in dev mode with the following command:

```
/fleetoptimiser-frontend >> npm run dev

> fleetoptimiser-frontend@0.1.0 dev
> next dev

  ▲ Next.js 14.2.14
  - Local:        http://localhost:3000
  - Experiments (use with caution):
    · esmExternals

 ✓ Starting...
 ✓ Ready in 5.2s
```

This sets `NODE_ENV` to `development`, which means no user authentication is required.

The frontend can now be accessed at localhost:3000.

### Backend API
It is recommended to run the backend and workers from a virtual environment so that all packages required by FleetOptimiser
are located in a single environment. Hence, we refer to `virtualenv` whenever Python-related code is executed.

Create venv and install necessary packages:
```
/OS2fleetoptimiser >> /path/to/python3.10 -m venv virtualenv
/OS2fleetoptimiser >> source virtualenv/bin/activate
/OS2fleetoptimiser >> pip install poetry==1.3.1
/OS2fleetoptimiser >> poetry config virtualenvs.create false
/OS2fleetoptimiser >> poetry install

Installing the current project: fleetmanager (1.0.0)
```
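
With the environment still activated, a quick sanity check that the project package resolves (a sketch; the expected output is based on the package name printed above):

```
/OS2fleetoptimiser >> python -c "import fleetmanager; print(fleetmanager.__name__)"
fleetmanager
```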

Now we have successfully installed the necessary packages and modules for FleetOptimiser.
To run the backend API, run the following command:

```
/OS2fleetoptimiser >> uvicorn fleetmanager:api.app --port 3001 --host 0.0.0.0 --proxy-headers --root-path /api --workers 2
INFO:     Uvicorn running on http://0.0.0.0:3001 (Press CTRL+C to quit)
INFO:     Started parent process [29300]
INFO:     Started server process [29302]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Started server process [29303]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
```

The backend API can be reached at localhost:3001; the API documentation is shown when the base path is visited in a browser.
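
To verify the API responds without opening a browser (a sketch; the exact response depends on your configuration):

```
# the API should answer on port 3001
curl -i http://localhost:3001/
```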

### Celery worker
To have a worker running that picks up simulation tasks as well as location precision tasks, we need to start a Celery worker.

With the venv activated as shown in Backend API, run the following command to start a worker:
```
/OS2fleetoptimiser >> celery -A fleetmanager.tasks.celery worker --pool threads
 -------------- celery@LAPTOP-john v5.4.0 (opalescent)
--- ***** -----
-- ******* ---- Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 2024-11-21 14:42:49
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app:         fleetmanager_0fd19aa5f2d348669935339d76e3b414:0x7f15eb141120
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     redis://localhost/
- *** --- * --- .> concurrency: 8 (thread)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> default          exchange=celery(direct) key=celery
```

As shown above, the worker posts results to Redis running on localhost and expects its message broker on localhost port 5672.
Hence, we need to make sure that Redis and RabbitMQ are running locally on the expected ports, 6379 and 5672 respectively.
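
A quick way to confirm both are reachable before starting the worker (a sketch; the RabbitMQ check only verifies that the AMQP port accepts connections, and assumes `nc` is available):

```
# redis should answer PONG
redis-cli -h localhost -p 6379 ping

# rabbitmq should accept TCP connections on the AMQP port
nc -zv localhost 5672
```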

### Communication
Make sure that the services use the correct default ports for successful communication between them.

| from           | to       | default port (localhost) |
|----------------|----------|--------------------------|
| frontend       | backend  | 3001                     |
| backend/celery | rabbitmq | 5672                     |
| backend/celery | redis    | 6379                     |

In development, it is easiest to have redis and/or rabbitmq running as docker containers with a mapping to the localhost ports.

### Docker
Any of the services can also be run with docker/compose, as exemplified in the `docker-compose-dev.yaml` file, where the
backend and worker are run containerised and the frontend is run separately. Hence, the backend maps to localhost:3001.

In that scenario, you would run
```
docker-compose -f docker-compose-dev.yaml up -d
docker run -d -p 5672:5672 rabbitmq:4.0-management
docker run -d -p 6379:6379 redis
npm run dev
```
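
Once the containers are up, you can check their status and logs (a sketch; service and container names depend on your compose file and local setup):

```
# services started by the dev compose file
docker-compose -f docker-compose-dev.yaml ps
docker-compose -f docker-compose-dev.yaml logs -f

# the standalone rabbitmq and redis containers show up under docker ps
docker ps
```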

See [production](production.html) for environment variables for the backend.