To run the adaptor you will need:
- Both the nia-ps-adaptor (aka translator) and nia-ps-facade containers running
- A populated PostgreSQL DB (for more details see Database Requirements)
- A message broker
- An instance of the MHS Adaptor running
The Adaptor's services emit logs which are captured by the Docker containers they are hosted within. Whichever Docker container orchestration technology is used, the log streams can be captured and forwarded to an appropriate log indexing service for consumption, storage and subsequent querying.
The consumption of these logs forms an essential part of issue investigation and resolution.
The log messages relating to a specific transfer can be identified by the Conversation ID, a correlating ID present throughout the patient record migration and carried in the GP2GP messages themselves.
Based on the use cases outlined in Performance, allocating 2 vCPUs and 4 GB RAM to the facade and translator will handle most patient record transfers. During testing of receiving an electronic health record larger than 100 MB, which we considered to be beyond the upper limit based on analysis of GP2GP transfers made in early 2024, we identified a need to increase the translator RAM to 8 GB for the transfer to complete successfully.
yyyy-mm-dd HH:mm:ss.SSS Level=DEBUG Logger=u.n.a.p.t.s.BundleMapperService ConversationId=6836FD37-B856-4167-A087-7E3989020FA3 Thread="org.springframework.jms.JmsListenerEndpointContainer#0-1" Message="Mapped Bundle with [261] entries"
- Level: The logging level of the message (INFO/DEBUG/WARN/ERROR)
- Logger: The name of the Java class that emitted the message
- ConversationId: The ID correlating all messages for a patient transfer
- Message: The log message
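For example, when the services are run directly with Docker, the logs for a single transfer could be isolated by filtering on the Conversation ID. This is a minimal sketch; the container name (ps-translator) is a placeholder for whatever name your deployment uses.

```sh
# Stream the translator container's logs and keep only the lines for one transfer.
# "ps-translator" is a hypothetical container name - substitute your own.
docker logs --follow ps-translator \
  | grep "ConversationId=6836FD37-B856-4167-A087-7E3989020FA3"
```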
The Adaptor conforms to the GP2GP specification by timing out in-progress transfers. This ensures transfers are ended gracefully in the scenario where a GP2GP message has not been received.
The timeout datetime is calculated using the following formula:
Timeout [secs] = (A x persistDuration contract property of EHR Response [secs])
               + (B x number of COPC (Common Point to Point) EHR messages
                    x persistDuration contract property of COPC messages [secs])
The formula includes adjustable weightings (A and B) to offset potential transmission delays.
From the documentation:
A & B are weighting factors associated with general message transmission delays and volume based throughput times to allow adjustment if required ....
The persistDuration of each message is unique to the sending organisation and is obtained from the Spine Directory Service (SDS) FHIR API. Responses for an organisation's message type are cached by default; the frequency at which the cache is updated is configurable via the environment variable TIMEOUT_SDS_POLL_FREQUENCY.
The adaptor checks incomplete transfers periodically, at a default frequency of every two hours. However, this is configurable via the environment variable TIMEOUT_CRON_TIME.
Should you wish to apply a maximum timeout period, thus bypassing the above logic, you may do so via the MIGRATION_TIMEOUT_OVERRIDE environment variable.
For more configuration options see the Migration timeout variables section.
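For example, a deployment might run the timeout check hourly rather than every two hours. The value below is illustrative only and uses the 6-field cron format described in that section.

```sh
# Illustrative only: run the timeout check at the top of every hour.
# Format: <second> <minute> <hour> <day of month> <month> <day of week>
export TIMEOUT_CRON_TIME="0 0 * * * *"
```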
A separate guide identifies dependent components and how the Adaptor behaves when they are unavailable.
- The adaptor requires a PostgreSQL database
- The adaptor stores the identifiers, status, and metadata for each patient transfer
- The adaptor uses the database as a source of SNOMED information
- Deleting the database, or its records, will cause any in-progress transfers to fail
- In addition to the /Patient/$gpc.migratestructuredrecord endpoint, the database can be used to monitor for any failed or incomplete transfers
The adaptor uses Liquibase to perform DB migrations. New versions of the Adaptor may require DB changes, which will necessitate the execution of the migration script before the new version of the application can be executed.
The DB migration is built as a Docker image, hosted on DockerHub under nhsdev/nia-ps-db-migration.
Required environment variables:
- POSTGRES_PASSWORD e.g. super5ecret
- PS_DB_OWNER_NAME e.g. postgres
- PS_DB_URL e.g. jdbc:postgresql://hostname:port
- GPC_FACADE_USER_DB_PASSWORD e.g. another5ecret, used when creating the user gpc_user
- GP2GP_TRANSLATOR_USER_DB_PASSWORD e.g. yetanother5ecret, used when creating the user gp2gp_user
For example, the DB migration can be run as an ECS task in AWS.
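As a simpler local illustration, the same image can be run directly with Docker. This is a minimal sketch using the example values above; the hostname, port and passwords are illustrative only and will depend on your environment.

```sh
# Run the Liquibase DB migration against an existing PostgreSQL instance.
# Hostname, port and passwords below are placeholders.
docker run --rm \
  -e POSTGRES_PASSWORD=super5ecret \
  -e PS_DB_OWNER_NAME=postgres \
  -e PS_DB_URL=jdbc:postgresql://hostname:5432 \
  -e GPC_FACADE_USER_DB_PASSWORD=another5ecret \
  -e GP2GP_TRANSLATOR_USER_DB_PASSWORD=yetanother5ecret \
  nhsdev/nia-ps-db-migration
```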
When passing passwords into this script, it is the responsibility of the supplier to ensure that passwords are kept secure by using appropriate controls within their infrastructure.
The adaptor requires an up-to-date copy of the SNOMED DB as part of translating FHIR CodeableConcepts.
The SNOMED loader script is built as a Docker image, hosted on DockerHub under nhsdev/nia-ps-snomed-schema.
Running the loader script will delete any existing SNOMED data, and then proceed to populate it using the provided extract.
Required environment variables:
- PS_DB_OWNER_NAME e.g. postgres
- POSTGRES_PASSWORD e.g. super5ecret
- PS_DB_HOST e.g. hostname.domain.com
- PS_DB_PORT e.g. 5432
The Docker container has one required argument: the path to a zipped SNOMED CT RF2 release file. The container does not come bundled with any SNOMED data itself; you will need to provide this file to the container.
The SNOMED loader script is also responsible for populating the materialised view immunization_codes, which is used to identify which Observations are to be treated as Immunizations. Details of how these are built are provided in the snomed database loader documentation (README.md).
To test that immunization codes are loaded correctly, the script test-load-immunization-codes.sh can be executed against the database using the required environment variables listed above.
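A minimal sketch of running that check, assuming the script is executed from a local checkout and reads the same environment variables listed above:

```sh
# Illustrative values only - point these at the database that was just loaded.
export PS_DB_OWNER_NAME=postgres
export POSTGRES_PASSWORD=super5ecret
export PS_DB_HOST=hostname.domain.com
export PS_DB_PORT=5432
./test-load-immunization-codes.sh
```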
When passing passwords into this script, it is the responsibility of the supplier to ensure that passwords are kept secure by using appropriate controls within their infrastructure.
Example usage:
$ docker run --rm -e PS_DB_OWNER_NAME=postgres -e POSTGRES_PASSWORD=super5ecret -e PS_DB_HOST=postgres -e PS_DB_PORT=5432 \
-v /path/to/uk_sct2mo_41.0.0_20250924000001Z.zip:/snomed/uk_sct2mo_41.0.0_20250924000001Z.zip \
nhsdev/nia-ps-snomed-schema /snomed/uk_sct2mo_41.0.0_20250924000001Z.zip
We do not provide the SNOMED database files as part of the adaptor installation, as they are updated regularly under TRUD (Technology Reference Update Distribution). To acquire the most recent SNOMED database:
- Head to https://isd.digital.nhs.uk/ and create a new account.
- Log in
- Search for the following: SNOMED CT UK Monolith Edition, RF2: Snapshot. We recommend the full Monolith edition, not the delta version.
- Subscribe to the data store.
- Once subscribed, you will be able to download the most recent version of the SNOMED DB; at the time of writing this is release 36.0.0 (uk_sct2mo_36.0.0_20230412000001Z.zip).
- Now run the loader script as described above, and the SNOMED database will be installed for you.
You will now receive email notifications from TRUD once the subscribed data source is updated. We recommend updating your SNOMED version as soon as you receive the notification. To do this:
- Log in to https://isd.digital.nhs.uk/
- Download the newest version of the SNOMED Monolith edition.
- Before continuing, please be aware that the SNOMED database will be unavailable whilst being rebuilt. All instances of the translator service should be stopped before performing the SNOMED update; any in-progress GP2GP transfers will be on hold while the translator is stopped. The facade does not need to be stopped, so API requests can continue to be made.
- Run the loader script as described above.
- Start the translator service again, which will resume processing any in progress transfers.
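The update sequence might look like the following when the services are run directly with Docker. This is only a sketch: the container name (ps-translator), database hostname and release file name are placeholders for your own deployment.

```sh
# Stop every translator instance so the SNOMED schema can be rebuilt safely.
docker stop ps-translator

# Rebuild the SNOMED schema from the newly downloaded Monolith release.
docker run --rm -e PS_DB_OWNER_NAME=postgres -e POSTGRES_PASSWORD=super5ecret \
  -e PS_DB_HOST=postgres -e PS_DB_PORT=5432 \
  -v /path/to/new_release.zip:/snomed/new_release.zip \
  nhsdev/nia-ps-snomed-schema /snomed/new_release.zip

# Restart the translator, which will resume any in-progress transfers.
docker start ps-translator
```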
The service uses a queue for communication between the HTTP facade and the GP2GP translator.
For this communication to be successful, each service needs to be configured to use the same queue.
The MHS Inbound Adaptor accepts incoming HTTPS Spine messages and pushes them onto ActiveMQ.
%%{ init: { 'flowchart': { 'nodeSpacing': 80, 'rankSpacing': 80, 'curve': 'stepBefore' } } }%%
graph LR
ActiveMQ[(ActiveMQ)];
style ActiveMQ fill:#000080,color:#fff
RequestingAdaptor[GP2GP FHIR Request Adaptor];
SendingAdaptor[GP2GP FHIR Send Adaptor];
MHSInbound[MHS Inbound Adaptor];
ActiveMQ -- MHS Inbound Queue --> RequestingAdaptor
RequestingAdaptor -- GP2GP Inbound Queue --> ActiveMQ
ActiveMQ -- GP2GP Inbound Queue --> SendingAdaptor
MHSInbound -- MHS Inbound Queue --> ActiveMQ
The set-up shown above is described as the daisy-chaining configuration. In this mode, the Request Adaptor and Send Adaptor execute against a single instance of the MHS Adaptor. Messages received by the GP2GP FHIR Request Adaptor with a conversation ID it doesn't recognise are forwarded to the GP2GP FHIR Send Adaptor queue.
When the daisy-chaining configuration is disabled, the adaptor will put messages it doesn't recognise into the dead letter queue.
In the diagram above there is a single broker for all queues, but the adaptor supports having separate brokers for each queue.
An example daisy-chaining environment is provided in /test-suite/daisy-chaining/, with each environment variable described within the Inbound message queue variables section.
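As an illustration of the variables involved (listed in full in that section), daisy chaining could be enabled with settings along these lines; the broker URL and credentials are placeholders only.

```sh
# Enable daisy chaining and point the adaptor at the GP2GP Adaptor's inbound queue.
# Broker URL, username and password are placeholders.
export PS_DAISY_CHAINING_ACTIVE=true
export GP2GP_AMQP_BROKERS=amqps://broker.example.com:5671
export GP2GP_MHS_INBOUND_QUEUE=gp2gpInboundQueue
export GP2GP_AMQP_USERNAME=gp2gp-user
export GP2GP_AMQP_PASSWORD=changeit
```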
The adaptor will put messages it doesn't recognise into the dead letter queue.
Additionally, any message which is recognised but can't be processed due to an error is sent to the dead letter queue once the number of attempted redeliveries exceeds the threshold.
The number of redeliveries is configurable with the MHS_AMQP_MAX_REDELIVERIES environment variable.
- The broker must be configured with a limited number of retries and dead-letter queues
- It is the responsibility of the GP supplier to configure adequate monitoring against the dead-letter queues that allows ALL undeliverable messages to be investigated fully.
- The broker must use persistent queues to avoid loss of data
- The Adaptor has been assured against ActiveMQ; testing other MQ implementations is the responsibility of the GP supplier
Using AmazonMQ
- A persistent broker (not in-memory) must be used to avoid data loss.
- A configuration profile that includes settings for retry and dead-lettering, and that places non-persistent messages onto the dead letter queue, must be applied.
- AmazonMQ uses the scheme amqp+ssl:// but this MUST be changed to amqps:// when configuring the adaptor.
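For example, given a hypothetical AmazonMQ endpoint copied from the AWS console, the scheme would be rewritten like so (the broker ID and region are illustrative):

```sh
# Endpoint as shown by AmazonMQ (illustrative):
#   amqp+ssl://b-12345678-abcd.mq.eu-west-2.amazonaws.com:5671
# Value to supply to the adaptor:
export PS_AMQP_BROKER=amqps://b-12345678-abcd.mq.eu-west-2.amazonaws.com:5671
```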
Using Azure Service Bus
- The ASB must use MaxDeliveryCount and dead-lettering
- Azure Service Bus may require some parameters as part of the URL configuration. For example:
PS_AMQP_BROKER=amqps://<NAME>.servicebus.windows.net/;SharedAccessKeyName=<KEY NAME>;SharedAccessKey=<KEY VALUE>
GP2GP messaging splits the patient's Electronic Health Record (EHR) into an EHR Extract and associated attachments. The adaptor uses AWS / Azure object storage to manage the attachments.
It is the responsibility of the GP System Supplier to manage the data stored in object storage post transfer.
The sending GP2GP system can split attachments into multiple parts that the adaptor must reassemble. It can also send the EHR Extract itself as a compressed attachment. Therefore, an incomplete or failed transfer could have both of these uploaded to object storage; if a transfer fails, they will not be deleted by the adaptor.
Assembled attachments will be uploaded to object storage. The adaptor obtains a URL for each attachment, which it inserts into the returned FHIR bundle. When using AWS S3 this URL is pre-signed and valid for 60 minutes from the point at which the bundle was generated. After this time, the S3 download URL will expire, but no files will be deleted from the S3 bucket.
The pre-assembled attachment parts are removed from storage when an attachment is assembled. However, if the transfer contains a compressed EHR Extract this is not removed from storage automatically.
Attachment files are named as conversationId_documentId, where documentId is the name of the file (including extension) and conversationId is an identifier unique to the transfer.
In the event of an upload failure the adaptor will retry. By default, the retry limit is 3. However, this is configurable via the STORAGE_RETRY_LIMIT environment variable.
The adaptor requires permission to read, write and delete from the object storage bucket / container. For example, in AWS
the adaptor would require permission to perform the actions s3:GetObject, s3:PutObject and s3:DeleteObject.
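For AWS, a minimal sketch of such a policy might look like the following; the bucket name (example-ps-attachments) is hypothetical and should be replaced with your attachment bucket.

```sh
# Write an illustrative IAM policy document granting only the three required actions.
# "example-ps-attachments" is a placeholder bucket name.
cat > ps-adaptor-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example-ps-attachments/*"
    }
  ]
}
EOF
```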
The adaptor defaults to the LocalMock storage option, which is designed only for testing and MUST NOT be used in production. Attachments are not stored in a long-term storage area when using LocalMock.
For more configuration see the Attachment storage variables section.
The Adaptors team have their own AWS environment which they use for deploying the GP2GP, Patient Switching and MHS adaptors. The infrastructure as code can be used as an example and is found inside the integration-adaptors-deployment repository.
These queue variables need to be the same between the translator and facade containers, as this is how the two services communicate.
Required
- PS_AMQP_BROKER: The location of the PS Adaptor's queue. This should be set to the URL of a single JMS broker (the PS Adaptor does not support concurrent PS Adaptor brokers), default = amqp://localhost:5672
- PS_AMQP_USERNAME: The username for accessing the PS broker
- PS_AMQP_PASSWORD: The password for accessing the PS broker
Optional
- PS_QUEUE_NAME: The name of the patient switching queue, default = pssQueue
- PS_AMQP_MAX_REDELIVERIES: Number of times a message on the PS_QUEUE_NAME queue will be retried before being abandoned, default = 3
Optional
- PS_LOGGING_LEVEL: Spring logging level. Use DEBUG for diagnosing problems in test environments, default = INFO
Required
- PS_DB_URL: JDBC URL for the PostgreSQL database service, default = jdbc:postgresql://localhost:5436
Optional
- GPC_FACADE_SERVER_PORT: HTTP server port for the service's endpoints, default = 8081
Optional configuration for enabling Transport Layer Security for the HTTP server. See also the Apache Tomcat SSL/TLS guidance.
Optional
- SSL_ENABLED: Provide true to enable TLS, default = false
- KEY_STORE: Path to the keystore
- KEY_PASSWORD: Server private key password
- KEY_STORE_PASSWORD: Keystore password
- TRUST_STORE: Path to the truststore
- TRUST_STORE_PASSWORD: Truststore password
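For illustration, TLS might be enabled with values along these lines; the keystore and truststore paths and passwords are placeholders only.

```sh
# Placeholder paths and passwords - supply your own keystore and truststore.
export SSL_ENABLED=true
export KEY_STORE=/certs/facade-keystore.p12
export KEY_STORE_PASSWORD=changeit
export KEY_PASSWORD=changeit
export TRUST_STORE=/certs/facade-truststore.p12
export TRUST_STORE_PASSWORD=changeit
```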
Required
- GPC_FACADE_USER_DB_PASSWORD: DB password for the gpc_user user
The recommended heap space for the Translator is 4 GB. It should also be run on at least two CPUs for better GC performance.
Optional
- GP2GP_TRANSLATOR_SERVER_PORT: HTTP server port exposing a /healthcheck endpoint, default = 8085
Required
- GP2GP_TRANSLATOR_USER_DB_PASSWORD: DB password for the gp2gp_user user
Required
- MHS_AMQP_BROKER: The location of the MHS Adaptor's inbound queue. This should be set to the URL of a single JMS broker (the Adaptor does not support concurrent MHS Adaptor brokers), default = amqp://localhost:5672
- MHS_AMQP_USERNAME: The username for accessing the MHS broker
- MHS_AMQP_PASSWORD: The password for accessing the MHS broker
Optional
- MHS_QUEUE_NAME: The name of the MHS Adaptor's inbound queue, default = mhsQueue
- MHS_AMQP_MAX_REDELIVERIES: Number of times a message on the MHS queue will be retried before being abandoned, default = 3
- MHS_DLQ_PREFIX: Prefix added to MHS_QUEUE_NAME for unprocessable messages, default = DLQ.
- PS_DAISY_CHAINING_ACTIVE: Set to true to enable daisy-chaining, default = false
- GP2GP_AMQP_BROKERS: The location of the GP2GP Adaptor's inbound queue. This should be set to the URL of a single JMS broker (the Adaptor does not support concurrent GP2GP Adaptor brokers), default = amqp://localhost:5672
- GP2GP_MHS_INBOUND_QUEUE: The name of the GP2GP Adaptor's inbound queue, default = gp2gpInboundQueue
- GP2GP_AMQP_USERNAME: The username for accessing the GP2GP broker
- GP2GP_AMQP_PASSWORD: The password for accessing the GP2GP broker
Required
- MHS_BASE_URL: URL of the MHS Outbound Adaptor, default = http://localhost:8080
The following variables are used for storing attachments.
Required
- STORAGE_TYPE: The type of object storage to use for attachments (S3, Azure or LocalMock), default = LocalMock
- STORAGE_REGION: The AWS region of the S3 bucket, leave blank if using Azure
- STORAGE_CONTAINER_NAME: The name of the Azure Storage container or Amazon S3 bucket
- STORAGE_REFERENCE: The Azure account name or AWS Access Key ID (leave undefined if using an AWS instance role)
- STORAGE_SECRET: The Azure account key or the AWS Secret Access Key (leave undefined if using an AWS instance role)
Optional
- STORAGE_RETRY_LIMIT: The number of retries that are performed when uploading an attachment to storage before failing the transfer, default = 3
The following variables are used to determine if a migration has timed out:
Required
- SDS_API_KEY: Your SDS FHIR API Key
Optional
- SDS_BASE_URL: The URL of the SDS FHIR API, default = https://api.service.nhs.uk/spine-directory/FHIR/R4
- TIMEOUT_CRON_TIME: The frequency of the timeout check, specified as a cron expression. Format = <second> <minute> <hour> <day of month> <month> <day of week>, default = 0 0 */2 * * * (i.e. every 2 hours)
- TIMEOUT_SDS_POLL_FREQUENCY: The frequency at which SDS is polled for updated message persist durations, defined in terms of the number of times a migration has been identified by the timeout cron, default = 3
- TIMEOUT_EHR_EXTRACT_WEIGHTING: The weighting factor A, to account for transmission delays and volume throughput times of the RCMR_IN030000UK06 message, default = 1
- TIMEOUT_COPC_WEIGHTING: The weighting factor B, to account for transmission delays and volume throughput times of the COPC_IN000001UK01 message, default = 1
- MIGRATION_TIMEOUT_OVERRIDE: Overwrite the existing timeout logic with a fixed 48 hour maximum timeout period, default = false