133 Commits

Author SHA1 Message Date
ba3227545d chore(release): 0.0.1-alpha.4
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m4s
Release and Build Image / release (push) Successful in 12s
2026-04-15 07:31:49 -05:00
84909bfcf8 ci(service): changes to the script to allow running the PowerShell under execution policy restrictions
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
2026-04-15 07:31:06 -05:00
e0d0ac2077 feat(datamart): psi data has been added :D 2026-04-15 07:29:35 -05:00
52a6c821f4 fix(datamart): error when running the build that crashed everything
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m34s
2026-04-14 20:30:34 -05:00
eccaf17332 feat(datamart): migrations completed; remaining is the deactivation that will be run by analytics
Some checks failed
Build and Push LST Docker Image / docker (push) Failing after 39s
2026-04-14 20:25:20 -05:00
6307037985 feat(tcp crud): tcp server start, stop, restart endpoints + status check
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m30s
2026-04-13 17:30:47 -05:00
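The start/stop/restart endpoints above are roughly this shape; a minimal sketch assuming a plain node:net server with in-memory state (the handler and function names are illustrative, not the repo's code):

```ts
// Hypothetical sketch of a start/stop/status-controllable TCP server;
// the echo handler and names are illustrative, not the repo's code.
import net from "node:net";

let server: net.Server | null = null;

export const startTcp = (port: number): string => {
  if (server) return "already running";
  server = net.createServer((socket) => socket.pipe(socket)); // placeholder handler
  server.listen(port);
  return "started";
};

export const stopTcp = (): string => {
  if (!server) return "not running";
  server.close();
  server = null;
  return "stopped";
};

export const restartTcp = (port: number): string => {
  stopTcp();
  return startTcp(port);
};

export const tcpStatus = (): string => (server ? "running" : "stopped");
```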
4b6061c478 ci(agent): added in sherman
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m36s
2026-04-13 15:36:50 -05:00
fc6dc82d84 refactor(services): added in examples for migration stuff 2026-04-13 15:36:29 -05:00
6ba905a887 docs(docs): removed Docusaurus as all docs will be inside lst now to better assist users 2026-04-13 15:36:02 -05:00
f33587a3d9 refactor(sql): corrections to the way we reconnect so the app can error out and be reactivated later 2026-04-13 15:35:12 -05:00
80189baf90 feat(ocp): printer sync and logging logic added 2026-04-13 15:34:18 -05:00
87f738702a docs(notifications): docs for intro, notifications, reprint added
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m25s
2026-04-10 21:35:12 -05:00
38a0b65e94 refactor(connection): corrected the connection to the old system 2026-04-10 21:33:55 -05:00
9a0ef8e51a refactor(notification): blocking added 2026-04-10 21:33:26 -05:00
dcb3f2dd13 refactor(server): added in serverCrash email 2026-04-10 21:32:25 -05:00
e47ea9ec52 ci(agent): added in jeff city 2026-04-10 21:31:57 -05:00
ca3425d327 docs(env example): updated the file 2026-04-10 21:30:46 -05:00
3bf024cfc9 refactor(agent): changed to have the test servers on their own push for better testing
production servers will soon pull a build from git rather than push the zip, so splitting things up now
2026-04-10 14:12:02 -05:00
9d39c13510 refactor(purchase): changed how the error handling works so a better email can be sent 2026-04-10 13:58:30 -05:00
c9eb59e2ad refactor(reprint): new query added to deactivate the old notification so there is no chance of duplicates 2026-04-10 13:57:52 -05:00
b0e5fd7999 feat(migrate): quality alert migrated 2026-04-10 13:57:15 -05:00
07ebf88806 refactor(templates): corrections for new notify process on critical errors 2026-04-10 10:33:01 -05:00
79e653efa3 refactor(logging): when notify is true send the error to systemAdmins 2026-04-10 10:32:20 -05:00
d05a0ce930 chore(release): 0.0.1-alpha.3
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m1s
Release and Build Image / release (push) Successful in 11s
2026-04-10 08:22:16 -05:00
995b1dda7c refactor(send email): changed the error message to show the true message in the error
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m3s
2026-04-09 21:15:26 -05:00
97f93a1830 refactor(reprints): changed the module and submodule around to be more accurate 2026-04-09 21:14:36 -05:00
635635b356 refactor(gp connect): gp connect was added to long-lived services 2026-04-09 21:13:38 -05:00
a691dc276e feat(purchase hist): finished up purchase historical / gp updates 2026-04-09 21:12:43 -05:00
8dfcbc5720 chore(release): 0.0.1-alpha.2
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m29s
Release and Build Image / release (push) Successful in 17s
2026-04-08 16:13:38 -05:00
103ae77e9f build(release): docker and release corrections
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
2026-04-08 16:12:54 -05:00
beeccc6e8d chore(release): 0.0.1-alpha.1
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
Release and Build Image / release (push) Failing after 15s
2026-04-08 15:58:21 -05:00
0880298cf5 refactor(opendock): refactor on how releases are posted; this was a bug fix, or maybe just a better refactor
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
2026-04-08 15:57:20 -05:00
34b0abac36 feat(purchase history): purchase history changed to long running, no notification 2026-04-08 15:55:25 -05:00
28c226ddbc build(agent): added westbend into the flow 2026-04-07 22:33:38 -05:00
42861cc69e feat(purchase): historical data capture for alpla purchase 2026-04-07 22:33:11 -05:00
5f3d683a13 refactor(notification): reprint - removed a console log as it shouldn't be there 2026-04-06 16:41:39 -05:00
a17787e852 feat(notification): reprint added
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m6s
2026-04-06 16:01:06 -05:00
5865ac3b99 feat(notification): base notification sub and admin completed
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m59s
users can now sub to a notification and remove themselves; an admin can remove them, and updates to add new emails work as well
2026-04-06 12:59:30 -05:00
637de857f9 feat(user notifications): added the ability for users to sub to notifications and add multi email 2026-04-06 09:29:46 -05:00
3ecf5fb916 refactor(userprofile): changed to have the table be blank and say nothing is subscribed
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m32s
later we will leave this off the profile and add it once at least one notification is subscribed
2026-04-05 20:50:27 -05:00
92ba3ef512 docs(readme): updated progress data
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m18s
2026-04-05 20:44:49 -05:00
7d6c2db89c style(notification): style changes to the notification card and started the table
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m49s
2026-04-03 17:16:58 -05:00
74262beb65 refactor(notification): select menu looks proper now 2026-04-03 17:16:31 -05:00
f3b8dd94e5 refactor(queries): changed dev version to be 1500ms vs 5000ms 2026-04-03 17:16:02 -05:00
0059b9b850 build(changelog): reset the changelog after all the crap testing 2026-04-03 17:15:22 -05:00
1ad789b2b9 chore(release): 0.1.0-alpha.12
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m45s
Release and Build Image / release (push) Successful in 10s
2026-04-03 16:54:44 -05:00
079478f932 fix(typo): more damn typos 2026-04-03 16:54:29 -05:00
d6d5b451cd chore(release): 0.1.0-alpha.11
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m45s
Release and Build Image / release (push) Successful in 10s
2026-04-03 16:49:20 -05:00
76747cf917 fix(release): typo that caused errors 2026-04-03 16:49:12 -05:00
6e85991062 refactor(release): changes to only have the changelog in the release 2026-04-03 16:43:17 -05:00
98e408cb85 chore(release): 0.1.0-alpha.10
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m48s
Release and Build Image / release (push) Successful in 1m22s
2026-04-03 15:30:02 -05:00
ed052dff3c refactor(changelog): reverted back to commit-changelog; liked it more than changesets for a solo dev 2026-04-03 15:29:49 -05:00
8f59bba614 chore(release): 0.1.0-alpha.9
All checks were successful
Release and Build Image / release (push) Successful in 1m52s
2026-04-03 15:22:26 -05:00
fb2c5609aa chore(release): version packages
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m46s
Release and Build Image / release (push) Successful in 1m20s
2026-04-03 13:06:52 -05:00
17aed6cb89 fix(lala): something here 2026-04-03 13:06:14 -05:00
b02b93b83f chore(release): version packages
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m50s
Release and Build Image / release (push) Successful in 1m26s
2026-04-03 12:51:52 -05:00
9ceba8b5bb fix(i suck): more learning experience 2026-04-03 12:51:11 -05:00
2c0dbf95c7 chore(release): version packages
Some checks failed
Build and Push LST Docker Image / docker (push) Successful in 1m50s
Release and Build Image / release (push) Failing after 1m22s
2026-04-03 12:44:43 -05:00
860207a60b fix(build): typo 2026-04-03 12:44:16 -05:00
5c6460012a chore(release): version packages
Some checks failed
Build and Push LST Docker Image / docker (push) Successful in 1m54s
Release and Build Image / release (push) Failing after 1m43s
2026-04-03 12:37:54 -05:00
be1d4081e0 docs(sop): added more info 2026-04-03 12:37:13 -05:00
83a94cacf3 fix(build): typo in how we pushed the header over
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m20s
2026-04-03 12:33:20 -05:00
0ce3790675 chore(release): version packages
Some checks failed
Build and Push LST Docker Image / docker (push) Successful in 1m51s
Release and Build Image / release (push) Failing after 1m23s
2026-04-03 12:23:13 -05:00
5854889eb5 refactor(build): added in more info to the release section 2026-04-03 12:22:26 -05:00
4caaf74569 chore(release): version packages
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m49s
Release and Build Image / release (push) Successful in 1m22s
2026-04-03 12:09:59 -05:00
fe889ca757 fix(build): issue with how i wrote the release token
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
2026-04-03 12:08:57 -05:00
699c124b0e chore(release): version packages
Some checks failed
Build and Push LST Docker Image / docker (push) Successful in 1m42s
Release and Build Image / release (push) Failing after 6s
2026-04-03 11:56:40 -05:00
7d55c5f431 refactor(build): changes to the way we do release so it builds as well
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m21s
2026-04-03 11:54:41 -05:00
c4fd74fc93 chore(release): version packages
Some checks failed
Build and Push LST Docker Image / docker (push) Successful in 1m44s
Create Gitea Release / release (push) Failing after 17s
2026-04-03 11:42:52 -05:00
3775760734 fix(release): forgot to save
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m18s
2026-04-03 11:41:27 -05:00
643d12ff18 refactor(build): changes to auto release when we change the version
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
2026-04-03 11:40:09 -05:00
82eaa23da7 chore(release): version packages
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m57s
2026-04-03 11:18:25 -05:00
b18d1ced6d build(build): added a personal SOP to the setup until we move it
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m54s
2026-04-03 11:17:09 -05:00
69c5cf87fd fix(docker): fixes to allow an external url more easily
Some checks failed
Build and Push LST Docker Image / docker (push) Failing after 12s
when running in docker we might be using a different url that's not predefined in the CORS list, so we want to allow one more
2026-04-03 10:49:57 -05:00
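A minimal sketch of that idea: extend a fixed allow-list with one extra origin supplied at container start. The lstCors helper does exist in the repo, but this body and the EXTRA_CORS_ORIGIN variable name are assumptions:

```ts
// Hypothetical body for lstCors: a fixed allow-list plus one extra
// origin from the environment. EXTRA_CORS_ORIGIN is an assumed name.
import cors from "cors";

const allowedOrigins = ["http://localhost:3000", "http://localhost:5173"];
if (process.env.EXTRA_CORS_ORIGIN) {
  allowedOrigins.push(process.env.EXTRA_CORS_ORIGIN);
}

export const lstCors = () =>
  cors({
    origin: allowedOrigins,
    credentials: true,
  });
```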
1fadf0ad25 testing the docker runner
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m28s
2026-04-03 10:15:18 -05:00
beae6eb648 lots of changes with docker
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m57s
2026-04-03 09:51:52 -05:00
82ab735982 add gitea docker workflow
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
2026-04-03 09:51:02 -05:00
dbd56c1b50 helper command set to correct drive now 2026-03-27 18:31:16 -05:00
037a473ab7 added dayton in 2026-03-27 18:31:02 -05:00
32998d417f table and query work 2026-03-27 18:30:50 -05:00
ddcb7e76a3 fixed imports on several files 2026-03-25 06:56:19 -05:00
191cb2b698 changed limas folder after migration 2026-03-25 06:56:01 -05:00
2021141967 notification added in with subs :D 2026-03-20 23:43:52 -05:00
751c8f21ab bug fixes on user 2026-03-20 09:32:17 -05:00
85073c19d2 user forms added 2026-03-19 17:44:17 -05:00
6b8d7b53d0 login form created 2026-03-18 19:14:08 -05:00
e025d0f5cc added iowa test server to the mix 2026-03-18 12:22:14 -05:00
e67e9e6d72 more logging stuff 2026-03-18 12:22:00 -05:00
2846b9cb0d logs route behind protected route and menu 2026-03-16 20:59:05 -05:00
5db2a7fe75 frontend added and socket io 2026-03-16 18:07:23 -05:00
81dc575b4f socket io stuff entered 2026-03-12 15:05:37 -05:00
bf7d765989 correction to monitor opendock activation 2026-03-11 16:23:04 -05:00
4f24fe4660 agent finished and updates servers 2026-03-10 16:41:40 -05:00
68d13b03d3 agent starting :D 2026-03-01 14:10:19 -06:00
c3379919b9 initial settings and auth integrated 2026-02-24 15:53:58 -06:00
326c2e125c socket setup 2026-02-20 16:54:01 -06:00
880902c478 added bruno 2026-02-20 12:19:32 -06:00
100c9ff9be moved brunoapi stuff inside the app 2026-02-20 12:17:51 -06:00
a8af021621 added opendock apt check route 2026-02-20 12:17:39 -06:00
5469a0dc5c scaler updates 2026-02-20 11:05:03 -06:00
2d1f613d39 db cleanups and logging for od 2026-02-20 09:58:20 -06:00
597d990a69 refactored datamart and added a better job monitor 2026-02-19 13:20:20 -06:00
76503f558b feat(frontends): added Vite and Docusaurus into the game 2026-02-18 12:11:51 -06:00
23c000fa7f refactor placement of code 2026-02-17 11:46:57 -06:00
31f8c368d9 refactor(datamart): more work on getting this to be a more dynamic/sync system 2026-01-29 15:09:30 -06:00
81bd4d6dcb feat(server): added in admin section of socketio 2026-01-26 06:27:54 -06:00
152f7042c9 refactor(datamart): added public access 2026-01-26 06:27:23 -06:00
ba4635a7a7 fix(build): can't have docker in the build to build itself, silly 2026-01-13 18:00:18 -06:00
9ca24a266a refactor(prod sql): removed closing the pool; that was weird 2026-01-13 17:57:30 -06:00
e7a0a3ff21 refactor(build): added docker to the build so we are always updated 2026-01-13 17:57:08 -06:00
f40a4acad1 refactor(datamart): refactored the get queries to only send back info not the entire query 2026-01-13 17:56:43 -06:00
74677d12c4 refactor(datamart): more work on the new query system 2026-01-13 17:07:46 -06:00
00b4fb1a0a refactor(logging): transport fixes for dev to production 2026-01-13 17:07:15 -06:00
255ccb0f7d fix(server): default port added in case it's not passed over in the env 2026-01-13 17:06:51 -06:00
d9d182d908 refactor(app): updated base url to be blank if it was not docker or dev 2026-01-13 17:06:25 -06:00
780335d35c feat(docker): added in docker build stuff to run this in docker as well as windows service 2026-01-13 17:05:23 -06:00
e6d996e40b Merge branch 'main' of https://git.tuffraid.net/cowch/lst_v3 2026-01-05 20:31:40 -06:00
b777d87e5a feat(datamart): get, add, update queries 2026-01-05 20:06:15 -06:00
3f989b769f ci(package lock): just a strange update 2026-01-04 17:45:18 -06:00
c06a52a4ac test(datamart): more work on datamart setup 2026-01-03 10:48:56 -06:00
404974dde0 chore(lint): more linting oopsies 2025-12-31 15:17:12 -06:00
4e6d35bc67 test(datamart): more data mart work 2025-12-31 15:14:54 -06:00
04fe1f1bfe ci(linter errors): fixes for linting errors to make the linter happy 2025-12-31 15:14:33 -06:00
2c3a6065bd docs(readme): more updates to the readme for status 2025-12-31 15:14:07 -06:00
b4dbdd6932 test(auth): work on auth login and signup 2025-12-31 15:11:33 -06:00
cc3e823a7d test(datamart): more work on datamart stuff 2025-12-30 21:04:54 -06:00
9eeede7fbe feat(auth): setup login and sign up and can use email or username 2025-12-30 21:04:26 -06:00
ff2cd7e9f8 feat(auth): added in the initial auth setup 2025-12-30 08:04:10 -06:00
9b5a75300a feat(datamart): initial setup of datamart migrations 2025-12-30 08:02:18 -06:00
9531401e56 ci(app): testing and other app config changes 2025-12-30 08:01:16 -06:00
9efd6419b6 refactor(logger): added in the db posting 2025-12-29 06:26:40 -06:00
ea72fd10cd feat(datamart): initial foundation of the datamart setup
this will allow for faster datamart additions and updates
2025-12-23 19:30:34 -06:00
1b200147b7 test(docker): testing on docker stuff 2025-12-23 19:29:17 -06:00
388 changed files with 76225 additions and 1806 deletions

View File

@@ -1,8 +0,0 @@
# Changesets
Hello and welcome! This folder has been automatically generated by `@changesets/cli`, a build tool that works
with multi-package repos, or single-package repos to help you version and publish your code. You can
find the full documentation for it [in our repository](https://github.com/changesets/changesets)
We have a quick list of common questions to get you started engaging with this project in
[our documentation](https://github.com/changesets/changesets/blob/main/docs/common-questions.md)

View File

@@ -1,11 +0,0 @@
{
"$schema": "https://unpkg.com/@changesets/config@3.1.2/schema.json",
"changelog": "@changesets/cli/changelog",
"commit": false,
"fixed": [],
"linked": [],
"access": "restricted",
"baseBranch": "main",
"updateInternalDependencies": "patch",
"ignore": []
}

View File

@@ -1,4 +1,12 @@
node_modules
.git
.env
dist
Dockerfile
docker-compose.yml
npm-debug.log
builds
testFiles
nssm.exe
postgresql-17.9-2-windows-x64.exe
VSCodeUserSetup-x64-1.112.0.msi

.env-example Normal file
View File

@@ -0,0 +1,52 @@
NODE_ENV=development
# Server
PORT=3000
URL=http://localhost:3000
SERVER_IP=10.75.2.38
TIMEZONE=America/New_York
TCP_PORT=2222
# Better auth Secret
BETTER_AUTH_SECRET=
RESET_EXPIRY_SECONDS=3600
# logging
LOG_LEVEL=
# SMTP password
SMTP_PASSWORD=
# opendock
OPENDOCK_URL=https://neutron.opendock.com
OPENDOCK_PASSWORD=
DEFAULT_DOCK=
DEFAULT_LOAD_TYPE=
DEFAULT_CARRIER=
# prodServer: when running on an actual prod server use localhost; this way we don't go out and back in.
PROD_SERVER=
PROD_PLANT_TOKEN=
PROD_USER=
PROD_PASSWORD=
# Tech user for alplaprod api
TEC_API_KEY=
# AD STUFF
# this is mainly used for purchase stuff to reference reqs
LDAP_URL=
# postgres connection
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_USER=
DATABASE_PASSWORD=
DATABASE_DB=
# Gp connection
GP_USER=
GP_PASSWORD=
# how often to check for new/updated queries in min
QUERY_TIME_TYPE=m #valid options are m, h
QUERY_CHECK=1
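A sketch of how those last two values might combine into a polling interval; the variable names come from the file above, the parsing itself is an assumption:

```ts
// Assumed parsing of QUERY_CHECK + QUERY_TIME_TYPE into milliseconds.
const unitMs = process.env.QUERY_TIME_TYPE?.trim() === "h" ? 3_600_000 : 60_000;
const every = Number(process.env.QUERY_CHECK ?? "1");
const pollIntervalMs = every * unitMs; // QUERY_CHECK=1 with "m" => 60000 ms

setInterval(() => {
  // re-read the query table for new/updated datamart queries here
}, pollIntervalMs);
```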

View File

@@ -0,0 +1,31 @@
name: Build and Push LST Docker Image
on:
push:
branches:
- main
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout (local)
run: |
git clone https://git.tuffraid.net/cowch/lst_v3.git .
git checkout ${{ gitea.sha }}
- name: Login to registry
run: echo "${{ secrets.PASSWORD }}" | docker login git.tuffraid.net -u "cowch" --password-stdin
- name: Build image
run: |
docker build \
-t git.tuffraid.net/cowch/lst_v3:latest \
-t git.tuffraid.net/cowch/lst_v3:${{ gitea.sha }} \
.
- name: Push
run: |
docker push git.tuffraid.net/cowch/lst_v3:latest
docker push git.tuffraid.net/cowch/lst_v3:${{ gitea.sha }}

View File

@@ -0,0 +1,229 @@
name: Release and Build Image
on:
push:
tags:
- "v*"
jobs:
release:
runs-on: ubuntu-latest
env:
# Internal/origin Gitea URL. Do NOT use the Cloudflare-fronted URL here.
# Examples:
# http://gitea.internal.lan:3000
# https://gitea-origin.yourdomain.local
GITEA_INTERNAL_URL: "https://git.tuffraid.net"
# Internal/origin registry host. Usually same host as above, but without protocol.
# Example:
# gitea.internal:3000
REGISTRY_HOST: "git.tuffraid.net"
steps:
- name: Check out repository
uses: actions/checkout@v4
- name: Prepare release metadata
shell: bash
run: |
set -euo pipefail
TAG="${GITHUB_REF_NAME:-${GITHUB_REF##refs/tags/}}"
VERSION="${TAG#v}"
IMAGE_NAME="${REGISTRY_HOST}/${{ gitea.repository }}"
echo "TAG=$TAG" >> "$GITHUB_ENV"
echo "VERSION=$VERSION" >> "$GITHUB_ENV"
echo "IMAGE_NAME=$IMAGE_NAME" >> "$GITHUB_ENV"
if [[ "$TAG" == *-* ]]; then
echo "PRERELEASE=true" >> "$GITHUB_ENV"
else
echo "PRERELEASE=false" >> "$GITHUB_ENV"
fi
echo "Resolved TAG=$TAG"
echo "Resolved VERSION=$VERSION"
echo "Resolved IMAGE_NAME=$IMAGE_NAME"
- name: Log in to Gitea container registry
shell: bash
env:
REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
REGISTRY_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -euo pipefail
echo "$REGISTRY_TOKEN" | docker login "$REGISTRY_HOST" -u "$REGISTRY_USERNAME" --password-stdin
- name: Build Docker image
shell: bash
run: |
set -euo pipefail
docker build \
-t "$IMAGE_NAME:$TAG" \
-t "$IMAGE_NAME:latest" \
.
- name: Push version tag
shell: bash
run: |
set -euo pipefail
docker push "$IMAGE_NAME:$TAG"
- name: Push latest tag
if: ${{ !contains(env.TAG, '-') }}
shell: bash
run: |
set -euo pipefail
docker push "$IMAGE_NAME:latest"
- name: Push prerelease channel tag
if: ${{ contains(env.TAG, '-') }}
shell: bash
env:
TAG: ${{ env.TAG }}
run: |
set -euo pipefail
CHANNEL="${TAG#*-}"
CHANNEL="${CHANNEL%%.*}"
echo "Resolved prerelease channel: $CHANNEL"
docker tag "$IMAGE_NAME:$TAG" "$IMAGE_NAME:$CHANNEL"
docker push "$IMAGE_NAME:$CHANNEL"
- name: Extract matching CHANGELOG section
shell: bash
env:
VERSION: ${{ env.VERSION }}
run: |
set -euo pipefail
python3 - <<'PY'
import os
import re
from pathlib import Path
version = os.environ["VERSION"]
changelog_path = Path("CHANGELOG.md")
if not changelog_path.exists():
Path("release_body.md").write_text(f"Release {version}\n", encoding="utf-8")
raise SystemExit(0)
text = changelog_path.read_text(encoding="utf-8")
# Matches headings like:
# ## [0.1.0]
# ## 0.1.0
# ## [0.1.0-alpha.1]
pattern = re.compile(
rf"^##\s+\[?{re.escape(version)}\]?[^\n]*\n(.*?)(?=^##\s+\[?[^\n]+|\Z)",
re.MULTILINE | re.DOTALL,
)
match = pattern.search(text)
if match:
body = match.group(1).strip()
else:
body = f"Release {version}"
if not body:
body = f"Release {version}"
Path("release_body.md").write_text(body + "\n", encoding="utf-8")
print("----- release_body.md -----")
print(body)
print("---------------------------")
PY
- name: Create Gitea release
shell: bash
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
GITEA_REPOSITORY: ${{ gitea.repository }}
GITEA_INTERNAL_URL: ${{ env.GITEA_INTERNAL_URL }}
TAG: ${{ env.TAG }}
PRERELEASE: ${{ env.PRERELEASE }}
run: |
set -euo pipefail
python3 - <<'PY'
import json
import os
import urllib.request
import urllib.error
from pathlib import Path
tag = os.environ["TAG"]
prerelease = os.environ["PRERELEASE"].lower() == "true"
server_url = os.environ["GITEA_INTERNAL_URL"].rstrip("/")
repo = os.environ["GITEA_REPOSITORY"]
token = os.environ["RELEASE_TOKEN"]
body = Path("release_body.md").read_text(encoding="utf-8").strip()
# Check if the release already exists for this tag
get_url = f"{server_url}/api/v1/repos/{repo}/releases/tags/{tag}"
get_req = urllib.request.Request(
get_url,
method="GET",
headers={
"Authorization": f"token {token}",
"Accept": "application/json",
"User-Agent": "lst-release-workflow/1.0",
},
)
existing_release = None
try:
with urllib.request.urlopen(get_req) as resp:
existing_release = json.loads(resp.read().decode("utf-8"))
except urllib.error.HTTPError as e:
if e.code != 404:
details = e.read().decode("utf-8", errors="replace")
print("Failed checking existing release:")
print(details)
raise
payload = {
"tag_name": tag,
"name": tag,
"body": body,
"draft": False,
"prerelease": prerelease,
}
data = json.dumps(payload).encode("utf-8")
if existing_release:
release_id = existing_release["id"]
url = f"{server_url}/api/v1/repos/{repo}/releases/{release_id}"
method = "PATCH"
print(f"Release already exists for tag {tag}, updating release id {release_id}")
else:
url = f"{server_url}/api/v1/repos/{repo}/releases"
method = "POST"
print(f"No release exists for tag {tag}, creating a new one")
req = urllib.request.Request(
url,
data=data,
method=method,
headers={
"Authorization": f"token {token}",
"Content-Type": "application/json",
"Accept": "application/json",
"User-Agent": "lst-release-workflow/1.0",
},
)
try:
with urllib.request.urlopen(req) as resp:
print(resp.read().decode("utf-8"))
except urllib.error.HTTPError as e:
details = e.read().decode("utf-8", errors="replace")
print("Release create/update failed:")
print(details)
raise
PY
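The prerelease channel step above boils down to two bash parameter expansions (`${TAG#*-}` then `${CHANNEL%%.*}`); the same derivation in TypeScript, for clarity:

```ts
// Mirrors CHANNEL="${TAG#*-}"; CHANNEL="${CHANNEL%%.*}" from the step above.
function prereleaseChannel(tag: string): string | null {
  const dash = tag.indexOf("-");
  if (dash === -1) return null; // stable tag like v1.0.0 has no channel
  return tag.slice(dash + 1).split(".")[0] ?? null; // v0.0.1-alpha.4 -> "alpha"
}

console.log(prereleaseChannel("v0.0.1-alpha.4")); // "alpha"
console.log(prereleaseChannel("v1.0.0")); // null
```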

.gitignore vendored
View File

@@ -1,4 +1,16 @@
# ---> Node
testFiles
builds
.includes
.buildNumber
temp
brunoApi
.scriptCreds
node-v24.14.0-x64.msi
postgresql-17.9-2-windows-x64.exe
VSCodeUserSetup-x64-1.112.0.exe
nssm.exe
# Logs
logs
*.log

View File

@@ -11,7 +11,7 @@
{ "type": "ci", "hidden": false, "section": "📈 Project changes" },
{ "type": "build", "hidden": false, "section": "📈 Project Builds" }
],
"commitUrlFormat": "https://git.tuffraid.net/cowch/lst/commits/{{hash}}",
"compareUrlFormat": "https://git.tuffraid.net/cowch/lst/compare/{{previousTag}}...{{currentTag}}",
"commitUrlFormat": "https://git.tuffraid.net/cowch/lst_v3/commits/{{hash}}",
"compareUrlFormat": "https://git.tuffraid.net/cowch/lst_v3/compare/{{previousTag}}...{{currentTag}}",
"header": "# All Changes to LST can be found below.\n"
}

View File

@@ -10,6 +10,7 @@
"\tmessage: \"${5:Failed to connect to the prod sql server.}\",",
"\tdata: ${6:[]},",
"\tnotify: ${7:false},",
"\troom: ${8:''},",
"});"
],
"description": "Insert a returnFunc template"
@@ -22,5 +23,27 @@
"\tsubModule: \"${2:start up}\",",
"});"
]
},
"Create Example Route Template":{
"prefix": "createRoute",
"body":[
"import { Router } from \"express\";",
"\timport { apiReturn } from \"../utils/returnHelper.utils.js\";",
"\tconst r = Router();",
"\tr.post(\"/\", async (req, res) => {",
"\t",
"\tapiReturn(res, {",
"\tsuccess: true,",
"\tlevel: \"info\", //connect.success ? \"info\" : \"error\",",
"\tmodule: \"routes\",",
"\tsubModule: \"auth\",",
"\tmessage: \"Testing route\",",
"\tdata: [],",
"\tstatus: 200, //connect.success ? 200 : 400,",
"});",
"});",
"\texport default r;"
]
}
}
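Both snippets expand into calls to the same response helpers. A minimal sketch of what apiReturn might look like, inferred from the call sites in this diff; the real utils/returnHelper.utils.ts may differ:

```ts
// Hypothetical apiReturn: send one consistent JSON envelope with the
// given HTTP status. Shape inferred from the routes in this diff.
import type { Response } from "express";

interface ApiReturnArgs {
  success: boolean;
  level: "info" | "error";
  module: string;
  subModule: string;
  message: string;
  data: unknown[];
  status: number;
}

export const apiReturn = (res: Response, args: ApiReturnArgs) => {
  const { status, ...body } = args;
  return res.status(status).json(body);
};
```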

.vscode/settings.json vendored
View File

@@ -3,6 +3,8 @@
"workbench.colorTheme": "Default Dark+",
"terminal.integrated.env.windows": {},
"editor.formatOnSave": true,
"typescript.preferences.importModuleSpecifier": "relative",
"javascript.preferences.importModuleSpecifier": "relative",
"editor.codeActionsOnSave": {
"source.fixAll.biome": "explicit",
"source.organizeImports.biome": "explicit"
@@ -52,9 +54,25 @@
"alpla",
"alplamart",
"alplaprod",
"alplapurchase",
"bookin",
"Datamart",
"dotenvx",
"dyco",
"intiallally",
"manadatory",
"OCME",
"onnotice",
"opendock",
"opendocks",
"palletizer",
"ppoo",
"prodlabels"
"preseed",
"prodlabels",
"prolink",
"Skelly",
"trycatch",
"whse"
],
"gitea.token": "8456def90e1c651a761a8711763d6ef225d6b2db",
"gitea.instanceURL": "https://git.tuffraid.net",

View File

@@ -1,7 +1,134 @@
# lst_v3
# All Changes to LST can be found below.
## 1.0.1
## [0.0.1-alpha.4](https://git.tuffraid.net/cowch/lst_v3/compare/v0.0.1-alpha.3...v0.0.1-alpha.4) (2026-04-15)
### Patch Changes
- cf18e94: core stuff
### 🌟 Enhancements
* **datamart:** migrations completed; remaining is the deactivation that will be run by analytics ([eccaf17](https://git.tuffraid.net/cowch/lst_v3/commits/eccaf17332fb1c63b8d6bbea6f668c3bb42d44b7))
* **datamart:** psi data has been added :D ([e0d0ac2](https://git.tuffraid.net/cowch/lst_v3/commits/e0d0ac20773159373495d65023587b76b47df34f))
* **migrate:** quality alert migrated ([b0e5fd7](https://git.tuffraid.net/cowch/lst_v3/commits/b0e5fd79998d551d4f155d58416157a324498fbd))
* **ocp:** printer sync and logging logic added ([80189ba](https://git.tuffraid.net/cowch/lst_v3/commits/80189baf906224da43ec1b9b7521153d2a49e059))
* **tcp crud:** tcp server start, stop, restart endpoints + status check ([6307037](https://git.tuffraid.net/cowch/lst_v3/commits/6307037985162bc6b49f9f711132853296f43eee))
### 🐛 Bug fixes
* **datamart:** error when running build and crashed everything ([52a6c82](https://git.tuffraid.net/cowch/lst_v3/commits/52a6c821f4632e4b5b51e0528a0d620e2e0deffc))
### 📚 Documentation
* **docs:** removed Docusaurus as all docs will be inside lst now to better assist users ([6ba905a](https://git.tuffraid.net/cowch/lst_v3/commits/6ba905a887dbd8f306d71fed75bb34c71fee74c9))
* **env example:** updated the file ([ca3425d](https://git.tuffraid.net/cowch/lst_v3/commits/ca3425d327757120c2cc876fff28e8668c76838d))
* **notifications:** docs for intro, notifications, reprint added ([87f7387](https://git.tuffraid.net/cowch/lst_v3/commits/87f738702a935279a248d471541cdd9d49330565))
### 🛠️ Code Refactor
* **agent:** changed to have the test servers on their own push for better testing ([3bf024c](https://git.tuffraid.net/cowch/lst_v3/commits/3bf024cfc97d2841130d54d1a7c5cb5f09f0f598))
* **connection:** corrected the connection to the old system ([38a0b65](https://git.tuffraid.net/cowch/lst_v3/commits/38a0b65e9450c65b8300a10058a8f0357400f4e6))
* **logging:** when notify is true send the error to systemAdmins ([79e653e](https://git.tuffraid.net/cowch/lst_v3/commits/79e653efa3bcb2941ccee06b28378e709e085ec0))
* **notification:** blocking added ([9a0ef8e](https://git.tuffraid.net/cowch/lst_v3/commits/9a0ef8e51a36e3ab45b601b977f1b5cf35d56947))
* **purchase:** changed how the error handling works so a better email can be sent ([9d39c13](https://git.tuffraid.net/cowch/lst_v3/commits/9d39c13510974b5ada2a6f6c2448da3f1b755a5c))
* **reprint:** new query added to deactivate the old notification so there is no chance of duplicates ([c9eb59e](https://git.tuffraid.net/cowch/lst_v3/commits/c9eb59e2ad9847418ac55cb8a4a91c013f6c97bb))
* **server:** added in serverCrash email ([dcb3f2d](https://git.tuffraid.net/cowch/lst_v3/commits/dcb3f2dd1382986639b722778fad113392533b28))
* **services:** added in examples for migration stuff ([fc6dc82](https://git.tuffraid.net/cowch/lst_v3/commits/fc6dc82d8458a9928050dd3770778d6a6e1eea7f))
* **sql:** corrections to the way we reconnect so the app can error out and be reactivated later ([f33587a](https://git.tuffraid.net/cowch/lst_v3/commits/f33587a3d9a72ca72806635fac9d1214bb1452f1))
* **templates:** corrections for new notify process on critical errors ([07ebf88](https://git.tuffraid.net/cowch/lst_v3/commits/07ebf88806b93b9320f8f9d36b867572dd9a9580))
### 📈 Project changes
* **agent:** added in jeff city ([e47ea9e](https://git.tuffraid.net/cowch/lst_v3/commits/e47ea9ec52a6ebaf5a8f67a7e8bd2c73da6186fb))
* **agent:** added in sherman ([4b6061c](https://git.tuffraid.net/cowch/lst_v3/commits/4b6061c478cbeba7c845dc1c8a015b9998721456))
* **service:** changes to the script to allow running the PowerShell under execution policy restrictions ([84909bf](https://git.tuffraid.net/cowch/lst_v3/commits/84909bfcf85b91d085ea9dca78be00482b7fd231))
## [0.0.1-alpha.3](https://git.tuffraid.net/cowch/lst_v3/compare/v0.0.1-alpha.2...v0.0.1-alpha.3) (2026-04-10)
### 🌟 Enhancements
* **purchase hist:** finished up purchase historical / gp updates ([a691dc2](https://git.tuffraid.net/cowch/lst_v3/commits/a691dc276e8650c669409241f73d7b2d7a1f9176))
### 🛠️ Code Refactor
* **gp connect:** gp connect was added to long-lived services ([635635b](https://git.tuffraid.net/cowch/lst_v3/commits/635635b356e1262e1c0b063408fe2209e6a8d4ec))
* **reprints:** changed the module and submodule around to be more accurate ([97f93a1](https://git.tuffraid.net/cowch/lst_v3/commits/97f93a1830761437118863372108df810ce9977a))
* **send email:** changed the error message to show the true message in the error ([995b1dd](https://git.tuffraid.net/cowch/lst_v3/commits/995b1dda7cdfebf4367d301ccac38fd339fab6dd))
## [0.0.1-alpha.2](https://git.tuffraid.net/cowch/lst_v3/compare/v0.0.1-alpha.1...v0.0.1-alpha.2) (2026-04-08)
### 📈 Project Builds
* **release:** docker and release corrections ([103ae77](https://git.tuffraid.net/cowch/lst_v3/commits/103ae77e9f82fc008a8ae143b6feccc3ce802f8c))
## [0.0.1-alpha.1](https://git.tuffraid.net/cowch/lst_v3/compare/v0.0.1-alpha.0...v0.0.1-alpha.1) (2026-04-08)
* **notification:** style changes to the notification card and started the table ([7d6c2db](https://git.tuffraid.net/cowch/lst_v3/commits/7d6c2db89cae1f137f126f5814dccd373f7ccb76))
### 🌟 Enhancements
* **notification:** base notification sub and admin completed ([5865ac3](https://git.tuffraid.net/cowch/lst_v3/commits/5865ac3b99d60005c4245740369b0e0789c8fbbd))
* **notification:** reprint added ([a17787e](https://git.tuffraid.net/cowch/lst_v3/commits/a17787e85217f1fa4a5e5389e29c33ec09c286c5))
* **purchase history:** purchase history changed to long running, no notification ([34b0aba](https://git.tuffraid.net/cowch/lst_v3/commits/34b0abac36f645d0fe5f508881ddbef81ff04b7c))
* **purchase:** historical data capture for alpla purchase ([42861cc](https://git.tuffraid.net/cowch/lst_v3/commits/42861cc69e8d4aba5a9670aaed55417efda2b505))
* **user notifications:** added the ability for users to sub to notifications and add multi email ([637de85](https://git.tuffraid.net/cowch/lst_v3/commits/637de857f99499a41f7175181523f5d809d95d7e))
### 🐛 Bug fixes
* **build:** issue with how i wrote the release token ([fe889ca](https://git.tuffraid.net/cowch/lst_v3/commits/fe889ca75731af08c42ec714b7f2abf17cd1ee40))
* **build:** typo in how we pushed the header over ([83a94ca](https://git.tuffraid.net/cowch/lst_v3/commits/83a94cacf3fc87287cdc0c0cc861b339e72e4b83))
* **build:** typo ([860207a](https://git.tuffraid.net/cowch/lst_v3/commits/860207a60b6e04b15736cba631be6c7eab74d020))
* **i suck:** more learning experience ([9ceba8b](https://git.tuffraid.net/cowch/lst_v3/commits/9ceba8b5bba17959f27b16b28f50a83c044863fb))
* **lala:** something here ([17aed6c](https://git.tuffraid.net/cowch/lst_v3/commits/17aed6cb89f8220570f6c66f78dba6bb202c1aaa))
* **release:** typo that caused errors ([76747cf](https://git.tuffraid.net/cowch/lst_v3/commits/76747cf91738bd0d0530afcf7b4f51f0db11ca98))
* **typo:** more damn typos ([079478f](https://git.tuffraid.net/cowch/lst_v3/commits/079478f93217dea31c9a1e8ffed85d2381a6977d))
* **release:** forgot to save ([3775760](https://git.tuffraid.net/cowch/lst_v3/commits/377576073449e95d315defb913dc317759cc3f43))
### 📝 Chore
* **release:** 0.1.0-alpha.10 ([98e408c](https://git.tuffraid.net/cowch/lst_v3/commits/98e408cb8577da18e24821b55474198439434f3e))
* **release:** 0.1.0-alpha.11 ([d6d5b45](https://git.tuffraid.net/cowch/lst_v3/commits/d6d5b451cd9aeba642ef94654ca20f4acd0b827c))
* **release:** 0.1.0-alpha.12 ([1ad789b](https://git.tuffraid.net/cowch/lst_v3/commits/1ad789b2b91a20a2f5a8dc9e6f39af2e19ec9cdc))
* **release:** 0.1.0-alpha.9 ([8f59bba](https://git.tuffraid.net/cowch/lst_v3/commits/8f59bba614a8eaa3105bb56f0db36013d5e68485))
* **release:** version packages ([fb2c560](https://git.tuffraid.net/cowch/lst_v3/commits/fb2c5609aa12ea7823783c364d5bd029c48a64bd))
* **release:** version packages ([b02b93b](https://git.tuffraid.net/cowch/lst_v3/commits/b02b93b83f488fbcee6d24db080ad0d1fe1c5f59))
* **release:** version packages ([2c0dbf9](https://git.tuffraid.net/cowch/lst_v3/commits/2c0dbf95c7b8dfd2c98b476d3f44bc8929668c88))
* **release:** version packages ([5c64600](https://git.tuffraid.net/cowch/lst_v3/commits/5c6460012aa70d336fbc9702240b4f19262a6b41))
* **release:** version packages ([0ce3790](https://git.tuffraid.net/cowch/lst_v3/commits/0ce3790675bc408762eafe76cbd5ab496fd06e73))
* **release:** version packages ([4caaf74](https://git.tuffraid.net/cowch/lst_v3/commits/4caaf745693d4df847aefd3721ac5d0ae792114a))
* **release:** version packages ([699c124](https://git.tuffraid.net/cowch/lst_v3/commits/699c124b0efba8282e436210619504bda8878e90))
* **release:** version packages ([c4fd74f](https://git.tuffraid.net/cowch/lst_v3/commits/c4fd74fc93226cffd9e39602f507a05cd8ea628b))
### 📚 Documentation
* **readme:** updated progress data ([92ba3ef](https://git.tuffraid.net/cowch/lst_v3/commits/92ba3ef5121afd0d82d4f40a5a985e1fdc081011))
* **sop:** added more info ([be1d408](https://git.tuffraid.net/cowch/lst_v3/commits/be1d4081e07b0982b355a270b7850a852a4398f5))
### 🛠️ Code Refactor
* **build:** added in more info to the release section ([5854889](https://git.tuffraid.net/cowch/lst_v3/commits/5854889eb5398feebda50a5d256ce7aec39ce112))
* **build:** changes to auto release when we change the version ([643d12f](https://git.tuffraid.net/cowch/lst_v3/commits/643d12ff182827e724e1569a583bd625a0d1dd0c))
* **build:** changes to the way we do release so it builds as well ([7d55c5f](https://git.tuffraid.net/cowch/lst_v3/commits/7d55c5f43173edb48d8709adcb972b7d8fbc3ebd))
* **changelog:** reverted back to commit-changelog; liked it more than changesets for a solo dev ([ed052df](https://git.tuffraid.net/cowch/lst_v3/commits/ed052dff3c81a7064660a7d25685e0505065252c))
* **notification:** reprint - removed a console log as it shouldn't be there ([5f3d683](https://git.tuffraid.net/cowch/lst_v3/commits/5f3d683a13c831229674166cced699e373131316))
* **notification:** select menu looks proper now ([74262be](https://git.tuffraid.net/cowch/lst_v3/commits/74262beb6596ddc971971cc9214a2688accf3a8e))
* **opendock:** refactor on how releases are posted; this was a bug fix, or maybe just a better refactor ([0880298](https://git.tuffraid.net/cowch/lst_v3/commits/0880298cf53d83e487c706e73854e0874ae2d9da))
* **queries:** changed dev version to be 1500ms vs 5000ms ([f3b8dd9](https://git.tuffraid.net/cowch/lst_v3/commits/f3b8dd94e5ebae0cc4dd0a2689a19051942e94b8))
* **release:** changes to only have the changelog in the release ([6e85991](https://git.tuffraid.net/cowch/lst_v3/commits/6e8599106298ed13febd069d6fda8b354efb5b7b))
* **userprofile:** changes to have the table be blank and say nothing subscribed ([3ecf5fb](https://git.tuffraid.net/cowch/lst_v3/commits/3ecf5fb916d5dc1b1ffb224e2142d94f7a9cb126))
### 📈 Project Builds
* **agent:** added westbend into the flow ([28c226d](https://git.tuffraid.net/cowch/lst_v3/commits/28c226ddbc37ab85cd6a9a6aec091def3e5623d6))
* **changelog:** reset the changelog after all the crap testing ([0059b9b](https://git.tuffraid.net/cowch/lst_v3/commits/0059b9b850c9647695a3fecaf5927c2e3ee7b192))

View File

@@ -1,42 +1,50 @@
FROM node:24-alpine AS deps
###########
# Stage 1 #
###########
# Build stage with all dependencies
FROM node:24.12-alpine as build
WORKDIR /app
COPY package.json ./
RUN ls -la /app
#RUN mkdir frontend
#RUN mkdir lstDocs
#RUN mkdir controller
#COPY frontend/package*.json ./frontend
#COPY lstDocs/package*.json ./lstDocs
#COPY controller/index.html ./controller
RUN npm install
#RUN npm run install:front
#RUN npm run install:docs
# Copy package files
COPY . .
# build backend
RUN npm ci
RUN npm run build:docker
# build frontend
RUN npm --prefix frontend ci
RUN npm --prefix frontend run build
###########
# Stage 2 #
###########
# Small final image with only whats needed to run
FROM node:24.12-alpine AS production
# Build the Next.js app
FROM node:24-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
#COPY --from=deps /app/frontend/node_modules ./frontend/node_modules
#COPY --from=deps /app/lstDocs/node_modules ./lstDocs/node_modules
#COPY --from=deps /app/controller/index.html ./controller/index.html
#COPY . ./
RUN npm run build:app
#RUN npm run build:front
#RUN npm run build:docs
# Final stage
FROM node:24-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
#COPY --from=builder /app/frontend/dist ./frontend/dist
#COPY --from=builder /app/lstDocs/build ./lstDocs/build
#COPY --from=deps /app/controller/index.html ./controller/index.html
# Copy package files first to install runtime deps
COPY package*.json ./
# curl install
RUN apk add --no-cache curl
# Only install production dependencies
RUN npm ci --omit=dev
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY --from=build /app/frontend/dist ./frontend/dist
# TODO add in drizzle migrates
ENV RUNNING_IN_DOCKER=true
ENV PORT=3000
EXPOSE 3000
CMD ["node", "dist/index.js"]
# start the app up
CMD ["npm", "run", "start:docker"]
# Add health check
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD curl -f http://localhost:3000/lst/api/stats || exit 1
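The HEALTHCHECK above expects the stats route to answer 200; a minimal sketch of such an endpoint, assuming it only needs to prove liveness (the repo's actual stats route is referenced elsewhere but not shown here):

```ts
// Hypothetical liveness endpoint for the HEALTHCHECK above to probe.
import { Router } from "express";

const r = Router();

r.get("/", (_req, res) => {
  res.status(200).json({
    uptimeSeconds: Math.round(process.uptime()),
    memoryRssMb: Math.round(process.memoryUsage().rss / 1024 / 1024),
  });
});

export default r;
```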

View File

@@ -1,3 +1,50 @@
# lst_v3
# Logistics support tool
The tool that supports us in our everyday alplaprod adventures
> The support tool for ALPLA Prod
## Overview
Quick summary of current rewrite/migration goal.
- **Phase:** Backend rewrite
- **Last updated:** 2026-04-06
---
## Feature Status
| Feature | Description | Status |
|----------|--------------|--------|
| User Authentication | ~~Login~~, ~~Signup~~, API Key | 🟨 In Progress |
| User Profile | ~~Edit profile~~, upload avatar | 🟨 In Progress |
| User Admin | Edit user, create user, remove user, alplaprod user integration | ⏳ Not Started |
| Notifications | ~~Subscribe~~, ~~Create~~, ~~Update~~, ~~Remove~~, Manual Trigger | 🟨 In Progress |
| Datamart | ~~Create~~, ~~Update~~, ~~Run~~, Deactivate | 🟨 In Progress |
| Frontend | Analytics and charts | ⏳ Not Started |
| Docs | Instructions and troubleshooting | ⏳ Not Started |
| One Click Print | Get printers, monitor printers, label process, material process, Special processes | ⏳ Not Started |
| Silo Adjustments | Create, History, Comments | ⏳ Not Started |
| Demand Management | Orders, Forecast, Special Mappings, Create trucks, Load Trucks (tablet scanning) | ⏳ Not Started |
| Open Docks | Integrations | ⏳ Not Started |
| Transport Insight | Integrations | ⏳ Not Started |
| Quality Request Tool | Add Pallet, Monitor for moved, status changes, alerts | ⏳ Not Started |
| Logistics | Consume material, return and print, label info, relocate | ⏳ Not Started |
| EOM | Endpoints, Report Pull for finance | ⏳ Not Started |
| OCME | Custom integration | ⏳ Not Started |
| API Migration | Moving to new REST endpoints | 🟨 In Progress |
| System | Tests, Builds, Updates, Remote Logging, DB Backups, Alerting | ⏳ Not Started |
_Status legend:_
✅ Complete 🟨 In Progress ⏳ Not Started
---
## Setup / Installation
How to run the current version of the app.
```bash
git clone https://git.tuffraid.net/cowch/lst_v3.git
cd lst_v3
npm install
npm run dev

View File

@@ -1,30 +1,56 @@
import { dirname, join } from "node:path";
import { fileURLToPath } from "node:url";
import { toNodeHandler } from "better-auth/node";
import express from "express";
import morgan from "morgan";
import { createLogger } from "./src/logger/logger.controller.js";
import { connectProdSql } from "./src/prodSql/prodSqlConnection.controller.js";
import { setupRoutes } from "./src/routeHandler.routes.js";
import { createLogger } from "./logger/logger.controller.js";
import { setupRoutes } from "./routeHandler.routes.js";
import { auth } from "./utils/auth.utils.js";
import { lstCors } from "./utils/cors.utils.js";
const port = Number(process.env.PORT);
const startApp = async () => {
const createApp = async () => {
const log = createLogger({ module: "system", subModule: "main start" });
const app = express();
let baseUrl = "/";
let baseUrl = "";
// global env that runs only in dev
if (process.env.NODE_ENV?.trim() !== "production") {
app.use(morgan("tiny"));
baseUrl = "/lst";
}
// start the connection to the prod sql server
connectProdSql();
// if we are running in docker let's use this.
if (process.env.RUNNING_IN_DOCKER) {
baseUrl = "/lst";
}
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
// we'll leave this active so we can monitor it to validate
app.use(morgan("dev"));
app.set("trust proxy", true);
app.use(lstCors());
app.all(`${baseUrl}/api/auth/*splat`, toNodeHandler(auth));
app.use(express.json());
setupRoutes(baseUrl, app);
app.listen(port, () => {
log.info(`Listening on port ${port}`);
app.use(
`${baseUrl}/app`,
express.static(join(__dirname, "../frontend/dist")),
);
app.get(`${baseUrl}/app/*splat`, (_, res) => {
res.sendFile(join(__dirname, "../frontend/dist/index.html"));
});
app.all("*foo", (_, res) => {
res.status(400).json({
message:
"You have encountered a route that dose not exist, please check the url and try again",
});
});
log.info("Lst app created");
return { app, baseUrl };
};
startApp();
export default createApp;
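With this refactor, createApp builds and returns the app instead of listening itself; a sketch of the entry point that would consume it (the file name and port fallback are assumptions):

```ts
// Hypothetical server entry consuming the exported createApp.
import createApp from "./app.js";

const port = Number(process.env.PORT ?? 3000);

const { app, baseUrl } = await createApp();
app.listen(port, () => {
  console.log(`LST listening on ${port} at base "${baseUrl || "/"}"`);
});
```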

View File

@@ -0,0 +1,9 @@
import type { Express } from "express";
import login from "./login.route.js";
import register from "./register.route.js";
export const setupAuthRoutes = (baseUrl: string, app: Express) => {
//setup all the routes
app.use(`${baseUrl}/api/authentication/login`, login);
app.use(`${baseUrl}/api/authentication/register`, register);
};

backend/auth/login.route.ts Normal file
View File

@@ -0,0 +1,185 @@
import { APIError } from "better-auth/api";
import { fromNodeHeaders } from "better-auth/node";
import { eq } from "drizzle-orm";
import { Router } from "express";
import z from "zod";
import { db } from "../db/db.controller.js";
import { user } from "../db/schema/auth.schema.js";
import { auth } from "../utils/auth.utils.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
// interface EmailLoginRequest {
// email: string;
// password: string;
// }
// interface LoginResponse {
// redirect: boolean;
// token: string;
// user: {
// name: string;
// email: string;
// emailVerified: boolean;
// image: string | null;
// createdAt: string;
// updatedAt: string;
// role: string;
// banned: boolean;
// banReason: string | null;
// banExpires: string | null;
// username: string;
// displayUsername: string;
// lastLogin: string;
// id: string;
// };
// }
const base = {
password: z.string().min(8, "Password must be at least 8 characters"),
};
const signin = z.union([
z.object({
...base,
email: z.email(),
username: z.undefined(),
}),
z.object({
...base,
username: z.string(),
email: z.undefined(),
}),
]);
const r = Router();
r.post("/", async (req, res) => {
let login: any;
try {
const validated = signin.parse(req.body);
if ("email" in validated) {
login = await auth.api.signInEmail({
body: {
email: validated.email as string,
password: validated.password,
},
headers: fromNodeHeaders(req.headers),
});
}
if ("username" in validated) {
const getEmail = await db
.select({ email: user.email })
.from(user)
.where(eq(user.username, validated.username as string));
if (getEmail.length === 0) {
return apiReturn(res, {
success: false,
level: "error", //connect.success ? "info" : "error",
module: "routes",
subModule: "auth",
message: `${validated.username} does not appear to be a valid username, please try again`,
data: [],
status: 401, //connect.success ? 200 : 400,
});
}
// do the login with email
login = await auth.api.signInEmail({
body: {
email: getEmail[0]?.email as string,
password: validated.password,
},
headers: fromNodeHeaders(req.headers),
asResponse: true,
});
if (login.status === 401) {
return apiReturn(res, {
success: false,
level: "error", //connect.success ? "info" : "error",
module: "routes",
subModule: "auth",
message: `Incorrect username or password, please try again`,
data: [],
status: 401, //connect.success ? 200 : 400,
});
}
login.headers.forEach((value: string, key: string) => {
if (key.toLowerCase() === "set-cookie") {
res.append("set-cookie", value);
} else {
res.setHeader(key, value);
}
});
}
// make sure we update the lastLogin
// if (login?.user?.id) {
// const updated = await db
// .update(user)
// .set({ lastLogin: sql`NOW()` })
// .where(eq(user.id, login.user.id))
// .returning({ lastLogin: user.lastLogin });
// const lastLoginTimestamp = updated[0]?.lastLogin;
// console.log("Updated lastLogin:", lastLoginTimestamp);
// } else
// console.warn("User ID unavailable — skipping lastLogin update");
return apiReturn(res, {
success: true,
level: "info", //connect.success ? "info" : "error",
module: "routes",
subModule: "auth",
message: `Welcome back ${validated.username}`,
data: [login],
status: 200, //connect.success ? 200 : 400,
});
} catch (err) {
if (err instanceof z.ZodError) {
const flattened = z.flattenError(err);
// return res.status(400).json({
// error: "Validation failed",
// details: flattened,
// });
return apiReturn(res, {
success: false,
level: "error", //connect.success ? "info" : "error",
module: "routes",
subModule: "auth",
message: "Validation failed",
data: [flattened.fieldErrors],
status: 400, //connect.success ? 200 : 400,
});
}
if (err instanceof APIError) {
return apiReturn(res, {
success: false,
level: "error", //connect.success ? "info" : "error",
module: "routes",
subModule: "auth",
message: err.message,
data: [err.status],
status: 400, //connect.success ? 200 : 400,
});
}
return apiReturn(res, {
success: false,
level: "error",
module: "routes",
subModule: "auth",
message: "System Error",
data: [err],
status: 400,
});
}
});
export default r;

View File

@@ -0,0 +1,128 @@
import { APIError } from "better-auth";
import { count, eq, sql } from "drizzle-orm";
import { Router } from "express";
import z from "zod";
import { db } from "../db/db.controller.js";
import { user } from "../db/schema/auth.schema.js";
import { auth } from "../utils/auth.utils.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
const r = Router();
const registerSchema = z.object({
email: z.email(),
name: z.string().min(2).max(100),
password: z.string().min(8, "Password must be at least 8 characters"),
username: z
.string()
.min(3)
.max(32)
.regex(/^[a-zA-Z0-9._]+$/, "Only alphanumeric, _, and ."),
displayUsername: z
.string()
.min(2)
.max(100)
.optional()
.describe("if you leave blank it will be the same as your username"),
role: z
.enum(["user"])
.optional()
.describe("What roles are available to use."),
data: z
.record(z.string(), z.unknown())
.optional()
.describe(
"This allows us to add extra fields to the data to parse against",
),
});
r.post("/", async (req, res) => {
// check if we are the first user so we can add as system admin to all modules
const totalUsers = await db.select({ count: count() }).from(user);
const userCount = totalUsers[0]?.count ?? 0;
try {
// validate the body is correct before accepting it
let validated = registerSchema.parse(req.body);
validated = {
...validated,
data: { lastLogin: new Date(Date.now()) },
username: validated.username,
};
// Call Better Auth signUp
const newUser = await auth.api.signUpEmail({
body: validated,
});
// if we have no users yet lets make this new one the admin
if (userCount === 0) {
// make this user an admin
await db
.update(user)
.set({ role: "admin", updatedAt: sql`NOW()` })
.where(eq(user.id, newUser.user.id));
}
apiReturn(res, {
success: true,
level: "info", //connect.success ? "info" : "error",
module: "routes",
subModule: "auth",
message: `${validated.username} was just created`,
data: [newUser],
status: 200, //connect.success ? 200 : 400,
});
} catch (err) {
if (err instanceof z.ZodError) {
const flattened = z.flattenError(err);
// return res.status(400).json({
// error: "Validation failed",
// details: flattened,
// });
return apiReturn(res, {
success: false,
level: "error", //connect.success ? "info" : "error",
module: "routes",
subModule: "auth",
message: "Validation failed",
data: [flattened.fieldErrors],
status: 400, //connect.success ? 200 : 400,
});
}
if (err instanceof APIError) {
return apiReturn(res, {
success: false,
level: "error", //connect.success ? "info" : "error",
module: "routes",
subModule: "auth",
message: err.message,
data: [err.status],
status: 400, //connect.success ? 200 : 400,
});
}
return apiReturn(res, {
success: false,
level: "error", //connect.success ? "info" : "error",
module: "routes",
subModule: "auth",
message: "Internal Server Error creating user",
data: [err],
status: 400, //connect.success ? 200 : 400,
});
}
// apiReturn(res, {
// success: true,
// level: "info", //connect.success ? "info" : "error",
// module: "routes",
// subModule: "auth",
// message: "Testing route",
// data: [],
// status: 200, //connect.success ? 200 : 400,
// });
});
export default r;

View File

@@ -0,0 +1,23 @@
import type sql from "mssql";
const username = "gpviewer";
const password = "gp$$ViewOnly!";
export const gpSqlConfig: sql.config = {
server: `USMCD1VMS011`,
database: `ALPLA`,
user: username,
password: password,
options: {
encrypt: true,
trustServerCertificate: true,
},
requestTimeout: 90000, // how long until we kill the query and fail it
pool: {
max: 20, // Maximum number of connections in the pool
min: 0, // Minimum number of connections in the pool
idleTimeoutMillis: 10000, // How long a connection is allowed to be idle before being released
reapIntervalMillis: 1000, // how often to check for idle resources to destroy
acquireTimeoutMillis: 100000, // How long until a complete timeout happens
},
};
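A sketch of how this config would typically be consumed through mssql's connection pool; the file name and wrapper function are illustrative, not the repo's controller:

```ts
// Illustrative use of gpSqlConfig with mssql's connection pool.
import sql from "mssql";
import { gpSqlConfig } from "./gpSql.config.js"; // assumed file name

const pool = new sql.ConnectionPool(gpSqlConfig);

export const gpQuery = async (query: string) => {
  // connect lazily, reuse the pool across calls
  if (!pool.connected && !pool.connecting) await pool.connect();
  const result = await pool.request().query(query);
  return result.recordset;
};
```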

View File

@@ -5,12 +5,18 @@ import type { Express } from "express";
//const __filename = fileURLToPath(import.meta.url);
// const __dirname = path.dirname(__filename);
import os from "node:os";
import { apiReference } from "@scalar/express-api-reference";
// const port = 3000;
import type { OpenAPIV3_1 } from "openapi-types";
import { cronerActiveJobs } from "../scaler/cronerActiveJobs.spec.js";
import { cronerStatusChange } from "../scaler/cronerStatusChange.spec.js";
import { prodLoginSpec } from "../scaler/login.spec.js";
import { openDockApt } from "../scaler/opendockGetRelease.spec.js";
import { prodRestartSpec } from "../scaler/prodSqlRestart.spec.js";
import { prodStartSpec } from "../scaler/prodSqlStart.spec.js";
import { prodStopSpec } from "../scaler/prodSqlStop.spec.js";
import { prodRegisterSpec } from "../scaler/register.spec.js";
// all the specs
import { statusSpec } from "../scaler/stats.spec.js";
@@ -23,10 +29,12 @@ export const openApiBase: OpenAPIV3_1.Document = {
},
servers: [
{
url: `http://localhost:3000${process.env.NODE_ENV?.trim() !== "production" ? "/lst" : "/"}`,
// TODO: change this to the https:// if we are in production and port if not.
url: `http://${os.hostname()}:3000${process.env.NODE_ENV?.trim() !== "production" ? "/lst" : "/"}`,
description: "Development server",
},
],
components: {
securitySchemes: {
bearerAuth: {
@@ -34,6 +42,22 @@ export const openApiBase: OpenAPIV3_1.Document = {
scheme: "bearer",
bearerFormat: "JWT",
},
ApiKeyAuth: {
type: "apiKey",
description: "API key required for authentication",
name: "api_key",
in: "header",
},
basicAuth: {
type: "http",
scheme: "basic",
description: "Basic authentication using username and password",
},
cookieAuth: {
type: "apiKey",
in: "cookie",
name: "better-auth.session_token",
},
},
// schemas: {
// Error: {
@@ -43,18 +67,54 @@ export const openApiBase: OpenAPIV3_1.Document = {
// message: { type: "string" },
// },
// },
// },
// },.
},
// security: [
// {
// cookieAuth: [],
// basicAuth: [],
// },
// ],
tags: [
// { name: "Health", description: "Health check endpoints" },
// { name: "Printing", description: "Label printing operations" },
// { name: "Silo", description: "Silo management" },
{
name: "Auth",
description:
"Authentication section where you get and create users and api keys",
},
{
name: "System",
description: "All system endpoints that will be available to run",
},
{
name: "Utils",
description: "All routes related to the utilities on the server",
},
{
name: "Open Dock",
description: "All routes related to the opendock on the server",
},
// { name: "TMS", description: "TMS integration" },
],
paths: {}, // Will be populated
};
export const setupApiDocsRoutes = (baseUrl: string, app: Express) => {
// const mergedDatamart = {
// "/api/datamart": {
// ...(cronerActiveJobs["/api/datamart"] ?? {}),
// ...(datamartAddSpec["/api/datamart"] ?? {}),
// ...(datamartUpdateSpec["/api/datamart"] ?? {}),
// },
// "/api/datamart/{name}": getDatamartSpec["/api/datamart/{name}"],
// };
// const mergeUtils = {
// "/api/utils/croner": {
// ...(cronerActiveJobs["/api/utils/croner"] ?? {}),
// },
// "/api/utils/{name}": cronerActiveJobs["/api/utils/{name}"],
// };
const fullSpec = {
...openApiBase,
paths: {
@@ -62,6 +122,12 @@ export const setupApiDocsRoutes = (baseUrl: string, app: Express) => {
...prodStartSpec,
...prodStopSpec,
...prodRestartSpec,
...prodLoginSpec,
...prodRegisterSpec,
//...mergedDatamart,
...cronerActiveJobs,
...cronerStatusChange,
...openDockApt,
// Add more specs here as you build features
},
@@ -75,7 +141,9 @@ export const setupApiDocsRoutes = (baseUrl: string, app: Express) => {
apiReference({
url: `${baseUrl}/api/docs.json`,
theme: "purple",
darkMode: true,
persistAuth: true,
authentication: {
securitySchemes: {
httpBasic: {
@@ -88,6 +156,7 @@ export const setupApiDocsRoutes = (baseUrl: string, app: Express) => {
targetKey: "node",
clientKey: "axios",
},
documentDownloadType: "json",
hideClientButton: true,
hiddenClients: {
@@ -96,7 +165,7 @@ export const setupApiDocsRoutes = (baseUrl: string, app: Express) => {
// Clojure
clojure: ["clj_http"],
// C#
csharp: ["httpclient", "restsharp"],
// csharp: ["httpclient", "restsharp"],
// Dart
dart: ["http"],
// F#

View File

@@ -0,0 +1,300 @@
/**
* each endpoint will be something like
* /api/datamart/{name}?{criteria}
*
* when getting the current queries we will need to map through the available queries we currently have and send back.
* example
*{
* "name": "getopenorders",
* "endpoint": "/api/datamart/getopenorders",
* "description": "Returns open orders based on day count sent over, sDay 15 days in the past, eDay 5 days in the future; can be left empty for these default days",
* "options": "sDay,eDay"
* },
*
* when criteria are passed over we will handle them by counting how many were passed (up to 3) and then deal with each one respectively
*/
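// Example requests (illustrative values; the names come from datamartData.utlis.js):
//   GET /api/datamart/openOrders?startDay=15&endDay=5
//   GET /api/datamart/inventory?includeRunningNumbers=x&lots=x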
import { and, between, inArray, notInArray } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { invHistoricalData } from "../db/schema/historicalInv.schema.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { datamartData } from "./datamartData.utlis.js";
type Data = {
name: string;
options: any;
optionsRequired?: boolean;
howManyOptionsRequired?: number;
};
const lstDbRun = async (data: Data) => {
if (data.options) {
if (data.name === "psiInventory") {
const ids = data.options.articles.split(",").map((id: any) => id.trim());
const whse = data.options.whseToInclude
? data.options.whseToInclude
.split(",")
.map((w: any) => w.trim())
.filter(Boolean)
: [];
const locations = data.options.exludeLanes
? data.options.exludeLanes
.split(",")
.map((l: any) => l.trim())
.filter(Boolean)
: [];
const conditions = [
inArray(invHistoricalData.article, ids),
between(
invHistoricalData.histDate,
data.options.startDate,
data.options.endDate,
),
];
// only add the warehouse condition if there are any whse values
if (whse.length > 0) {
conditions.push(inArray(invHistoricalData.whseId, whse));
}
// locations we don't want in the results
if (locations.length > 0) {
conditions.push(notInArray(invHistoricalData.location, locations));
}
return await db
.select()
.from(invHistoricalData)
.where(and(...conditions));
}
}
return [];
};
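// A minimal sketch of a psiInventory run against the local lst db (illustrative values):
// await lstDbRun({
//   name: "psiInventory",
//   options: {
//     articles: "1001,1002",
//     whseToInclude: "36,41",
//     exludeLanes: "INV",
//     startDate: "2026-04-01",
//     endDate: "2026-04-14",
//   },
// });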
export const runDatamartQuery = async (data: Data) => {
// search the query db for the query by name
const considerLstDBRuns = ["psiInventory"];
if (considerLstDBRuns.includes(data.name)) {
const lstDB = await lstDbRun(data);
return returnFunc({
success: true,
level: "info",
module: "datamart",
subModule: "lstDbRun",
message: `Data for: ${data.name}`,
data: lstDB,
notify: false,
});
}
const sqlQuery = sqlQuerySelector(`datamart.${data.name}`) as SqlQuery;
const getDataMartInfo = datamartData.filter((x) => x.endpoint === data.name);
// const optionsMissing =
// !data.options || Object.keys(data.options).length === 0;
const isValid =
Object.keys(data.options ?? {}).length >=
(getDataMartInfo[0]?.howManyOptionsRequired ?? 0);
if (getDataMartInfo[0]?.optionsRequired && !isValid) {
return returnFunc({
success: false,
level: "error",
module: "datamart",
subModule: "query",
message: `This query is required to have ${getDataMartInfo[0]?.howManyOptionsRequired} option(s) set in order to use it, please add your option(s) data and try again.`,
data: [getDataMartInfo[0].options],
notify: false,
});
}
if (!sqlQuery.success) {
return returnFunc({
success: false,
level: "error",
module: "datamart",
subModule: "query",
message: `Error getting ${data.name} info`,
data: [sqlQuery.message],
notify: false,
});
}
// create the query with no changes just to have it here
let datamartQuery = sqlQuery?.query || "";
// split the criteria by "," and then update the query
if (data.options) {
switch (data.name) {
case "activeArticles":
break;
case "deliveryByDateRange":
datamartQuery = datamartQuery
.replace("[startDate]", `${data.options.startDate}`)
.replace("[endDate]", `${data.options.endDate}`);
break;
case "customerInventory":
datamartQuery = datamartQuery
.replace(
"--and IdAdressen",
`and IdAdressen in (${data.options.customer})`,
)
.replace(
"--and x.IdWarenlager in (0)",
`${data.options.whseToInclude ? `and x.IdWarenlager in (${data.options.whseToInclude})` : `--and x.IdWarenlager in (0)`}`,
);
break;
case "openOrders":
datamartQuery = datamartQuery
.replace("[startDay]", `${data.options.startDay}`)
.replace("[endDay]", `${data.options.endDay}`);
break;
case "inventory":
datamartQuery = datamartQuery
.replaceAll(
"--,l.RunningNumber",
`${data.options.includeRunningNumbers ? `,l.RunningNumber` : `--,l.RunningNumber`}`,
)
.replaceAll(
"--,l.MachineLocation,l.MachineName,l.ProductionLotRunningNumber as lot",
`${data.options.lots ? `,l.MachineLocation,l.MachineName,l.ProductionLotRunningNumber as lot` : `--,l.MachineLocation,l.MachineName,l.ProductionLotRunningNumber as lot`}`,
)
.replaceAll(
"--,l.WarehouseDescription,l.LaneDescription",
`${data.options.locations ? `,l.WarehouseDescription,l.LaneDescription` : `--,l.WarehouseDescription,l.LaneDescription`}`,
);
// adding in a test for historical check.
if (data.options.historical) {
datamartQuery = datamartQuery
.replace(
"--,l.ProductionLotRunningNumber as lot,l.warehousehumanreadableid as warehouseId,l.WarehouseDescription as warehouseDescription,l.lanehumanreadableid as locationId,l.lanedescription as laneDescription",
",l.ProductionLotRunningNumber as lot,l.warehousehumanreadableid as warehouseId,l.WarehouseDescription as warehouseDescription,l.lanehumanreadableid as locationId,l.lanedescription as laneDescription",
)
.replace(
"--,l.ProductionLotRunningNumber,l.warehousehumanreadableid,l.WarehouseDescription,l.lanehumanreadableid,l.lanedescription",
",l.ProductionLotRunningNumber,l.warehousehumanreadableid,l.WarehouseDescription,l.lanehumanreadableid,l.lanedescription",
);
}
break;
case "fakeEDIUpdate":
datamartQuery = datamartQuery.replace(
"--AND h.CustomerHumanReadableId in (0)",
`${data.options.address ? `AND h.CustomerHumanReadableId in (${data.options.address})` : `--AND h.CustomerHumanReadableId in (0)`}`,
);
break;
case "forecast":
datamartQuery = datamartQuery.replace(
"where DeliveryAddressHumanReadableId in ([customers])",
data.options.customers
? `where DeliveryAddressHumanReadableId in (${data.options.customers})`
: "--where DeliveryAddressHumanReadableId in ([customers])",
);
break;
case "activeArticles2":
datamartQuery = datamartQuery.replace(
"and a.HumanReadableId in ([articles])",
data.options.articles
? `and a.HumanReadableId in (${data.options.articles})`
: "--and a.HumanReadableId in ([articles])",
);
break;
case "psiDeliveryData":
datamartQuery = datamartQuery
.replace("[startDate]", `${data.options.startDate}`)
.replace("[endDate]", `${data.options.endDate}`)
.replace(
"and IdArtikelVarianten in ([articles])",
data.options.articles
? `and IdArtikelVarianten in (${data.options.articles})`
: "--and IdArtikelVarianten in ([articles])",
);
break;
case "productionData":
datamartQuery = datamartQuery
.replace("[startDate]", `${data.options.startDate}`)
.replace("[endDate]", `${data.options.endDate}`)
.replace(
"and ArticleHumanReadableId in ([articles])",
data.options.articles
? `and ArticleHumanReadableId in (${data.options.articles})`
: "--and ArticleHumanReadableId in ([articles])",
);
break;
case "psiPlanningData":
datamartQuery = datamartQuery
.replace("[startDate]", `${data.options.startDate}`)
.replace("[endDate]", `${data.options.endDate}`)
.replace(
"and p.IdArtikelvarianten in ([articles])",
data.options.articles
? `and p.IdArtikelvarianten in (${data.options.articles})`
: "--and p.IdArtikelvarianten in ([articles])",
);
break;
default:
return returnFunc({
success: false,
level: "error",
module: "datamart",
subModule: "query",
message: `${data.name} encountered an error as it might not exist in LST, please contact support if this continues to happen`,
data: [sqlQuery.message],
notify: true,
});
}
}
const { data: queryRun, error } = await tryCatch(
prodQuery(datamartQuery, `Running datamart query: ${data.name}`),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "datamart",
subModule: "query",
message: `Data for: ${data.name} encountered an error while trying to get it`,
data: [error],
notify: false,
});
}
if (!queryRun.success) {
return returnFunc({
success: false,
level: "error",
module: "datamart",
subModule: "query",
message: queryRun.message,
data: queryRun.data,
notify: false,
});
}
return returnFunc({
success: true,
level: "info",
module: "datamart",
subModule: "query",
message: `Data for: ${data.name}`,
data: queryRun.data,
notify: false,
});
};

View File

@@ -0,0 +1,60 @@
import type { Express } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { datamartData } from "./datamartData.utlis.js";
import runQuery from "./getDatamart.route.js";
export const setupDatamartRoutes = (baseUrl: string, app: Express) => {
// the sync callback.
// app.get(`${baseUrl}/api/datamart/sync`, async (req, res) => {
// const { time } = req.query;
// const now = new Date();
// const minutes = parseInt(time as string, 10) || 15;
// const cutoff = new Date(now.getTime() - minutes * 60 * 1000);
// const results = await db
// .select()
// .from(datamart)
// .where(time ? gte(datamart.upd_date, cutoff) : sql`true`);
// return apiReturn(res, {
// success: true,
// level: "info",
// module: "datamart",
// subModule: "query",
// message: `All Queries older than ${parseInt(process.env.QUERY_CHECK?.trim() || "15", 10)}min `,
// data: results,
// status: 200,
// });
// });
//setup all the routes
app.use(`${baseUrl}/api/datamart`, runQuery);
// just sending a get on datamart will return all the queries that we can call.
app.get(`${baseUrl}/api/datamart`, async (_, res) => {
// const queries = await db
// .select({
// id: datamart.id,
// name: datamart.name,
// description: datamart.description,
// options: datamart.options,
// version: datamart.version,
// upd_date: datamart.upd_date,
// })
// .from(datamart)
// .where(and(eq(datamart.active, true), eq(datamart.public, true)));
return apiReturn(res, {
success: true,
level: "info",
module: "datamart",
subModule: "query",
message: "All active queries we can run",
data: datamartData,
status: 200,
});
});
};

View File

@@ -0,0 +1,60 @@
/**
* will store and maintain all queries for datamart here.
* this way they can all be easily maintained and updated as we progress with the changes and updates to v3
*
* for options when putting them into the docs we will show examples on how to pull this
*/
export const datamartData = [
{
name: "Active articles",
endpoint: "activeArticles",
description: "returns all active articles for the server with custom data",
options: "",
optionsRequired: false,
},
{
name: "Delivery by date range",
endpoint: "deliveryByDateRange",
description: `Returns all deliveries in the selected date range, IE: 1/1/${new Date(Date.now()).getFullYear()} to 1/31/${new Date(Date.now()).getFullYear()}`,
options: "startDate,endDate",
optionsRequired: true,
howManyOptionsRequired: 2,
},
{
name: "Get Customer Inventory",
endpoint: "customerInventory",
description: `Returns specific customer inventory based on their address ID, IE: 8,12,145. \nWith the option to include specific warehouse IDs, IE: 36,41,5. \nNOTES: *leaving warehouse blank will just pull everything for the customer, Inventory does not include PPOO or INV`,
options: "customer,whseToInclude",
optionsRequired: true,
howManyOptionsRequired: 1,
},
{
name: "Get open order",
endpoint: "openOrders",
description: `Returns open orders based on day count sent over, IE: startDay 15 days in the past, endDay 5 days in the future; can be left empty for these default days`,
options: "startDay,endDay",
optionsRequired: true,
howManyOptionsRequired: 2,
},
{
name: "Get inventory",
endpoint: "inventory",
description: `Returns all inventory, excluding the inv location. Adding an x in one of the options will enable it.`,
options: "includeRunningNumbers,locations,lots",
},
{
name: "Fake EDI Update",
endpoint: "fakeEDIUpdate",
description: `Returns all open orders to correct and resubmit via lst demand mgt; leaving blank will get everything, putting an address only returns the specified address. \nNOTE: only orders that were created via edi will populate here.`,
options: "address",
},
{
name: "Production Data",
endpoint: "productionData",
description: `Returns all production data from the date range, with the option to pass one to many AVs (article variants) to search by.`,
options: "startDate,endDate,articles",
optionsRequired: true,
howManyOptionsRequired: 2,
},
];

View File

@@ -0,0 +1,28 @@
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { runDatamartQuery } from "./datamart.controller.js";
const r = Router();
type Options = {
name: string;
value: string;
};
r.get("/:name", async (req, res) => {
const { name } = req.params;
const options = req.query as Options;
const dataRan = await runDatamartQuery({ name, options });
return apiReturn(res, {
success: dataRan.success,
level: "info",
module: "datamart",
subModule: "query",
message: dataRan.message,
data: dataRan.data,
status: 200,
});
});
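// Example request (illustrative dates): GET {baseUrl}/api/datamart/deliveryByDateRange?startDate=2026-01-01&endDate=2026-01-31
// Query string params land in req.query and are forwarded as the options object.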
export default r;

View File

@@ -0,0 +1,16 @@
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";
const dbURL = `postgres://${process.env.DATABASE_USER}:${process.env.DATABASE_PASSWORD}@${process.env.DATABASE_HOST}:${process.env.DATABASE_PORT}/${process.env.DATABASE_DB}`;
const queryClient = postgres(dbURL, {
max: 10,
idle_timeout: 60,
connect_timeout: 30,
max_lifetime: 1000 * 6 * 5,
onnotice: (n) => {
console.info("PG notice: ", n.message);
},
});
export const db = drizzle({ client: queryClient });
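// Example .env values this connection string expects (illustrative only):
// DATABASE_USER=lst DATABASE_PASSWORD=secret DATABASE_HOST=localhost DATABASE_PORT=5432 DATABASE_DB=lst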

View File

@@ -0,0 +1,6 @@
/**
* while in client mode we will be connected directly to the postgres and do a nightly backup.
* we will only keep the relevant tables, like silo data, inv history, manualPrinting, notifications, printerData, prodlabels, quality request, rfid tags, roles, serverData, ...etc
* keeping only the last 7 backups
*
*/
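// A minimal sketch of what the nightly job could look like (assumptions: pg_dump is on PATH,
// a BACKUP_DIR env var exists, dbURL is the connection string from db.controller, and
// createCronJob comes from croner.utils; none of this is wired up yet):
// import { execFile } from "node:child_process";
// import { createCronJob } from "../utils/croner.utils.js";
// createCronJob("nightlyBackup", "0 0 2 * * *", () =>
//   execFile("pg_dump", ["--dbname", dbURL, "--file", `${process.env.BACKUP_DIR}/lst-${Date.now()}.sql`]),
// );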

View File

@@ -0,0 +1,72 @@
import { createLogger } from "../logger/logger.controller.js";
import { delay } from "../utils/delay.utils.js";
import { db } from "./db.controller.js";
type DBCount = {
count: string;
};
const tableMap = {
logs: "logs",
jobs: "job_audit_log",
opendockApt: "opendock_apt",
} as const;
type CleanupTable = keyof typeof tableMap;
/**
* We will clean up the db based on age.
* @param name database to run the cleanup on
* @param daysToKeep optional default will be 90 days
*/
export const dbCleanup = async (name: CleanupTable, daysToKeep?: number) => {
const log = createLogger({ module: "db", subModule: "cleanup" });
// TODO: send backup of this to another server, via post or something; maybe have to reduce the limit but we'll tackle that later.
if (!daysToKeep) {
daysToKeep = 90;
}
const limit = 1000;
const delayTime = 250;
let rowsDeleted: number;
const dbCount = (await db.execute(
`select count(*) from public.${tableMap[name]} WHERE created_at < NOW() - INTERVAL '${daysToKeep} days'`,
)) as DBCount[];
const loopCount = Math.ceil(
parseInt(dbCount[0]?.count ?? `${limit}`, 10) / limit,
);
if (parseInt(dbCount[0]?.count ?? `${limit}`, 10) > 1) {
log.info(
`Table clean up for: ${name}, rows older than ${daysToKeep} day(s) will be removed. There are ${loopCount} loops to complete, approx time: ${((loopCount * delayTime) / 1000 / 60).toFixed(2)} min(s).`,
);
} else {
log.info(`Table clean up for: ${name}, Currently has nothing to clean up.`);
return;
}
do {
// cleanup logs
const deleted = await db.execute(`
DELETE FROM public.${tableMap[name]}
WHERE id IN (
SELECT id
FROM public.${tableMap[name]}
WHERE created_at < NOW() - INTERVAL '${daysToKeep} days'
ORDER BY created_at
LIMIT ${limit}
)
RETURNING id;
`);
rowsDeleted = deleted.length;
if (rowsDeleted > 0) {
await delay(delayTime);
}
} while (rowsDeleted === limit);
log.info(`Table clean up for: ${name}, Has completed.`);
};
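// Example usage (e.g. from a croner job): keep only the last 30 days of logs.
// await dbCleanup("logs", 30);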

View File

@@ -0,0 +1,39 @@
import {
integer,
jsonb,
pgTable,
text,
timestamp,
uuid,
} from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type { z } from "zod";
export const alplaPurchaseHistory = pgTable("alpla_purchase_history", {
id: uuid("id").defaultRandom().primaryKey(),
apo: integer("apo"),
revision: integer("revision"),
confirmed: integer("confirmed"),
status: integer("status"),
statusText: text("status_text"),
journalNum: integer("journal_num"),
add_date: timestamp("add_date").defaultNow(),
add_user: text("add_user"),
upd_user: text("upd_user"),
upd_date: timestamp("upd_date").defaultNow(),
remark: text("remark"),
approvedStatus: text("approved_status").default("new"),
position: jsonb("position").default([]),
createdAt: timestamp("created_at").defaultNow(),
updatedAt: timestamp("updated_at").defaultNow(),
});
export const alplaPurchaseHistorySchema =
createSelectSchema(alplaPurchaseHistory);
export const newAlplaPurchaseHistorySchema =
createInsertSchema(alplaPurchaseHistory);
export type AlplaPurchaseHistory = z.infer<typeof alplaPurchaseHistorySchema>;
export type NewAlplaPurchaseHistory = z.infer<
typeof newAlplaPurchaseHistorySchema
>;

View File

@@ -0,0 +1,41 @@
import {
index,
integer,
jsonb,
pgTable,
text,
timestamp,
uuid,
} from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type { z } from "zod";
export const jobAuditLog = pgTable(
"job_audit_log",
{
id: uuid("id").defaultRandom().primaryKey(),
jobName: text("job_name"),
startedAt: timestamp("start_at"),
finishedAt: timestamp("finished_at"),
durationMs: integer("duration_ms"),
status: text("status"), //success | error
errorMessage: text("error_message"),
errorStack: text("error_stack"),
metadata: jsonb("meta_data"),
createdAt: timestamp("created_at").defaultNow(),
},
(table) => {
return {
cleanupIdx: index("idx_job_audit_logs_cleanup").on(
table.startedAt,
table.id,
),
};
},
);
export const jobAuditLogSchema = createSelectSchema(jobAuditLog);
export const newJobAuditLogSchema = createInsertSchema(jobAuditLog);
export type JobAuditLog = z.infer<typeof jobAuditLogSchema>;
export type NewJobAuditLog = z.infer<typeof newJobAuditLogSchema>;

View File

@@ -0,0 +1,156 @@
import { relations } from "drizzle-orm";
import {
boolean,
index,
integer,
pgTable,
text,
timestamp,
} from "drizzle-orm/pg-core";
export const user = pgTable("user", {
id: text("id").primaryKey(),
name: text("name").notNull(),
email: text("email").notNull().unique(),
emailVerified: boolean("email_verified").default(false).notNull(),
image: text("image"),
createdAt: timestamp("created_at").defaultNow().notNull(),
updatedAt: timestamp("updated_at")
.defaultNow()
.$onUpdate(() => /* @__PURE__ */ new Date())
.notNull(),
role: text("role"),
banned: boolean("banned").default(false),
banReason: text("ban_reason"),
banExpires: timestamp("ban_expires"),
username: text("username").unique(),
displayUsername: text("display_username"),
});
export const session = pgTable(
"session",
{
id: text("id").primaryKey(),
expiresAt: timestamp("expires_at").notNull(),
token: text("token").notNull().unique(),
createdAt: timestamp("created_at").defaultNow().notNull(),
updatedAt: timestamp("updated_at")
.$onUpdate(() => /* @__PURE__ */ new Date())
.notNull(),
ipAddress: text("ip_address"),
userAgent: text("user_agent"),
userId: text("user_id")
.notNull()
.references(() => user.id, { onDelete: "cascade" }),
impersonatedBy: text("impersonated_by"),
},
(table) => [index("session_userId_idx").on(table.userId)],
);
export const account = pgTable(
"account",
{
id: text("id").primaryKey(),
accountId: text("account_id").notNull(),
providerId: text("provider_id").notNull(),
userId: text("user_id")
.notNull()
.references(() => user.id, { onDelete: "cascade" }),
accessToken: text("access_token"),
refreshToken: text("refresh_token"),
idToken: text("id_token"),
accessTokenExpiresAt: timestamp("access_token_expires_at"),
refreshTokenExpiresAt: timestamp("refresh_token_expires_at"),
scope: text("scope"),
password: text("password"),
createdAt: timestamp("created_at").defaultNow().notNull(),
updatedAt: timestamp("updated_at")
.$onUpdate(() => /* @__PURE__ */ new Date())
.notNull(),
},
(table) => [index("account_userId_idx").on(table.userId)],
);
export const verification = pgTable(
"verification",
{
id: text("id").primaryKey(),
identifier: text("identifier").notNull(),
value: text("value").notNull(),
expiresAt: timestamp("expires_at").notNull(),
createdAt: timestamp("created_at").defaultNow().notNull(),
updatedAt: timestamp("updated_at")
.defaultNow()
.$onUpdate(() => /* @__PURE__ */ new Date())
.notNull(),
},
(table) => [index("verification_identifier_idx").on(table.identifier)],
);
export const jwks = pgTable("jwks", {
id: text("id").primaryKey(),
publicKey: text("public_key").notNull(),
privateKey: text("private_key").notNull(),
createdAt: timestamp("created_at").notNull(),
expiresAt: timestamp("expires_at"),
});
export const apikey = pgTable(
"apikey",
{
id: text("id").primaryKey(),
name: text("name"),
start: text("start"),
prefix: text("prefix"),
key: text("key").notNull(),
userId: text("user_id")
.notNull()
.references(() => user.id, { onDelete: "cascade" }),
refillInterval: integer("refill_interval"),
refillAmount: integer("refill_amount"),
lastRefillAt: timestamp("last_refill_at"),
enabled: boolean("enabled").default(true),
rateLimitEnabled: boolean("rate_limit_enabled").default(true),
rateLimitTimeWindow: integer("rate_limit_time_window").default(86400000),
rateLimitMax: integer("rate_limit_max").default(10),
requestCount: integer("request_count").default(0),
remaining: integer("remaining"),
lastRequest: timestamp("last_request"),
expiresAt: timestamp("expires_at"),
createdAt: timestamp("created_at").notNull(),
updatedAt: timestamp("updated_at").notNull(),
permissions: text("permissions"),
metadata: text("metadata"),
},
(table) => [
index("apikey_key_idx").on(table.key),
index("apikey_userId_idx").on(table.userId),
],
);
export const userRelations = relations(user, ({ many }) => ({
sessions: many(session),
accounts: many(account),
apikeys: many(apikey),
}));
export const sessionRelations = relations(session, ({ one }) => ({
user: one(user, {
fields: [session.userId],
references: [user.id],
}),
}));
export const accountRelations = relations(account, ({ one }) => ({
user: one(user, {
fields: [account.userId],
references: [user.id],
}),
}));
export const apikeyRelations = relations(apikey, ({ one }) => ({
user: one(user, {
fields: [apikey.userId],
references: [user.id],
}),
}));

View File

@@ -0,0 +1,31 @@
import {
boolean,
integer,
pgTable,
text,
timestamp,
uuid,
} from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type { z } from "zod";
export const datamart = pgTable("datamart", {
id: uuid("id").defaultRandom().primaryKey(),
name: text("name").unique(),
description: text("description").notNull(),
query: text("query"),
version: integer("version").default(1).notNull(),
active: boolean("active").default(true),
options: text("options").default(""),
public: boolean("public_access").default(false),
add_date: timestamp("add_date").defaultNow(),
add_user: text("add_user").default("lst-system"),
upd_date: timestamp("upd_date").defaultNow(),
upd_user: text("upd_user").default("lst-system"),
});
export const datamartSchema = createSelectSchema(datamart);
export const newDataMartSchema = createInsertSchema(datamart);
export type Datamart = z.infer<typeof datamartSchema>;
export type NewDatamart = z.infer<typeof newDataMartSchema>;

View File

@@ -0,0 +1,30 @@
import { date, pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type z from "zod";
export const invHistoricalData = pgTable("inv_historical_data", {
inv: uuid("id").defaultRandom().primaryKey(),
histDate: date("hist_date").notNull(), // this date should always be yesterday when we post it.
plantToken: text("plant_token"),
article: text("article").notNull(),
articleDescription: text("article_description").notNull(),
materialType: text("material_type"),
total_QTY: text("total_QTY"),
available_QTY: text("available_QTY"),
coa_QTY: text("coa_QTY"),
held_QTY: text("held_QTY"),
consignment_QTY: text("consignment_qty"),
lot_Number: text("lot_number"),
locationId: text("location_id"),
location: text("location"),
whseId: text("whse_id").default(""),
whseName: text("whse_name").default("missing whseName"),
upd_user: text("upd_user").default("lst-system"),
upd_date: timestamp("upd_date").defaultNow(),
});
export const invHistoricalDataSchema = createSelectSchema(invHistoricalData);
export const newInvHistoricalDataSchema = createInsertSchema(invHistoricalData);
export type InvHistoricalData = z.infer<typeof invHistoricalDataSchema>;
export type NewInvHistoricalData = z.infer<typeof newInvHistoricalDataSchema>;

View File

@@ -0,0 +1,28 @@
import {
boolean,
jsonb,
pgTable,
text,
timestamp,
uuid,
} from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type { z } from "zod";
export const logs = pgTable("logs", {
id: uuid("id").defaultRandom().primaryKey(),
level: text("level"),
module: text("module").notNull(),
subModule: text("subModule"),
message: text("message").notNull(),
stack: jsonb("stack").default([]),
checked: boolean("checked").default(false),
hostname: text("hostname"),
createdAt: timestamp("created_at").defaultNow(),
});
export const logSchema = createSelectSchema(logs);
export const newLogSchema = createInsertSchema(logs);
export type Log = z.infer<typeof logSchema>;
export type NewLog = z.infer<typeof newLogSchema>;

View File

@@ -0,0 +1,29 @@
import {
boolean,
jsonb,
pgTable,
text,
uniqueIndex,
uuid,
} from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type z from "zod";
export const notifications = pgTable(
"notifications",
{
id: uuid("id").defaultRandom().primaryKey(),
name: text("name").notNull(),
description: text("description").notNull(),
active: boolean("active").default(false),
interval: text("interval").default("5"),
options: jsonb("options").default([]),
},
(table) => [uniqueIndex("notify_name").on(table.name)],
);
export const notificationSchema = createSelectSchema(notifications);
export const newNotificationSchema = createInsertSchema(notifications);
export type Notification = z.infer<typeof notificationSchema>;
export type NewNotification = z.infer<typeof newNotificationSchema>;

View File

@@ -0,0 +1,30 @@
import { pgTable, text, unique, uuid } from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type z from "zod";
import { user } from "./auth.schema.js";
import { notifications } from "./notifications.schema.js";
export const notificationSub = pgTable(
"notification_sub",
{
id: uuid("id").defaultRandom().primaryKey(),
userId: text("user_id")
.notNull()
.references(() => user.id, { onDelete: "cascade" }),
notificationId: uuid("notification_id")
.notNull()
.references(() => notifications.id, { onDelete: "cascade" }),
emails: text("emails").array().default([]),
},
(table) => ({
userNotificationUnique: unique(
"notification_sub_user_notification_unique",
).on(table.userId, table.notificationId),
}),
);
export const notificationSubSchema = createSelectSchema(notificationSub);
export const newNotificationSubSchema = createInsertSchema(notificationSub);
export type NotificationSub = z.infer<typeof notificationSubSchema>;
export type NewNotificationSub = z.infer<typeof newNotificationSubSchema>;

View File

@@ -0,0 +1,35 @@
import {
index,
integer,
jsonb,
pgTable,
text,
timestamp,
uuid,
} from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type { z } from "zod";
export const opendockApt = pgTable(
"opendock_apt",
{
id: uuid("id").defaultRandom().primaryKey(),
release: integer("release").notNull().unique(),
openDockAptId: text("open_dock_apt_id").notNull(),
appointment: jsonb("appointment").notNull().default([]),
upd_date: timestamp("upd_date").notNull().defaultNow(),
createdAt: timestamp("created_at").notNull().defaultNow(),
},
(table) => ({
releaseIdx: index("opendock_apt_release_idx").on(table.release),
openDockAptIdIdx: index("opendock_apt_opendock_id_idx").on(
table.openDockAptId,
),
}),
);
export const opendockAptSchema = createSelectSchema(opendockApt);
export const newOpendockAptSchema = createInsertSchema(opendockApt);
export type OpendockApt = z.infer<typeof opendockAptSchema>;
export type NewOpendockApt = z.infer<typeof newOpendockAptSchema>;

View File

@@ -0,0 +1,11 @@
import { integer, pgTable, text, timestamp } from "drizzle-orm/pg-core";
export const printerLog = pgTable("printer_log", {
id: integer().primaryKey().generatedAlwaysAsIdentity(),
name: text("name"),
ip: text("ip"),
printerSN: text("printer_sn"),
condition: text("condition").notNull(),
message: text("message"),
createdAt: timestamp("created_at").defaultNow(),
});

View File

@@ -0,0 +1,44 @@
import {
boolean,
integer,
jsonb,
pgTable,
text,
timestamp,
uniqueIndex,
uuid,
} from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type z from "zod";
export const printerData = pgTable(
"printer_data",
{
id: uuid("id").defaultRandom().primaryKey(),
humanReadableId: text("humanReadable_id").unique().notNull(),
name: text("name").notNull(),
ipAddress: text("ipAddress"),
port: integer("port"),
status: text("status"),
statusText: text("statusText"),
printerSN: text("printer_sn"),
lastTimePrinted: timestamp("last_time_printed").notNull().defaultNow(),
assigned: boolean("assigned").default(false),
remark: text("remark"),
printDelay: integer("printDelay").default(90),
processes: jsonb("processes").default([]),
printDelayOverride: boolean("print_delay_override").default(false), // mainly for when the lot time is active but we want to override this single line for some reason
add_Date: timestamp("add_Date").defaultNow(),
upd_date: timestamp("upd_date").defaultNow(),
},
(table) => [
//uniqueIndex("emailUniqueIndex").on(sql`lower(${table.email})`),
uniqueIndex("printer_id").on(table.humanReadableId),
],
);
export const printerSchema = createSelectSchema(printerData);
export const newPrinterSchema = createInsertSchema(printerData);
export type Printer = z.infer<typeof printerSchema>;
export type NewPrinter = z.infer<typeof newPrinterSchema>;

View File

@@ -0,0 +1,53 @@
import {
boolean,
integer,
jsonb,
pgEnum,
pgTable,
text,
timestamp,
uniqueIndex,
uuid,
} from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import { z } from "zod";
export const settingType = pgEnum("setting_type", [
"feature", // when changed deals with triggering the croner related to this
"system", // when changed fires a system restart but this should be rare and all these settings should be in the env
"standard", // will be effected by the next process, either croner or manual trigger
]);
export const settings = pgTable(
"settings",
{
id: uuid("settings_id").defaultRandom().primaryKey(),
name: text("name").notNull(),
value: text("value").notNull(), // this is used in conjunction with active, only needed if the setting isn't a bool
description: text("description"),
moduleName: text("moduleName"), // what part of lst does it belong to; this is used to split the settings out later
active: boolean("active").default(true),
roles: jsonb("roles").$type<string[]>().notNull().default(["systemAdmin"]), // role or roles that can see this; goes along with the moduleName, you need to have x role in the module to see this setting.
settingType: settingType(),
seedVersion: integer("seed_version").default(1), // this is intended for if we want to update the settings.
add_User: text("add_User").default("LST_System").notNull(),
add_Date: timestamp("add_Date").defaultNow(),
upd_user: text("upd_User").default("LST_System").notNull(),
upd_date: timestamp("upd_date").defaultNow(),
},
(table) => [
// uniqueIndex('emailUniqueIndex').on(sql`lower(${table.email})`),
uniqueIndex("name").on(table.name),
],
);
export const settingSchema = createSelectSchema(settings);
export const newSettingSchema = createInsertSchema(settings, {
name: z.string().min(3, {
message: "The name of the setting must be longer than 3 letters",
}),
});
export type Setting = z.infer<typeof settingSchema>;
export type NewSetting = z.infer<typeof newSettingSchema>;
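// Example: validating a new setting before insert (illustrative values only).
// newSettingSchema.parse({
//   name: "ocpSync", // anything shorter than 3 letters fails with the custom message
//   value: "true",
//   settingType: "feature",
// });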

View File

@@ -0,0 +1,10 @@
import type { InferSelectModel } from "drizzle-orm";
import { integer, pgTable, text, timestamp } from "drizzle-orm/pg-core";
export const serverStats = pgTable("stats", {
id: text("id").primaryKey().default("serverStats"),
build: integer("build").notNull().default(1),
lastUpdate: timestamp("last_update").defaultNow(),
});
export type ServerStats = InferSelectModel<typeof serverStats>;

View File

@@ -0,0 +1,17 @@
import { type Express, Router } from "express";
import { requireAuth } from "../middleware/auth.middleware.js";
import restart from "./gpSqlRestart.route.js";
import start from "./gpSqlStart.route.js";
import stop from "./gpSqlStop.route.js";
export const setupGPSqlRoutes = (baseUrl: string, app: Express) => {
//setup all the routes
// Apply auth to entire router
const router = Router();
router.use(requireAuth);
router.use(start);
router.use(stop);
router.use(restart);
app.use(`${baseUrl}/api/system/gpSql`, router);
};

View File

@@ -0,0 +1,148 @@
import sql from "mssql";
import { gpSqlConfig } from "../configs/gpSql.config.js";
import { createLogger } from "../logger/logger.controller.js";
import { checkHostnamePort } from "../utils/checkHost.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
export let pool2: sql.ConnectionPool;
export let connected: boolean = false;
export let reconnecting = false;
// start the delay out as 2 seconds
let delayStart = 2000;
let attempt = 0;
const maxAttempts = 10;
export const connectGPSql = async () => {
const serverUp = await checkHostnamePort(`USMCD1VMS011:1433`);
if (!serverUp) {
// we will try to reconnect
connected = false;
void reconnectToSql(); // fire the reconnect loop without awaiting it
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "GP server is offline or unreachable.",
});
}
// if someone hits restart from the api while we are already connected we want to kick back and say no
if (connected) {
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "The Sql server is already connected.",
});
}
// try to connect to the sql server
try {
pool2 = new sql.ConnectionPool(gpSqlConfig);
await pool2.connect();
connected = true;
return returnFunc({
success: true,
level: "info",
module: "system",
subModule: "db",
message: `${gpSqlConfig.server} is connected to ${gpSqlConfig.database}`,
data: [],
notify: false,
});
} catch (error) {
void reconnectToSql(); // fire the reconnect loop without awaiting it
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "Failed to connect to the prod sql server.",
data: [error],
notify: false,
});
}
};
export const closePool = async () => {
if (!connected) {
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "There is no connection to the prod server currently.",
});
}
try {
await pool2.close();
connected = false;
return returnFunc({
success: true,
level: "info",
module: "system",
subModule: "db",
message: "The sql connection has been closed.",
});
} catch (error) {
connected = false;
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "There was an error closing the sql connection",
data: [error],
});
}
};
export const reconnectToSql = async () => {
const log = createLogger({
module: "system",
subModule: "db",
});
if (reconnecting) return;
//set reconnecting to true while we try to reconnect
reconnecting = true;
while (!connected && attempt < maxAttempts) {
attempt++;
log.info(
`Reconnect attempt ${attempt}/${maxAttempts} in ${delayStart / 1000}s ...`,
);
await new Promise((res) => setTimeout(res, delayStart));
const serverUp = await checkHostnamePort(`${process.env.PROD_SERVER}:1433`);
if (!serverUp) {
delayStart = Math.min(delayStart * 2, 30000); // exponential backoff, capped at 30s
continue;
}
try {
pool2 = await sql.connect(gpSqlConfig);
reconnecting = false;
connected = true;
log.info(`${gpSqlConfig.server} is connected to ${gpSqlConfig.database}`);
} catch (error) {
delayStart = Math.min(delayStart * 2, 30000);
log.error({ error }, "Failed to reconnect to the prod sql server.");
}
}
if (!connected && attempt >= maxAttempts) {
log.error(
{ notify: true },
"Max reconnect attempts reached on the prodSql server. Stopping retries.",
);
reconnecting = false;
// TODO: exit alert someone here
}
};
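// On repeated failures the wait progression (delayStart=2000, cap 30000) is: 2s, 4s, 8s, 16s, 30s, 30s, ... for up to maxAttempts tries.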

View File

@@ -0,0 +1,78 @@
import { returnFunc } from "../utils/returnHelper.utils.js";
import { connected, pool2 } from "./gpSqlConnection.controller.js";
interface SqlError extends Error {
code?: string;
originalError?: {
info?: { message?: string };
};
}
/**
* Run a GP query.
* Just pass over the query as a string and the name of the query.
* The query should look like the example below.
* * select * from AlplaPROD_test1.dbo.table
* Always use test1; it is swapped for the real plant token (PROD_PLANT_TOKEN) before the query runs.
*/
export const gpQuery = async (queryToRun: string, name: string) => {
if (!connected) {
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "gpSql",
message: `${process.env.PROD_PLANT_TOKEN} is offline or attempting to reconnect`,
data: [],
notify: false,
});
}
//change to the correct server
const query = queryToRun.replaceAll(
"test1",
`${process.env.PROD_PLANT_TOKEN}`,
);
try {
const result = await pool2.request().query(query);
return {
success: true,
message: `Query results for: ${name}`,
data: result.recordset ?? [],
};
} catch (error: unknown) {
const err = error as SqlError;
if (err.code === "ETIMEOUT") {
return returnFunc({
success: false,
module: "system",
subModule: "gpSql",
level: "error",
message: `${name} did not run due to a timeout.`,
notify: false,
data: [],
});
}
if (err.code === "EREQUEST") {
return returnFunc({
success: false,
module: "system",
subModule: "gpSql",
level: "error",
message: `${name} encountered an error ${err.originalError?.info?.message || "undefined error"}`,
data: [],
});
}
return returnFunc({
success: false,
module: "system",
subModule: "gpSql",
level: "error",
message: `${name} encountered an unknown error.`,
data: [],
});
}
};
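// Example (hypothetical table name; test1 is swapped for PROD_PLANT_TOKEN before the query runs):
// const res = await gpQuery("select top 10 * from AlplaPROD_test1.dbo.SomeTable", "sample query");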

View File

@@ -0,0 +1,29 @@
import { readFileSync } from "node:fs";
export type SqlGPQuery = {
query: string;
success: boolean;
message: string;
};
export const sqlGpQuerySelector = (name: string) => {
try {
const queryFile = readFileSync(
new URL(`../gpSql/queries/${name}.sql`, import.meta.url),
"utf8",
);
return {
success: true,
message: `Query for: ${name}`,
query: queryFile,
};
} catch (e) {
console.error(e);
return {
success: false,
message:
"Error getting the query file, please make sure you have the correct name.",
};
}
};
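// Example (hypothetical file ../gpSql/queries/openReqs.sql):
// const q = sqlGpQuerySelector("openReqs");
// if (q.success && q.query) await gpQuery(q.query, "openReqs");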

View File

@@ -0,0 +1,23 @@
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { closePool, connectGPSql } from "./gpSqlConnection.controller.js";
const r = Router();
r.post("/restart", async (_, res) => {
await closePool();
await new Promise((resolve) => setTimeout(resolve, 2000));
const connect = await connectGPSql();
apiReturn(res, {
success: connect.success,
level: connect.success ? "info" : "error",
module: "routes",
subModule: "prodSql",
message: "Sql Server has been restarted",
data: connect.data,
status: connect.success ? 200 : 400,
});
});
export default r;

View File

@@ -0,0 +1,20 @@
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { connectGPSql } from "./gpSqlConnection.controller.js";
const r = Router();
r.post("/start", async (_, res) => {
const connect = await connectGPSql();
apiReturn(res, {
success: connect.success,
level: connect.success ? "info" : "error",
module: "routes",
subModule: "prodSql",
message: connect.message,
data: connect.data,
status: connect.success ? 200 : 400,
});
});
export default r;

View File

@@ -0,0 +1,20 @@
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { closePool } from "./gpSqlConnection.controller.js";
const r = Router();
r.post("/stop", async (_, res) => {
const connect = await closePool();
apiReturn(res, {
success: connect.success,
level: connect.success ? "info" : "error",
module: "routes",
subModule: "prodSql",
message: connect.message,
data: connect.data,
status: connect.success ? 200 : 400,
});
});
export default r;

View File

@@ -0,0 +1,39 @@
USE [ALPLA]
SELECT Distinct r.[POPRequisitionNumber] as req,
r.[ApprovalStatus] as approvalStatus,
r.[Requested By] requestedBy,
format(t.[Created Date], 'yyyy-MM-dd') as createdAt,
format(r.[Requisition Date], 'MM/dd/yyyy') as expectedDate,
r.[Requisition Amount] as glAccount,
case when r.[Account Segment 2] is null or r.[Account Segment 2] = '' then '999' else cast(r.[Account Segment 2] as varchar) end as plant
,t.Status as status
,t.[Document Status] as docStatus
,t.[Workflow Status] as reqState
,CASE
WHEN [Workflow Status] = 'Completed'
THEN 'Pending APO conversion'
WHEN [Workflow Status] = 'Pending User Action'
AND r.[ApprovalStatus] = 'Pending Approval'
THEN 'Pending plant approver'
WHEN [Workflow Status] = ''
AND r.[ApprovalStatus] = 'Not Submitted'
THEN 'Req not submitted'
ELSE 'Unknown reason'
END AS approvedStatus
FROM [dbo].[PORequisitions] r (nolock)
left join
[dbo].[PurchaseRequisitions] as t (nolock) on
t.[Requisition Number] = r.[POPRequisitionNumber]
--where ApprovalStatus = 'Pending Approval'
--and [Account Segment 2] = 80
where r.POPRequisitionNumber in ([reqsToCheck])
Order By r.POPRequisitionNumber
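-- [reqsToCheck] is expected to be swapped at runtime for a comma separated list of requisition numbers, e.g. 12345,12346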

View File

@@ -0,0 +1,69 @@
import build from "pino-abstract-transport";
import { db } from "../db/db.controller.js";
import { logs } from "../db/schema/logs.schema.js";
import { tryCatch } from "../utils/trycatch.utils.js";
const pinoLogLevels: Record<number, string> = {
10: "trace",
20: "debug",
30: "info",
40: "warn",
50: "error",
60: "fatal",
};
// Create a custom transport function
export default async function () {
//const {username, service, level, msg, ...extra} = log;
try {
return build(async (source) => {
for await (const obj of source) {
// convert the level to its name to make it easier to find later :P
const levelName = pinoLogLevels[obj.level] || "unknown";
const res = await tryCatch(
db.insert(logs).values({
level: levelName,
module: obj?.module?.toLowerCase(),
subModule: obj?.subModule?.toLowerCase(),
hostname: obj?.hostname?.toLowerCase(),
message: obj.msg,
stack: obj?.stack,
}),
);
if (res.error) {
console.error(res.error);
}
}
});
} catch (err) {
console.error("Error inserting log into database:", err);
}
}
// export const dbStream = {
// write: async (logString: string) => {
// try {
// const obj = JSON.parse(logString);
// const levelName = pinoLogLevels[obj.level] || "unknown";
// const res = await tryCatch(
// db.insert(logs).values({
// level: levelName,
// module: obj?.module?.toLowerCase(),
// subModule: obj?.subModule?.toLowerCase(),
// hostname: obj?.hostname?.toLowerCase(),
// message: obj.msg,
// stack: obj?.stack,
// }),
// );
// if (res.error) {
// console.error("DB log error:", res.error);
// }
// } catch (err) {
// console.error("Error parsing/inserting log:", err);
// }
// },
// };

View File

@@ -0,0 +1,95 @@
import { Writable } from "node:stream";
import pino, { type Logger } from "pino";
import { db } from "../db/db.controller.js";
import { logs } from "../db/schema/logs.schema.js";
import { emitToRoom } from "../socket.io/roomEmitter.socket.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { notifySystemIssue } from "./logger.notify.js";
//import build from "pino-abstract-transport";
export const logLevel = process.env.LOG_LEVEL || "info";
const pinoLogLevels: Record<number, string> = {
10: "trace",
20: "debug",
30: "info",
40: "warn",
50: "error",
60: "fatal",
};
// ✅ Custom DB writable stream
const dbStream = new Writable({
objectMode: true,
async write(chunk, _enc, callback) {
try {
const obj = JSON.parse(chunk.toString());
const levelName = pinoLogLevels[obj.level] || "unknown";
const res = await tryCatch(
db
.insert(logs)
.values({
level: levelName,
module: obj?.module?.toLowerCase(),
subModule: obj?.subModule?.toLowerCase(),
hostname: obj?.hostname?.toLowerCase(),
message: obj.msg,
stack: obj?.stack,
})
.returning(),
);
if (res.error) {
console.error(res.error);
}
if (obj.notify) {
notifySystemIssue(obj);
}
if (obj.room) {
emitToRoom(obj.room, res.data ? res.data[0] : obj);
}
emitToRoom("logs", res.data ? res.data[0] : obj);
callback();
} catch (err) {
console.error("DB log insert error:", err);
callback();
}
},
});
const rootLogger: Logger = pino(
{
level: logLevel,
redact: { paths: ["email", "password"], remove: true },
},
pino.multistream([
{
level: logLevel,
stream: pino.transport({
target: "pino-pretty",
options: {
colorize: true,
singleLine: true,
},
}),
},
{
level: logLevel,
stream: dbStream,
},
]),
);
/**
* Create a child logger with the given bindings.
*
* example bindings to put in as a reference:
* room: logs | labels | etc
*/
export const createLogger = (bindings: Record<string, unknown>): Logger => {
return rootLogger.child(bindings);
};
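// Example: a child logger whose entries stream to the console, the db, and the "logs" socket room.
// const log = createLogger({ module: "datamart", subModule: "query" });
// log.error({ stack: [someError], notify: true }, "query failed"); // notify: true also emails system admins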

View File

@@ -0,0 +1,44 @@
/**
* For all logging that has notify set to true we'll send an email to the system admins; if we have a discord webhook set we'll send it there as well
*/
import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { user } from "../db/schema/auth.schema.js";
import { sendEmail } from "../utils/sendEmail.utils.js";
type NotifyData = {
module: string;
submodule: string;
hostname: string;
msg: string;
stack: unknown[];
};
export const notifySystemIssue = async (data: NotifyData) => {
// build the email out
const formattedError = Array.isArray(data.stack)
? data.stack.map((e: any) => e.error || e)
: data.stack;
const sysAdmin = await db
.select()
.from(user)
.where(eq(user.role, "systemAdmin"));
await sendEmail({
email: sysAdmin.map((r) => r.email).join("; ") || "cowchmonkey@gmail.com", // fall back when no system admins are found (join returns "" on an empty list, so ?? would never fire)
subject: `${data.hostname} has encountered a critical issue.`,
template: "serverCritialIssue",
context: {
plant: data.hostname,
module: data.module,
subModule: data.submodule,
message: data.msg,
error: JSON.stringify(formattedError, null, 2),
},
});
// TODO: add discord
};

View File

@@ -0,0 +1,220 @@
import { format } from "date-fns";
import { eq, sql } from "drizzle-orm";
import { runDatamartQuery } from "../datamart/datamart.controller.js";
import { db } from "../db/db.controller.js";
import { invHistoricalData } from "../db/schema/historicalInv.schema.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { createCronJob } from "../utils/croner.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
type Inventory = {
article: string;
alias: string;
materialType: string;
total_palletQTY: string;
available_QTY: string;
coa_QTY: string;
held_QTY: string;
consignment_qty: string;
lot: string;
locationId: string;
laneDescription: string;
warehouseId: string;
warehouseDescription: string;
};
const historicalInvImport = async () => {
const today = new Date();
const { data, error } = await tryCatch(
db
.select()
.from(invHistoricalData)
.where(eq(invHistoricalData.histDate, format(today, "yyyy-MM-dd"))),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "query",
message: `Error getting historical inv info`,
data: error as any,
notify: false,
});
}
if (data?.length === 0) {
const avSQLQuery = sqlQuerySelector(`datamart.activeArticles`) as SqlQuery;
if (!avSQLQuery.success) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "inv",
message: `Error getting Article info`,
data: [avSQLQuery.message],
notify: true,
});
}
const { data: inv, error: invError } = await tryCatch(
//prodQuery(sqlQuery.query, "Inventory data"),
runDatamartQuery({ name: "inventory", options: { historical: "x" } }),
);
const { data: av, error: avError } = (await tryCatch(
runDatamartQuery({ name: "activeArticles", options: {} }),
)) as any;
if (invError) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "inv",
message: `Error getting inventory info from prod query`,
data: invError as any,
notify: false,
});
}
if (avError) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "inv",
message: `Error getting article info from prod query`,
data: invError as any,
notify: false,
});
}
// shape the data to go into our table
const plantToken = process.env.PROD_PLANT_TOKEN ?? "test1";
const importInv = (inv.data ? inv.data : []) as Inventory[];
const importData = importInv.map((i) => {
return {
histDate: sql`(NOW())::date`,
plantToken: plantToken,
article: i.article,
articleDescription: i.alias,
materialType:
av.data.filter((a: any) => a.article === i.article).length > 0
? av.data.filter((a: any) => a.article === i.article)[0]
?.TypeOfMaterial
: "Item not defined",
total_QTY: i.total_palletQTY ?? "0.00",
available_QTY: i.available_QTY ?? "0.00",
coa_QTY: i.coa_QTY ?? "0.00",
held_QTY: i.held_QTY ?? "0.00",
consignment_QTY: i.consignment_qty ?? "0.00",
lot_Number: i.lot ?? "0",
locationId: i.locationId ?? "0",
location: i.laneDescription ?? "Missing lane",
whseId: i.warehouseId ?? "0",
whseName: i.warehouseDescription ?? "Missing warehouse",
};
});
const { data: dataImport, error: errorImport } = await tryCatch(
db.insert(invHistoricalData).values(importData),
);
if (errorImport) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "inv",
message: `Error adding historical data to lst db`,
data: errorImport as any,
notify: true,
});
}
if (dataImport) {
return returnFunc({
success: true,
level: "info",
module: "logistics",
subModule: "inv",
message: `Historical data was added to lst :D`,
data: [],
notify: false,
});
}
} else {
return returnFunc({
success: false,
level: "info",
module: "logistics",
subModule: "inv",
message: `Historical Data for: ${format(today, "yyyy-MM-dd")}, is already added and nothing to do.`,
data: [],
notify: false,
});
}
return returnFunc({
success: false,
level: "info",
module: "logistics",
subModule: "inv",
message: `Some weird crazy error just happened and didn't get captured during the historical inv check.`,
data: [],
notify: true,
});
};
export const historicalSchedule = async () => {
// running the history in case my silly ass does an update around the shift change time lol, this will prevent data loss. it might be off a little but no one cares
historicalInvImport();
const sqlQuery = sqlQuerySelector(`shiftChange`) as SqlQuery;
if (!sqlQuery.success) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "query",
message: `Error getting shiftChange sql file`,
data: [sqlQuery.message],
notify: false,
});
}
const { data, error } = await tryCatch(
prodQuery(sqlQuery.query, "Shift Change data"),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "query",
message: `Error getting shiftChange info`,
data: error as any,
notify: false,
});
}
// shift split
const shiftTimeSplit = data?.data[0]?.shiftChange.split(":");
const cronSetup = `0 ${
shiftTimeSplit?.length > 0 ? `${parseInt(shiftTimeSplit[1])}` : "0"
} ${
shiftTimeSplit?.length > 0 ? `${parseInt(shiftTimeSplit[0])}` : "7"
} * * *`;
createCronJob("historicalInv", cronSetup, () => historicalInvImport());
};
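// Example: a shiftChange value of "07:30" splits to ["07", "30"] and produces the cron "0 30 7 * * *",
// so the historical inventory import runs daily at 07:30.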

View File

@@ -0,0 +1,58 @@
import { fromNodeHeaders } from "better-auth/node";
import type { NextFunction, Request, Response } from "express";
import { auth } from "../utils/auth.utils.js";
declare global {
namespace Express {
interface Request {
user?: {
id: string;
email?: string;
roles?: string | null | undefined; //Record<string, string[]>;
username?: string | null | undefined;
};
}
}
}
// function toWebHeaders(nodeHeaders: Request["headers"]): Headers {
// const h = new Headers();
// for (const [key, value] of Object.entries(nodeHeaders)) {
// if (Array.isArray(value)) {
// value.forEach((v) => h.append(key, v));
// } else if (value !== undefined) {
// h.set(key, value);
// }
// }
// return h;
// }
export const requireAuth = async (
req: Request,
res: Response,
next: NextFunction,
) => {
try {
const session = await auth.api.getSession({
headers: fromNodeHeaders(req.headers),
//query: { disableCookieCache: true },
});
if (!session) {
return res.status(401).json({ error: "Unauthorized" });
}
//console.log(session);
req.user = {
id: session.user.id,
email: session.user.email,
roles: session.user.role,
username: session.user.username,
};
next();
} catch {
return res.status(401).json({ error: "Unauthorized" });
}
};
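// Example: protecting a route with the session check.
// router.get("/secure", requireAuth, (req, res) => res.json({ user: req.user }));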

View File

@@ -0,0 +1,52 @@
import type { NextFunction, Request, Response } from "express";
import { auth } from "../utils/auth.utils.js";
type PermissionMap = Record<string, string[]>;
declare global {
namespace Express {
interface Request {
authz?: {
success: boolean;
permissions: PermissionMap;
};
}
}
}
function normalizeRoles(roles: unknown): string {
if (Array.isArray(roles)) return roles.join(",");
if (typeof roles === "string") return roles;
return "";
}
export function requirePermission(permissions: PermissionMap) {
return async (req: Request, res: Response, next: NextFunction) => {
try {
const role = normalizeRoles(req.user?.roles) as any;
const result = await auth.api.userHasPermission({
body: {
role,
permissions,
},
});
req.authz = {
success: !!result?.success,
permissions,
};
if (!result?.success) {
return res.status(403).json({
ok: false,
message: "You do not have permission to perform this action.",
});
}
next();
} catch (error) {
next(error);
}
};
}
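// Example (hypothetical permission map; the shape follows better-auth's userHasPermission body):
// router.post("/printers", requireAuth, requirePermission({ printer: ["create"] }), handler);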

View File

@@ -0,0 +1,37 @@
import { and, eq } from "drizzle-orm";
import type { NextFunction, Request, Response } from "express";
import { db } from "../db/db.controller.js";
import { settings } from "../db/schema/settings.schema.js";
import { tryCatch } from "../utils/trycatch.utils.js";
/**
*
* @param moduleName name of the module we are checking if is enabled or not.
*/
export const featureCheck = (moduleName: string) => {
// get the features from the settings
return async (_req: Request, res: Response, next: NextFunction) => {
const { data: sData, error: sError } = await tryCatch(
db
.select()
.from(settings)
.where(
and(
eq(settings.settingType, "feature"),
eq(settings.name, moduleName),
),
),
);
if (sError) {
return res.status(403).json({ error: "Internal Error" });
}
if (!sData?.length || !sData[0]?.active) {
return res.status(403).json({ error: "Feature disabled" });
}
next();
};
};
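// Example (hypothetical feature name; it must match a settings row with settingType "feature"):
// app.use(`${baseUrl}/api/opendock`, featureCheck("opendock"), opendockRouter);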

View File

@@ -0,0 +1,113 @@
import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { sendEmail } from "../utils/sendEmail.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
/**
* Reprint-label notification runner: looks for new reprint rows since the last stored audit id and emails subscribers.
*/
const func = async (data: any, emails: string) => {
// get the actual notification as items will be updated between intervals if no one touches
const { data: l, error: le } = (await tryCatch(
db.select().from(notifications).where(eq(notifications.id, data.id)),
)) as any;
if (le) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `${data.name} encountered an error while trying to get initial info`,
data: [le],
notify: true,
});
}
// search the query db for the query by name
const sqlQuery = sqlQuerySelector(`${data.name}`) as SqlQuery;
// create the ignore audit logs ids
const ignoreIds = l[0].options[0]?.auditId
? `${l[0].options[0]?.auditId}`
: "0";
// run the check
const { data: queryRun, error } = await tryCatch(
prodQuery(
sqlQuery.query
.replace("[intervalCheck]", l[0].interval)
.replace("[ignoreList]", ignoreIds),
`Running notification query: ${l[0].name}`,
),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Data for: ${l[0].name} encountered an error while trying to get it`,
data: [error],
notify: true,
});
}
if (queryRun.data.length > 0) {
// update the latest audit id
const { error: dbe } = await tryCatch(
db
.update(notifications)
.set({ options: [{ auditId: `${queryRun.data[0].id}` }] })
.where(eq(notifications.id, data.id)),
);
if (dbe) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Encountered an error while updating the audit id for: ${l[0].name}`,
data: [dbe],
notify: true,
});
}
// send the email
const sentEmail = await sendEmail({
email: emails,
subject: "Alert! Label Reprinted",
template: "reprintLabels",
context: {
items: queryRun.data,
},
});
if (!sentEmail?.success) {
return returnFunc({
success: false,
level: "error",
module: "email",
subModule: "notification",
message: `${l[0].name} failed to send the email`,
data: [sentEmail],
notify: true,
});
}
} else {
console.log("doing nothing as there is nothing to do.");
}
// TODO send the error to systemAdmin users so they do not always need to be on the notifications.
// these errors are defined per notification.
};
export default func;

View File

@@ -0,0 +1,153 @@
import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { notificationSub } from "../db/schema/notifications.sub.schema.js";
import { createLogger } from "../logger/logger.controller.js";
import { minutesToCron } from "../utils/croner.minConvert.js";
import { createCronJob, stopCronJob } from "../utils/croner.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
const log = createLogger({ module: "notifications", subModule: "start" });
export const startNotifications = async () => {
// get active notification
const { data, error } = await tryCatch(
db.select().from(notifications).where(eq(notifications.active, true)),
);
if (error) {
log.error(
{ error: error },
"There was an error when getting notifications.",
);
return;
}
if (data) {
if (data.length === 0) {
log.info(
{},
"There are know currently active notifications to start up.",
);
return;
}
// get the subs and see if we have any subs currently so we can fire up the notification
const { data: sub, error: subError } = await tryCatch(
db.select().from(notificationSub),
);
if (subError) {
log.error(
{ error: subError },
"There was an error when getting subscriptions.",
);
return;
}
if (sub.length === 0) {
log.info({}, "There are know currently active subscriptions.");
return;
}
const emailString = [
...new Set(
sub.flatMap((e) =>
e.emails?.map((email) => email.trim().toLowerCase()),
),
),
].join(";");
for (const n of data) {
createCronJob(
n.name,
minutesToCron(parseInt(n.interval ?? "15", 10)),
async () => {
try {
const { default: runFun } = await import(
`./notification.${n.name.trim()}.js`
);
await runFun(n, emailString);
} catch (error) {
log.error(
{ error: error },
"There was an error starting the notification",
);
}
},
);
}
}
};
export const modifiedNotification = async (id: string) => {
// when a notification is subscribed to, updated, or deleted we want to get the info and rerun the startup for that single notification.
const { data, error } = await tryCatch(
db.select().from(notifications).where(eq(notifications.id, id)),
);
if (error) {
log.error(
{ error: error },
"There was an error when getting notifications.",
);
return;
}
if (data) {
if (!data[0]?.active) {
stopCronJob(data[0]?.name ?? "");
return;
}
// get the subs for the specific id as we only want to update the modified one
const { data: sub, error: subError } = await tryCatch(
db
.select()
.from(notificationSub)
.where(eq(notificationSub.notificationId, id)),
);
if (subError) {
log.error(
{ error: subError },
"There was an error when getting subscriptions.",
);
return;
}
if (sub.length === 0) {
log.info({}, "There are know currently active subscriptions.");
stopCronJob(data[0]?.name ?? "");
return;
}
const emailString = [
...new Set(
sub.flatMap((e) =>
e.emails?.map((email) => email.trim().toLowerCase()),
),
),
].join(";");
createCronJob(
data[0].name,
minutesToCron(parseInt(data[0].interval ?? "15", 10)),
async () => {
try {
const { default: runFun } = await import(
`./notification.${data[0]?.name.trim()}.js`
);
await runFun(data[0], emailString);
} catch (error) {
log.error(
{ error: error },
"There was an error starting the notification",
);
}
},
);
}
};
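
startNotifications and modifiedNotification resolve each runner by convention: a sibling file named notification.<name>.js whose default export receives the notification row and a semicolon-joined recipient string. A sketch of that contract (the row type here is an assumption; the runners in this changeset take any):

// notification.example.ts, a hypothetical runner matching the dynamic import above
type NotificationRow = {
id: string;
name: string;
interval: string | null;
options: unknown[];
};

const func = async (data: NotificationRow, emails: string) => {
// run the check for this notification and email the recipients on a hit
console.log(`running ${data.name} for ${emails}`);
};
export default func;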

View File

@@ -0,0 +1,96 @@
import { eq } from "drizzle-orm";
import { type Response, Router } from "express";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { auth } from "../utils/auth.utils.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
const r = Router();
r.post("/", async (req, res: Response) => {
const hasPermissions = await auth.api.userHasPermission({
body: {
//userId: req?.user?.id,
role: req.user?.roles as any,
permissions: {
notifications: ["readAll"], // This must match the structure in your access control
},
},
});
if (!hasPermissions?.success) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "post",
message: `You do not have permissions to be here`,
data: [],
status: 400,
});
}
const { data: nName, error: nError } = await tryCatch(
db
.select()
.from(notifications)
.where(eq(notifications.name, req.body.name)),
);
if (nError) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "get",
message: `There was an error getting the notifications `,
data: [nError],
status: 400,
});
}
const { data: sub, error: sError } = await tryCatch(
db
.select()
.from(notifications)
.where(eq(notifications.name, req.body.name)),
);
if (sError) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "get",
message: `There was an error getting the subs `,
data: [sError],
status: 400,
});
}
const emailString = [
...new Set(
sub.flatMap((e: any) =>
e.emails?.map((email: any) => email.trim().toLowerCase()),
),
),
].join(";");
console.log(emailString);
const { default: runFun } = await import(
`./notification.${req.body.name.trim()}.js`
);
const manual = await runFun(nName[0], "blake.matthes@alpla.com");
return apiReturn(res, {
success: true,
level: "info",
module: "notification",
subModule: "post",
message: `Manual Trigger ran`,
data: manual ?? [],
status: 200,
});
});
export default r;

View File

@@ -0,0 +1,114 @@
import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { delay } from "../utils/delay.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { sendEmail } from "../utils/sendEmail.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { v2QueryRun } from "../utils/pgConnectToLst.utils.js";
let shutoffv1 = false;
const func = async (data: any, emails: string) => {
// TODO: remove this disable once all 17 plants are on this new lst
if (!shutoffv1) {
v2QueryRun(`update public.notifications set active = false where name = '${data.name}'`);
shutoffv1 = true;
}
const { data: l, error: le } = (await tryCatch(
db.select().from(notifications).where(eq(notifications.id, data.id)),
)) as any;
if (le) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `${data.name} encountered an error while trying to get initial info`,
data: le as any,
notify: true,
});
}
// search the query db for the query by name
const sqlQuery = sqlQuerySelector(`${data.name}`) as SqlQuery;
// get the latest blocking order id that was sent
const blockingOrderId = l[0].options[0].lastBlockingOrderIdSent ?? 69;
// run the check
const { data: queryRun, error } = await tryCatch(
prodQuery(
sqlQuery.query.replace("[lastBlocking]", blockingOrderId),
`Running notification query: ${l[0].name}`,
),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Encountered an error while getting data for: ${l[0].name}`,
data: error as any,
notify: true,
});
}
if (queryRun.data.length > 0) {
for (const bo of queryRun.data) {
const sentEmail = await sendEmail({
email: emails,
subject: bo.subject,
template: "qualityBlocking",
context: {
items: bo,
},
});
if (!sentEmail?.success) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "email",
message: `${l[0].name} failed to send the email`,
data: sentEmail?.data as any,
notify: true,
});
}
await delay(1500);
const { error: dbe } = await tryCatch(
db
.update(notifications)
.set({ options: [{ lastBlockingOrderIdSent: bo.blockingNumber }] })
.where(eq(notifications.id, data.id)),
);
if (dbe) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Encountered an error while updating the last blocking order id for: ${l[0].name}`,
data: dbe as any,
notify: true,
});
}
}
}
};
export default func;

View File

@@ -0,0 +1,113 @@
import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { sendEmail } from "../utils/sendEmail.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { v2QueryRun } from "../utils/pgConnectToLst.utils.js";
let shutoffv1 = false;
const func = async (data: any, emails: string) => {
// TODO: remove this disable once all 17 plants are on this new lst
if (!shutoffv1) {
v2QueryRun(`update public.notifications set active = false where name = '${data.name}'`);
shutoffv1 = true;
}
const { data: l, error: le } = (await tryCatch(
db.select().from(notifications).where(eq(notifications.id, data.id)),
)) as any;
if (le) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `${data.name} encountered an error while trying to get initial info`,
data: le as any,
notify: true,
});
}
// search the query db for the query by name
const sqlQuery = sqlQuerySelector(`${data.name}`) as SqlQuery;
// create the ignore audit logs ids
const ignoreIds = l[0].options[0]?.auditId
? `${l[0].options[0]?.auditId}`
: "0";
// run the check
const { data: queryRun, error } = await tryCatch(
prodQuery(
sqlQuery.query
.replace("[intervalCheck]", l[0].interval)
.replace("[ignoreList]", ignoreIds),
`Running notification query: ${l[0].name}`,
),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Encountered an error while getting data for: ${l[0].name}`,
data: error as any,
notify: true,
});
}
if (queryRun.data.length > 0) {
// update the latest audit id
const { error: dbe } = await tryCatch(
db
.update(notifications)
.set({ options: [{ auditId: `${queryRun.data[0].id}` }] })
.where(eq(notifications.id, data.id)),
);
if (dbe) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Encountered an error while updating the audit id for: ${l[0].name}`,
data: dbe as any,
notify: true,
});
}
// send the email
const sentEmail = await sendEmail({
email: emails,
subject: "Alert! Label Reprinted",
template: "reprintLabels",
context: {
items: queryRun.data,
},
});
if (!sentEmail?.success) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "email",
message: `${l[0].name} failed to send the email`,
data: sentEmail?.data as any,
notify: true,
});
}
}
};
export default func;

View File

@@ -0,0 +1,55 @@
import { eq } from "drizzle-orm";
import { type Response, Router } from "express";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { auth } from "../utils/auth.utils.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
const r = Router();
r.get("/", async (req, res: Response) => {
const hasPermissions = await auth.api.userHasPermission({
body: {
//userId: req?.user?.id,
role: req.user?.roles as any,
permissions: {
notifications: ["readAll"], // This must match the structure in your access control
},
},
});
const { data: nName, error: nError } = await tryCatch(
db
.select()
.from(notifications)
.where(
!hasPermissions.success ? eq(notifications.active, true) : undefined,
)
.orderBy(notifications.name),
);
if (nError) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "get",
message: `There was an error getting the notifications `,
data: [nError],
status: 400,
});
}
return apiReturn(res, {
success: true,
level: "info",
module: "notification",
subModule: "get",
message: `All current notifications`,
data: nName ?? [],
status: 200,
});
});
export default r;

View File

@@ -0,0 +1,22 @@
import type { Express } from "express";
import { requireAuth } from "../middleware/auth.middleware.js";
import manual from "./notification.manualTrigger.js";
import getNotifications from "./notification.route.js";
import updateNote from "./notification.update.route.js";
import deleteSub from "./notificationSub.delete.route.js";
import subs from "./notificationSub.get.route.js";
import newSub from "./notificationSub.post.route.js";
import updateSub from "./notificationSub.update.route.js";
export const setupNotificationRoutes = (baseUrl: string, app: Express) => {
// stays like this as we don't need to change this
app.use(`${baseUrl}/api/notification`, requireAuth, getNotifications);
app.use(`${baseUrl}/api/notification`, requireAuth, updateNote);
app.use(`${baseUrl}/api/notification/manual`, requireAuth, manual);
app.use(`${baseUrl}/api/notification/sub`, requireAuth, subs);
app.use(`${baseUrl}/api/notification/sub`, requireAuth, newSub);
app.use(`${baseUrl}/api/notification/sub`, requireAuth, updateSub);
app.use(`${baseUrl}/api/notification/sub`, requireAuth, deleteSub);
// all other system routes should be under /api/system/*
};

View File

@@ -0,0 +1,81 @@
import { eq } from "drizzle-orm";
import { type Response, Router } from "express";
import z from "zod";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { requirePermission } from "../middleware/auth.requiredPerms.middleware.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { modifiedNotification } from "./notification.controller.js";
const r = Router();
const updateNote = z.object({
description: z.string().optional(),
active: z.boolean().optional(),
interval: z.string().optional(),
options: z.array(z.record(z.string(), z.unknown())).optional(),
});
r.patch(
"/:id",
requirePermission({ notifications: ["update"] }),
async (req, res: Response) => {
const { id } = req.params;
try {
const validated = updateNote.parse(req.body);
const { data: nName, error: nError } = await tryCatch(
db
.update(notifications)
.set(validated)
.where(eq(notifications.id, id as string))
.returning(),
);
await modifiedNotification(id as string);
if (nError) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "update",
message: `There was an error updating the notification`,
data: [nError],
status: 400,
});
}
return apiReturn(res, {
success: true,
level: "info",
module: "notification",
subModule: "update",
message: `Notification was updated`,
data: nName ?? [],
status: 200,
});
} catch (err) {
if (err instanceof z.ZodError) {
const flattened = z.flattenError(err);
return apiReturn(res, {
success: false,
level: "error",
module: "routes",
subModule: "notification",
message: "Validation failed",
data: [flattened.fieldErrors],
status: 400,
});
}
// fall through for non-zod errors so the request does not hang without a response
return apiReturn(res, {
success: false,
level: "error",
module: "routes",
subModule: "notification",
message: "Unexpected error",
data: [],
status: 500,
});
}
},
);
export default r;

View File

@@ -0,0 +1,103 @@
import { and, eq } from "drizzle-orm";
import { type Response, Router } from "express";
import z from "zod";
import { db } from "../db/db.controller.js";
import { notificationSub } from "../db/schema/notifications.sub.schema.js";
import { auth } from "../utils/auth.utils.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { modifiedNotification } from "./notification.controller.js";
const newSubscribe = z.object({
userId: z.string().describe("User id."),
notificationId: z.string().describe("Notification id"),
});
const r = Router();
r.delete("/", async (req, res: Response) => {
const hasPermissions = await auth.api.userHasPermission({
body: {
//userId: req?.user?.id,
role: req.user?.roles as any,
permissions: {
notifications: ["readAll"], // This must match the structure in your access control
},
},
});
try {
const validated = newSubscribe.parse(req.body);
const { data, error } = await tryCatch(
db
.delete(notificationSub)
.where(
and(
eq(
notificationSub.userId,
hasPermissions?.success ? validated.userId : (req?.user?.id ?? ""),
), // allows the admin to delete this
//eq(notificationSub.userId, req?.user?.id ?? ""),
eq(notificationSub.notificationId, validated.notificationId),
),
)
.returning(),
);
await modifiedNotification(validated.notificationId);
if (error) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "post",
message: `There was an error deleting the subscription `,
data: [error],
status: 400,
});
}
if (data.length <= 0) {
return apiReturn(res, {
success: false,
level: "info",
module: "notification",
subModule: "post",
message: `Subscription was not deleted; invalid data was sent over`,
data: data ?? [],
status: 200,
});
}
return apiReturn(res, {
success: true,
level: "info",
module: "notification",
subModule: "post",
message: `Subscription deleted`,
data: data ?? [],
status: 200,
});
} catch (err) {
if (err instanceof z.ZodError) {
const flattened = z.flattenError(err);
return apiReturn(res, {
success: false,
level: "error",
module: "routes",
subModule: "notification",
message: "Validation failed",
data: [flattened.fieldErrors],
status: 400,
});
}
// fall through for non-zod errors so the request does not hang without a response
return apiReturn(res, {
success: false,
level: "error",
module: "routes",
subModule: "notification",
message: "Unexpected error",
data: [],
status: 500,
});
}
});
export default r;

View File

@@ -0,0 +1,61 @@
import { eq } from "drizzle-orm";
import { type Response, Router } from "express";
import { db } from "../db/db.controller.js";
import { notificationSub } from "../db/schema/notifications.sub.schema.js";
import { auth } from "../utils/auth.utils.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
const r = Router();
r.get("/", async (req, res: Response) => {
const { userId } = req.query;
const hasPermissions = await auth.api.userHasPermission({
body: {
//userId: req?.user?.id,
role: req.user?.roles as any,
permissions: {
notifications: ["readAll"], // This must match the structure in your access control
},
},
});
if (userId) {
hasPermissions.success = false;
}
const { data, error } = await tryCatch(
db
.select()
.from(notificationSub)
.where(
!hasPermissions.success
? eq(notificationSub.userId, `${req?.user?.id ?? ""}`)
: undefined,
),
);
if (error) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "post",
message: `There was an error getting subscriptions `,
data: [error],
status: 400,
});
}
return apiReturn(res, {
success: true,
level: "info",
module: "notification",
subModule: "post",
message: `Subscriptions`,
data: data ?? [],
status: 200,
});
});
export default r;

View File

@@ -0,0 +1,92 @@
import { type Response, Router } from "express";
import z from "zod";
import { db } from "../db/db.controller.js";
import { notificationSub } from "../db/schema/notifications.sub.schema.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { modifiedNotification } from "./notification.controller.js";
const newSubscribe = z.object({
emails: z
.email()
.array()
.describe("An array of emails"),
userId: z.string().describe("User id."),
notificationId: z
.string()
.describe("Notification id"),
});
const r = Router();
r.post("/", async (req, res: Response) => {
try {
const validated = newSubscribe.parse(req.body);
const emails = validated.emails
.map((e) => e.trim().toLowerCase())
.filter(Boolean);
const uniqueEmails = [...new Set(emails)];
const { data, error } = await tryCatch(
db
.insert(notificationSub)
.values({
userId: req?.user?.id ?? "",
notificationId: validated.notificationId,
emails: uniqueEmails,
})
.onConflictDoUpdate({
target: [notificationSub.userId, notificationSub.notificationId],
set: { emails: uniqueEmails },
})
.returning(),
);
await modifiedNotification(validated.notificationId);
if (error) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "post",
message: `There was an error creating the subscription`,
data: [error],
status: 400,
});
}
return apiReturn(res, {
success: true,
level: "info",
module: "notification",
subModule: "post",
message: `Subscribed to notification`,
data: data ?? [],
status: 200,
});
} catch (err) {
if (err instanceof z.ZodError) {
const flattened = z.flattenError(err);
return apiReturn(res, {
success: false,
level: "error",
module: "routes",
subModule: "notification",
message: "Validation failed",
data: [flattened.fieldErrors],
status: 400,
});
}
// fall through for non-zod errors so the request does not hang without a response
return apiReturn(res, {
success: false,
level: "error",
module: "routes",
subModule: "notification",
message: "Unexpected error",
data: [],
status: 500,
});
}
});
export default r;

View File

@@ -0,0 +1,84 @@
import { and, eq } from "drizzle-orm";
import { type Response, Router } from "express";
import z from "zod";
import { db } from "../db/db.controller.js";
import { notificationSub } from "../db/schema/notifications.sub.schema.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { modifiedNotification } from "./notification.controller.js";
const newSubscribe = z.object({
emails: z.email().array().describe("An array of emails"),
userId: z.string().describe("User id."),
notificationId: z.string().describe("Notification id"),
});
const r = Router();
r.patch("/", async (req, res: Response) => {
try {
const validated = newSubscribe.parse(req.body);
const emails = validated.emails
.map((e) => e.trim().toLowerCase())
.filter(Boolean);
const uniqueEmails = [...new Set(emails)];
const { data, error } = await tryCatch(
db
.update(notificationSub)
.set({ emails: uniqueEmails })
.where(
and(
eq(notificationSub.userId, validated.userId),
eq(notificationSub.notificationId, validated.notificationId),
),
)
.returning(),
);
await modifiedNotification(validated.notificationId);
if (error) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "update",
message: `There was an error updating the subscription`,
data: [error],
status: 400,
});
}
return apiReturn(res, {
success: true,
level: "info",
module: "notification",
subModule: "update",
message: `Subscription updated`,
data: data ?? [],
status: 200,
});
} catch (err) {
if (err instanceof z.ZodError) {
const flattened = z.flattenError(err);
return apiReturn(res, {
success: false,
level: "error",
module: "routes",
subModule: "notification",
message: "Validation failed",
data: [flattened.fieldErrors],
status: 400,
});
}
// fall through for non-zod errors so the request does not hang without a response
return apiReturn(res, {
success: false,
level: "error",
module: "routes",
subModule: "notification",
message: "Unexpected error",
data: [],
status: 500,
});
}
});
export default r;

View File

@@ -0,0 +1,70 @@
import { sql } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import {
type NewNotification,
notifications,
} from "../db/schema/notifications.schema.js";
import { createLogger } from "../logger/logger.controller.js";
import { tryCatch } from "../utils/trycatch.utils.js";
const note: NewNotification[] = [
{
name: "reprintLabels",
description:
"Monitors the labels that are printed and returns a there data, if one falls withing the time frame.",
active: false,
interval: "10",
options: [{ auditId: [0] }],
},
{
name: "qualityBlocking",
description:
"Checks for new blocking orders that have been entered, recommend to get the most recent order in here before activating.",
active: false,
interval: "10",
options: [{ lastBlockingOrderIdSent: 1 }],
},
{
name: "alplaPurchaseHistory",
description:
"Will check the alpla purchase data for any changes, if the req has not been sent already then we will send this, for a po or fresh order we will ignore. ",
active: false,
interval: "5",
options: [
{ sentReqs: [{ timeStamp: "0", req: 1, approved: false }] },
{ sentAPOs: [{ timeStamp: "0", apo: 1 }] },
{ sentRCT: [{ timeStamp: "0", rct: 1 }] },
],
},
];
export const createNotifications = async () => {
const log = createLogger({ module: "notifications", subModule: "create" });
const { data, error } = await tryCatch(
db
.insert(notifications)
.values(note)
.onConflictDoUpdate({
target: notifications.name,
set: {
description: sql`excluded.description`,
},
// where: sql`
// settings.seed_version IS NULL
// OR settings.seed_version < excluded.seed_version
// `,
})
.returning(),
);
if (error) {
log.error(
{ error: error },
"There was an error when adding or updating the notifications.",
);
}
if (data) {
log.info({}, "All notifications were added/updated");
}
};

View File

@@ -0,0 +1,98 @@
/**
* the route that listens for the printers post.
*
* an http-post alert should be set up on each printer pointing to this route. at minimum you will
* want to create the alert for "pause printer", but you can send all messages here since the route
* also monitors and acts on every message type
*
* http://{serverIP}:2222/lst/api/ocp/printer/listener/{printerName}
*
* the messages are sent over to the db for logging, and specific ones trigger an action:
*
* pause will validate if the printer can print
* close head will repause the printer so it wont print a label
* power up will just repause the printer so it wont print a label
*/
import { Router } from "express";
import multer from "multer";
import { db } from "../db/db.controller.js";
import { printerLog } from "../db/schema/printerLogs.schema.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
type PrinterEvent = {
name: string;
condition: string;
message: string;
};
const r = Router();
const upload = multer();
const parseZebraAlert = (body: any): PrinterEvent => {
const name = body.uniqueId || "unknown";
const decoded = decodeURIComponent(body.alertMsg || "");
const [conditionRaw, ...rest] = decoded.split(":");
const condition = conditionRaw?.toLowerCase()?.trim() || "unknown";
const message = rest.join(":").trim();
return {
name,
condition,
message,
};
};
r.post("/printer/listener/:printer", upload.any(), async (req, res) => {
const { printer: printerName } = req.params;
const event: PrinterEvent = parseZebraAlert(req.body);
const rawIp =
req.headers["x-forwarded-for"]?.toString().split(",")[0]?.trim() ||
req.socket.remoteAddress ||
req.ip;
const ip = rawIp?.replace("::ffff:", "");
// post the new message
const { data, error } = await tryCatch(
db
.insert(printerLog)
.values({
ip,
name: printerName,
printerSN: event.name,
condition: event.condition,
message: event.message,
})
.returning(),
);
if (error) {
return apiReturn(res, {
success: false,
level: "info",
module: "ocp",
subModule: "printing",
message: `${printerName} encountered an error posting the log`,
data: error as any,
status: 400,
});
}
if (data) {
// TODO: send message over to the controller to decide what to do next with it
}
return apiReturn(res, {
success: true,
level: "info",
module: "ocp",
subModule: "printing",
message: `${printerName} just sent a message`,
data: req.body ?? [],
status: 200,
});
});
export default r;
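
For reference, a sketch of what a parsed alert might look like; only uniqueId and alertMsg are read from the form fields, and the values below are illustrative:

const sample = {
uniqueId: "XXZJJ012345678", // hypothetical printer serial
alertMsg: encodeURIComponent("PAPER OUT: unable to print"),
};
// parseZebraAlert(sample) returns:
// { name: "XXZJJ012345678", condition: "paper out", message: "unable to print" }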

View File

@@ -0,0 +1,332 @@
/**
* this will do a prod sync, update or add alerts to the printer, and validate the next pm interval as well as head replacement.
*
* if a printer is coming up on a pm or head replacement, send it to the plant to address.
*
* a trigger on the printer table will have the ability to run this as well
*
* heartbeats on all assigned printers
*
* printer status will live here; this is how we manage all the status levels, like 3 paused, 1 printing, 8 error, 10 power up, etc...
*/
import { eq } from "drizzle-orm";
import net from "net";
import { db } from "../db/db.controller.js";
import { printerData } from "../db/schema/printers.schema.js";
import { createLogger } from "../logger/logger.controller.js";
import { delay } from "../utils/delay.utils.js";
import { runProdApi } from "../utils/prodEndpoint.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
type Printer = {
name: string;
humanReadableId: string;
type: number;
ipAddress: string;
port: number;
default: boolean;
labelInstanceIpAddress: string;
labelInstancePort: number;
active: boolean;
remark: string;
processes: number[];
};
const log = createLogger({ module: "ocp", subModule: "printers" });
export const printerManager = async () => {};
export const printerHeartBeat = async () => {
// heartbeats default to 60 seconds, no reason to allow anything else, and heartbeats will only go to assigned printers; no need to monitor non-labeling printers
};
//export const printerStatus = async (statusNr: number, printerId: number) => {};
export const printerSync = async () => {
// pull the printers from alpla prod and update them in lst
const printers = await runProdApi({
method: "get",
endpoint: "/public/v1.0/Administration/Printers",
});
if (!printers?.success) {
return returnFunc({
success: false,
level: "error",
module: "ocp",
subModule: "printer",
message: printers?.message ?? "",
data: printers?.data ?? [],
notify: false,
});
}
if (printers?.success) {
const ignorePrinters = ["pdf24", "standard"];
const validPrinters =
printers.data.filter(
(n: any) =>
!ignorePrinters.includes(n.name.toLowerCase()) && n.ipAddress,
) ?? [];
if (validPrinters.length) {
for (const printer of validPrinters as Printer[]) {
// run an update for each printer, do on conflicts based on the printer id
log.debug({}, `Add/Updating ${printer.name}`);
if (printer.active) {
await db
.insert(printerData)
.values({
name: printer.name,
humanReadableId: printer.humanReadableId,
ipAddress: printer.ipAddress,
port: printer.port,
remark: printer.remark,
processes: printer.processes,
})
.onConflictDoUpdate({
target: printerData.humanReadableId,
set: {
name: printer.name,
humanReadableId: printer.humanReadableId,
ipAddress: printer.ipAddress,
port: printer.port,
remark: printer.remark,
processes: printer.processes,
},
})
.returning();
await tcpPrinter(printer);
}
if (!printer.active) {
log.warn({}, `${printer.name} is not active so removing from lst.`);
await db
.delete(printerData)
.where(eq(printerData.humanReadableId, printer.humanReadableId));
}
}
return returnFunc({
success: true,
level: "info",
module: "ocp",
subModule: "printer",
message: `${printers.data.length} printers were just synced, this includes new and old printers`,
data: [],
notify: false,
});
}
}
return returnFunc({
success: true,
level: "info",
module: "ocp",
subModule: "printer",
message: `No printers to update`,
data: [],
notify: false,
});
};
const tcpPrinter = (printer: Printer) => {
return new Promise<void>((resolve) => {
const socket = new net.Socket();
const timeoutMs = 15 * 1000;
const commands = [
{
key: "clearAlerts",
command: '! U1 setvar "alerts.configured" ""\r\n',
},
{
key: "addAlert",
command: `! U1 setvar "alerts.add" "ALL MESSAGES,HTTP-POST,Y,Y,http://${process.env.SERVER_IP}:${process.env.PORT}/lst/api/ocp/printer/listener/${printer.name},0,N,printer"\r\n`,
},
{
key: "setFriendlyName",
command: `! U1 setvar "device.friendly_name" "${printer.name}"\r\n`,
},
{
key: "getUniqueId",
command: '! U1 getvar "device.unique_id"\r\n',
},
] as const;
let currentCommandIndex = 0;
let awaitingSerial = false;
let settled = false;
const cleanup = () => {
socket.removeAllListeners();
socket.destroy();
};
const finish = (err?: unknown) => {
if (settled) return;
settled = true;
clearTimeout(timeout);
cleanup();
if (err) {
log.error(
{ err, printer: printer.name },
`Printer update failed for ${printer.name}: set the name and alert directly on the printer.`,
);
}
resolve();
};
const timeout = setTimeout(() => {
finish(`${printer.name} timed out while updating printer config`);
}, timeoutMs);
const sendNext = async () => {
if (currentCommandIndex >= commands.length) {
socket.end();
return;
}
const current = commands[currentCommandIndex];
if (!current) {
socket.end();
return;
}
awaitingSerial = current.key === "getUniqueId";
log.info(
{ printer: printer.name, command: current.key },
`Sending command to ${printer.name}`,
);
socket.write(current.command);
currentCommandIndex++;
// Small pause between commands so the printer has breathing room
if (currentCommandIndex < commands.length) {
await delay(1500);
await sendNext();
} else {
// last command was sent, now wait for final data/close
await delay(1500);
socket.end();
}
};
socket.connect(printer.port, printer.ipAddress, async () => {
log.info({}, `Connected to ${printer.name}`);
try {
await sendNext();
} catch (error) {
finish(
error instanceof Error
? error
: new Error(
`Unknown error while sending commands to ${printer.name}`,
),
);
}
});
socket.on("data", async (data) => {
const response = data.toString().trim().replaceAll('"', "");
log.info(
{ printer: printer.name, response },
`Received printer response from ${printer.name}`,
);
if (!awaitingSerial) return;
awaitingSerial = false;
try {
await db
.update(printerData)
.set({ printerSN: response })
.where(eq(printerData.humanReadableId, printer.humanReadableId));
} catch (error) {
finish(
error instanceof Error
? error
: new Error(`Failed to update printer SN for ${printer.name}`),
);
}
});
socket.on("close", () => {
log.info({}, `Closed connection to ${printer.name}`);
finish();
});
socket.on("error", (err) => {
finish(err);
});
});
};
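A stripped-down sketch of the request/reply pattern tcpPrinter implements, reduced to a single getvar; host and port are placeholders, and real callers should keep the timeout and cleanup handling shown above:

import net from "net";

const getUniqueId = (host: string, port: number) =>
new Promise<string>((resolve, reject) => {
const socket = new net.Socket();
socket.connect(port, host, () => {
// same SGD command the sync sends as its last step
socket.write('! U1 getvar "device.unique_id"\r\n');
});
socket.once("data", (d) => {
socket.end();
resolve(d.toString().trim().replaceAll('"', ""));
});
socket.once("error", reject);
});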

View File

@@ -0,0 +1,38 @@
/**
* the route that triggers a manual printer sync.
*
* fires printerSync without awaiting it; progress is reported through the logs.
*/
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
//import { tryCatch } from "../utils/trycatch.utils.js";
import { printerSync } from "./ocp.printer.manage.js";
const r = Router();
r.post("/printer/update", async (_, res) => {
printerSync();
return apiReturn(res, {
success: true,
level: "info",
module: "ocp",
subModule: "printing",
message:
"Printer update has been triggered to monitor progress please head to the logs.",
data: [],
status: 200,
});
});
export default r;

View File

@@ -0,0 +1,24 @@
import { type Express, Router } from "express";
import { requireAuth } from "../middleware/auth.middleware.js";
import { featureCheck } from "../middleware/featureActive.middleware.js";
import listener from "./ocp.printer.listener.js";
import update from "./ocp.printer.update.js";
export const setupOCPRoutes = (baseUrl: string, app: Express) => {
//setup all the routes
const router = Router();
// is the feature even on?
router.use(featureCheck("ocp"));
// non auth routes up here
router.use(listener);
// auth routes below here
router.use(requireAuth);
router.use(update);
//router.use("");
app.use(`${baseUrl}/api/ocp`, router);
};

View File

@@ -0,0 +1,393 @@
import axios from "axios";
import { addHours } from "date-fns";
import { formatInTimeZone } from "date-fns-tz";
import { eq, sql } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { opendockApt } from "../db/schema/opendock.schema.js";
import { settings } from "../db/schema/settings.schema.js";
import { createLogger } from "../logger/logger.controller.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { createCronJob } from "../utils/croner.utils.js";
import { delay } from "../utils/delay.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { getToken, odToken } from "./opendock.utils.js";
type Releases = {
ReleaseNumber: number;
DeliveryState: number;
DeliveryDate: Date;
LineItemHumanReadableId: number;
ArticleAlias: string;
LoadingUnits: string;
Quantity: number;
LineItemArticleWeight: number;
CustomerReleaseNumber: string;
};
const timeZone = process.env.TIMEZONE as string;
const TWENTY_FOUR_HOURS = 24 * 60 * 60 * 1000;
const log = createLogger({ module: "opendock", subModule: "releaseMonitor" });
// guard the cron against overlapping runs
let opendockSyncRunning = false;
let lastCheck = formatInTimeZone(
new Date().toISOString(),
timeZone,
"yyyy-MM-dd HH:mm:ss",
);
const postRelease = async (release: Releases) => {
if (!odToken.odToken) {
log.info({}, "Getting Auth Token");
await getToken();
}
if (
new Date(odToken.tokenDate || Date.now()).getTime() <
Date.now() - TWENTY_FOUR_HOURS
) {
log.info({}, "Refreshing Auth Token");
await getToken();
}
/**
* ReleaseState
* 0 = open
* 1 = planned
* 2 = CustomCanceled
* 4 = internally canceled
*/
/**
* DeliveryState
* 0 = open
* 1 = inprogress
* 2 = loading
* 3 = partly shipped
* 4 = delivered
*/
const newDockApt = {
status:
release.DeliveryState === 0 || release.DeliveryState === 1
? "Scheduled"
: release.DeliveryState === 2
? "InProgress"
: release.DeliveryState === 3 // considered finished; if a correction needs to be made to the bol we need to cancel and reactivate the order
? "Completed"
: release.DeliveryState === 4 && "Completed",
userId: process.env.DEFAULT_CARRIER, // this should be the carrierid
loadTypeId: process.env.DEFAULT_LOAD_TYPE, // well get this and make it a default one
dockId: process.env.DEFAULT_DOCK, // this the warehouse we want it in to start out
refNumbers: [release.ReleaseNumber],
//refNumber: release.ReleaseNumber,
start: release.DeliveryDate,
end: addHours(release.DeliveryDate, 1),
notes: "",
ccEmails: [""],
muteNotifications: true,
metadata: {
externalValidationFailed: false,
externalValidationErrorMessage: null,
},
units: null,
customFields: [
{
name: "strArticle",
type: "str",
label: "Article",
value: `${release.LineItemHumanReadableId} - ${release.ArticleAlias}`,
description: "What bottle are we sending ",
placeholder: "",
dropDownValues: [],
minLengthOrValue: 1,
hiddenFromCarrier: false,
requiredForCarrier: false,
requiredForWarehouse: false,
},
{
name: "intPallet Count",
type: "int",
label: "Pallet Count",
value: parseInt(release.LoadingUnits, 10), // do we really want to update this if its partial load as it should have been the full amount?
description: "How many pallets",
placeholder: "22",
dropDownValues: [],
minLengthOrValue: 1,
hiddenFromCarrier: false,
requiredForCarrier: false,
requiredForWarehouse: false,
},
{
name: "strTotal Weight",
type: "str",
label: "Total Weight",
value: `${(((release.Quantity * release.LineItemArticleWeight) / 1000) * 2.20462).toFixed(2)}`,
description: "What is the total weight of the load",
placeholder: "",
dropDownValues: [],
minLengthOrValue: 1,
hiddenFromCarrier: false,
requiredForCarrier: false,
requiredForWarehouse: false,
},
{
name: "strCustomer ReleaseNumber",
type: "str",
label: "Customer Release Number",
value: `${release.CustomerReleaseNumber}`,
description: "What is the customer release number",
placeholder: "",
dropDownValues: [],
minLengthOrValue: 1,
hiddenFromCarrier: false,
requiredForCarrier: false,
requiredForWarehouse: false,
},
],
};
// pull the already-created releases from the db; if one matches, grab its id and run an update instead of a create
const { data: existingApt, error: aptError } = await tryCatch(
db
.select()
.from(opendockApt)
.where(eq(opendockApt.release, release.ReleaseNumber))
.limit(1),
);
if (aptError) {
log.error({ error: aptError }, "Error getting apt data");
// TODO: send an error email on this one as it will cause issues
return;
}
const existing = existingApt[0];
//console.log(releaseCheck);
if (existing) {
const id = existing.openDockAptId;
try {
const response = await axios.patch(
`${process.env.OPENDOCK_URL}/appointment/${id}`,
newDockApt,
{
headers: {
"content-type": "application/json; charset=utf-8",
Authorization: `Bearer ${odToken.odToken}`,
},
},
);
if (response.status === 400) {
log.error({}, response.data.data.message);
return;
}
// update the release in the db, leaving as insert just in case something weird happened
try {
await db
.insert(opendockApt)
.values({
release: release.ReleaseNumber,
openDockAptId: response.data.data.id,
appointment: response.data.data,
})
.onConflictDoUpdate({
target: opendockApt.release,
set: {
openDockAptId: response.data.data.id,
appointment: response.data.data,
upd_date: sql`NOW()`,
},
})
.returning();
log.info({}, `${release.ReleaseNumber} was updated`);
} catch (e) {
log.error(
{ error: e },
`Error updating the release: ${release.ReleaseNumber}`,
);
}
// biome-ignore lint/suspicious/noExplicitAny: too many possibilities
} catch (e: any) {
//console.info(newDockApt);
log.error(
{ error: e.response.data },
`An error has occurred during patching of the release: ${release.ReleaseNumber}`,
);
return;
}
} else {
try {
const response = await axios.post(
`${process.env.OPENDOCK_URL}/appointment`,
newDockApt,
{
headers: {
"content-type": "application/json; charset=utf-8",
Authorization: `Bearer ${odToken.odToken}`,
},
},
);
// we need the id, release#, and status from this response; store it in lst and check if we already have the release so we can just update it.
// this will be utilized when we are listening for changes to the apts, so we can update the state to arrived. we will run our own checks on this during the incoming messages.
if (response.status === 400) {
log.error({}, response.data.data.message);
return;
}
// from the response, to keep it simple, we want response.data.id and response.data.relNumber; status defaults to Scheduled if we created it here.
// TODO: add this release data to our db, but save it in json format and we'll parse it out; that way we future-proof it and have everything in here vs just a few things
//console.info(response.data.data, "Was Created");
try {
await db
.insert(opendockApt)
.values({
release: release.ReleaseNumber,
openDockAptId: response.data.data.id,
appointment: response.data.data,
})
.onConflictDoUpdate({
target: opendockApt.release,
set: {
openDockAptId: response.data.data.id,
appointment: response.data.data,
upd_date: sql`NOW()`,
},
})
.returning();
log.info({}, `${release.ReleaseNumber} was created`);
} catch (e) {
log.error({ error: e }, "Error creating new release");
}
// biome-ignore lint/suspicious/noExplicitAny: too many possibilities
} catch (e: any) {
log.error(
{ error: e?.response?.data },
"Error posting new release to opendock",
);
return;
}
}
await delay(750); // rate limit protection
};
export const monitorReleaseChanges = async () => {
// TODO: validate if the setting for opendocks is active and start / stop the system based on this
// if it changes we set to false and the next loop will stop.
const openDockMonitor = await db
.select()
.from(settings)
.where(eq(settings.name, "opendock_sync"));
// console.info("Starting release monitor", lastCheck);
const sqlQuery = sqlQuerySelector(`releaseChecks`) as SqlQuery;
if (!sqlQuery.success) {
return returnFunc({
success: false,
level: "error",
module: "datamart",
subModule: "query",
message: `Error getting releaseChecks info`,
data: [sqlQuery.message],
notify: false,
});
}
if (openDockMonitor[0]?.active) {
// const BUFFER_MS =
// Math.floor(parseInt(openDockMonitor[0]?.value, 10) || 30) * 1.5 * 1000; // this should be >= to the interval we set in the cron TODO: should pull the buffer from the setting and give it an extra 10% then round to nearest int.
createCronJob(
"opendock_sync",
`*/${parseInt(openDockMonitor[0]?.value, 10) || 30} * * * * *`,
async () => {
if (opendockSyncRunning) {
log.warn(
{},
"Skipping opendock_sync because previous run is still active",
);
return;
}
opendockSyncRunning = true;
try {
// set this to the latest time.
const result = await prodQuery(
sqlQuery.query.replace("[dateCheck]", `'${lastCheck}'`),
"Get release info",
);
log.debug(
{ lastCheck },
`${result.data.length} Changes to a release have been made`,
);
if (result.data.length) {
for (const release of result.data) {
await postRelease(release);
// add 2 seconds to account for a massive influx of orders; when we don't finish in one go it won't try to grab the same batch again
const nDate = new Date(release.Upd_Date);
nDate.setSeconds(nDate.getSeconds() + 2);
lastCheck = formatInTimeZone(
nDate.toISOString(),
"UTC",
"yyyy-MM-dd HH:mm:ss",
);
log.debug({ lastCheck }, "Changes to a release have been made");
await delay(500);
}
}
} catch (e) {
console.error(
{ error: e },
"Error occurred while running the monitor job",
);
log.error(
{ error: e },
"Error occurred while running the monitor job",
);
} finally {
opendockSyncRunning = false;
}
},
"monitorReleaseChanges",
);
}
};
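
The nested ternary that builds status in postRelease is easy to misread, and a DeliveryState outside 0 through 4 currently evaluates to false rather than a string. A plain mapping of the documented states, as a sketch:

type OpendockStatus = "Scheduled" | "InProgress" | "Completed";

const mapDeliveryState = (state: number): OpendockStatus => {
if (state === 0 || state === 1) return "Scheduled"; // open / inprogress
if (state === 2) return "InProgress"; // loading
return "Completed"; // 3 partly shipped, 4 delivered
};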

View File

@@ -0,0 +1,19 @@
import { type Express, Router } from "express";
import { requireAuth } from "../middleware/auth.middleware.js";
import { featureCheck } from "../middleware/featureActive.middleware.js";
import getApt from "./opendockGetRelease.route.js";
export const setupOpendockRoutes = (baseUrl: string, app: Express) => {
//setup all the routes
// Apply auth to entire router
const router = Router();
// is the feature even on?
router.use(featureCheck("opendock_sync"));
// we need to make sure we are authenticated to see the releases
router.use(requireAuth);
router.use(getApt);
app.use(`${baseUrl}/api/opendock`, router);
};

View File

@@ -0,0 +1,35 @@
import axios from "axios";
import { createLogger } from "../logger/logger.controller.js";
type ODToken = {
odToken: string | null;
tokenDate: Date | null;
};
export let odToken: ODToken = {
odToken: null,
tokenDate: new Date(),
};
export const getToken = async () => {
const log = createLogger({ module: "opendock", subModule: "releaseMonitor" });
try {
const { status, data } = await axios.post(
`${process.env.OPENDOCK_URL}/auth/login`,
{
email: "blake.matthes@alpla.com",
password: process.env.OPENDOCK_PASSWORD,
},
);
if (status === 400) {
log.error(data.message);
return;
}
odToken = { odToken: data.access_token, tokenDate: new Date() };
log.info({ odToken }, "Token added");
} catch (e) {
log.error({ error: e }, "Error getting/refreshing token");
}
};
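
postRelease and opendockSocketMonitor repeat the same two freshness checks before using the token; a small shared helper sketch, assuming getToken keeps mutating the exported odToken as above:

const TWENTY_FOUR_HOURS = 24 * 60 * 60 * 1000;

export const ensureFreshToken = async () => {
const stale =
!odToken.odToken ||
new Date(odToken.tokenDate || Date.now()).getTime() <
Date.now() - TWENTY_FOUR_HOURS;
if (stale) await getToken();
};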

View File

@@ -0,0 +1,40 @@
import { desc, gte, sql } from "drizzle-orm";
import { Router } from "express";
import { db } from "../db/db.controller.js";
import { opendockApt } from "../db/schema/opendock.schema.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
const r = Router();
r.get("/", async (_, res) => {
//const limit
const daysCreated = 30;
const { data } = await tryCatch(
db
.select()
.from(opendockApt)
.where(
gte(
opendockApt.createdAt,
sql.raw(`NOW() - INTERVAL '${daysCreated} days'`),
),
)
.orderBy(desc(opendockApt.createdAt))
.limit(500),
);
apiReturn(res, {
success: true,
level: "info",
module: "opendock",
subModule: "apt",
message: `The first ${data?.length} Apt(s) that were created in the last ${daysCreated} days`,
data: data ?? [],
status: 200,
});
});
export default r;

View File

@@ -0,0 +1,69 @@
import { io, type Socket } from "socket.io-client";
import { createLogger } from "../logger/logger.controller.js";
import { systemSettings } from "../server.js";
import { getToken, odToken } from "./opendock.utils.js";
const log = createLogger({ module: "opendock", subModule: "releaseMonitor" });
const TWENTY_FOUR_HOURS = 24 * 60 * 60 * 1000;
let socket: Socket | null = null;
export const opendockSocketMonitor = async () => {
// checking if we actually want to run this
if (!systemSettings.filter((n) => n.name === "opendock_sync")[0]?.active) {
log.info({}, "Opendock is not active");
return;
}
if (!odToken.odToken) {
log.info({}, "Getting Auth Token");
await getToken();
}
if (
new Date(odToken.tokenDate || Date.now()).getTime() <
Date.now() - TWENTY_FOUR_HOURS
) {
log.info({}, "Refreshing Auth Token");
await getToken();
}
const baseSubspaceUrl = "wss://subspace.staging.opendock.com";
const url = `${baseSubspaceUrl}?token=${odToken.odToken}`;
socket = io(url, { transports: ["websocket"] }); // Enforce 'websocket' transport only.
socket.on("connect", () => {
console.log("Connected");
});
// socket.on("heartbeat", (data) => {
// console.log(data);
// });
socket.on("create-Appointment", () => {
//console.log("appt create:", data);
});
socket.on("update-Appointment", () => {
//console.log("appt update:", data);
});
socket.on("error", (data) => {
console.log("Error:", data);
});
// socket.onAny((event, ...args) => {
// console.log("Received event:", event, args);
// });
};
export const killOpendockSocket = () => {
if (!socket) {
console.log("No active socket to kill");
return;
}
console.log("🛑 Killing socket connection...");
socket.removeAllListeners(); // optional but clean
socket.disconnect();
socket = null;
console.log("✅ Socket killed");
};

View File

@@ -0,0 +1,17 @@
import { type Express, Router } from "express";
import { requireAuth } from "../middleware/auth.middleware.js";
import restart from "./prodSqlRestart.route.js";
import start from "./prodSqlStart.route.js";
import stop from "./prodSqlStop.route.js";
export const setupProdSqlRoutes = (baseUrl: string, app: Express) => {
//setup all the routes
// Apply auth to entire router
const router = Router();
router.use(requireAuth);
router.use(start);
router.use(stop);
router.use(restart);
app.use(`${baseUrl}/api/system/prodSql`, router);
};

View File

@@ -7,12 +7,17 @@ import { returnFunc } from "../utils/returnHelper.utils.js";
export let pool: sql.ConnectionPool;
export let connected: boolean = false;
export let reconnecting = false;
// start the delay out as 2 seconds
let delayStart = 2000;
let attempt = 0;
const maxAttempts = 10;
export const connectProdSql = async () => {
const serverUp = await checkHostnamePort(`${process.env.PROD_SERVER}:1433`);
if (!serverUp) {
// we will try to reconnect
connected = false;
reconnectToSql();
return returnFunc({
success: false,
level: "error",
@@ -35,7 +40,8 @@ export const connectProdSql = async () => {
// try to connect to the sql server
try {
pool = await sql.connect(prodSqlConfig);
pool = new sql.ConnectionPool(prodSqlConfig);
await pool.connect();
connected = true;
return returnFunc({
success: true,
@@ -47,6 +53,7 @@ export const connectProdSql = async () => {
notify: false,
});
} catch (error) {
reconnectToSql();
return returnFunc({
success: false,
level: "error",
@@ -103,11 +110,6 @@ export const reconnectToSql = async () => {
//set reconnecting to true while we try to reconnect
reconnecting = true;
// start the delay out as 2 seconds
let delayStart = 2000;
let attempt = 0;
const maxAttempts = 10;
while (!connected && attempt < maxAttempts) {
attempt++;
log.info(
@@ -120,7 +122,7 @@ export const reconnectToSql = async () => {
if (!serverUp) {
delayStart = Math.min(delayStart * 2, 30000); // exponential backoff until up to 30000
return;
continue;
}
try {
@@ -132,25 +134,18 @@ export const reconnectToSql = async () => {
);
} catch (error) {
delayStart = Math.min(delayStart * 2, 30000);
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "Failed to reconnect to the prod sql server.",
data: [error],
notify: false,
});
delayStart = Math.min(delayStart * 2, 30000);
log.error({ error }, "Failed to reconnect to the prod sql server.");
}
}
if (!connected) {
if (!connected && attempt >= maxAttempts) {
log.error(
{ notify: true },
"Max reconnect attempts reached on the prodSql server. Stopping retries.",
);
reconnecting = false;
// exit alert someone here
// TODO: exit alert someone here
}
};

View File

@@ -1,11 +1,5 @@
 import { returnFunc } from "../utils/returnHelper.utils.js";
-import {
-  closePool,
-  connected,
-  pool,
-  reconnecting,
-  reconnectToSql,
-} from "./prodSqlConnection.controller.js";
+import { connected, pool } from "./prodSqlConnection.controller.js";
 interface SqlError extends Error {
   code?: string;
@@ -23,29 +17,15 @@ interface SqlError extends Error {
  */
 export const prodQuery = async (queryToRun: string, name: string) => {
   if (!connected) {
-    reconnectToSql();
-    if (reconnecting) {
-      return returnFunc({
-        success: false,
-        level: "error",
-        module: "system",
-        subModule: "prodSql",
-        message: `The sql ${process.env.PROD_PLANT_TOKEN} is trying to reconnect already`,
-        data: [],
-        notify: false,
-      });
-    } else {
-      return returnFunc({
-        success: false,
-        level: "error",
-        module: "system",
-        subModule: "prodSql",
-        message: `${process.env.PROD_PLANT_TOKEN} is not connected, and failed to connect.`,
-        data: [],
-        notify: true,
-      });
-    }
+    return returnFunc({
+      success: false,
+      level: "error",
+      module: "system",
+      subModule: "prodSql",
+      message: `${process.env.PROD_PLANT_TOKEN} is offline or attempting to reconnect`,
+      data: [],
+      notify: false,
+    });
   }
   // change to the correct server
@@ -59,12 +39,11 @@ export const prodQuery = async (queryToRun: string, name: string) => {
     return {
       success: true,
       message: `Query results for: ${name}`,
-      data: result.recordset,
+      data: result.recordset ?? [],
     };
   } catch (error: unknown) {
     const err = error as SqlError;
     if (err.code === "ETIMEOUT") {
-      closePool();
       return returnFunc({
         success: false,
         module: "system",
@@ -77,7 +56,6 @@ export const prodQuery = async (queryToRun: string, name: string) => {
     }
     if (err.code === "EREQUEST") {
-      closePool();
       return returnFunc({
         success: false,
         module: "system",
@@ -0,0 +1,29 @@
import { readFileSync } from "node:fs";

export type SqlQuery = {
  query?: string;
  success: boolean;
  message: string;
};

export const sqlQuerySelector = (name: string): SqlQuery => {
  try {
    const queryFile = readFileSync(
      new URL(`../prodSql/queries/${name}.sql`, import.meta.url),
      "utf8",
    );
    return {
      success: true,
      message: `Query for: ${name}`,
      query: queryFile,
    };
  } catch (e) {
    console.error(e);
    return {
      success: false,
      message:
        "Error getting the query file, please make sure you have the correct name.",
    };
  }
};
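
A sketch of how this selector could compose with prodQuery, assuming the [interval]/[startDate]-style tokens in the .sql files below are substituted as plain strings before execution (runNamedQuery, the import paths, and the token format are illustrative, not part of this diff):

import { sqlQuerySelector } from "../utils/sqlQuerySelector.utils.js"; // assumed path
import { prodQuery } from "../prodSql/prodQuery.controller.js"; // assumed path

// Hypothetical glue: load a named .sql file and replace [token]
// placeholders (e.g. [interval], [startDate]) before running it.
export const runNamedQuery = async (
  name: string,
  params: Record<string, string | number> = {},
) => {
  const selected = sqlQuerySelector(name);
  if (!selected.success || !selected.query) return selected;
  let queryToRun = selected.query;
  for (const [token, value] of Object.entries(params)) {
    queryToRun = queryToRun.replaceAll(`[${token}]`, String(value));
  }
  return prodQuery(queryToRun, name);
};

// e.g. await runNamedQuery("activeArticles", { articles: "1445, 1446" });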

@@ -0,0 +1,63 @@
use AlplaPROD_test1
declare @intervalCheck as int = '[interval]'
/*
Monitors ALPLA purchasing for new orders (see the polling sketch after this query).
Rows only change when the order status is updated: reopening an order will surface
it here, but changes to its positions will not appear until the user reorders or
cancels the PO.
*/
select
IdBestellung as apo
,po.revision as revision
,po.Bestaetigt as confirmed
,po.status
,case po.Status
when 1 then 'Created'
when 2 then 'Ordered'
when 22 then 'Reopened'
when 11 then 'Reopened'
when 4 then 'Planned'
when 5 then 'Partly Delivered'
when 6 then 'Delivered'
when 7 then 'Canceled'
when 8 then 'Closed'
else 'Unknown' end as statusText
,po.IdJournal as journalNum -- use this to check whether we already processed this order
,po.Add_User as add_user
,po.Add_Date as add_date
,po.Upd_User as upd_user
,po.Upd_Date as upd_Date
,po.Bemerkung as remark
,po.IdJournal as journal -- use this to check whether we already processed this order
,isnull((
select
o.IdArtikelVarianten as av
,a.Bezeichnung as alias
,Lieferdatum as deliveryDate
,cast(BestellMenge as decimal(18,2)) as qty
,cast(BestellMengeVPK as decimal(18,0)) as pkg
,cast(PreisProEinheit as decimal(18,0)) as price
,PositionsStatus
,case PositionsStatus
when 1 then 'Created'
when 2 then 'Ordered'
when 22 then 'Reopened'
when 4 then 'Planned'
when 5 then 'Partly Delivered'
when 6 then 'Delivered'
when 7 then 'Canceled'
when 8 then 'Closed'
else 'Unknown' end as statusText
,o.upd_user
,o.upd_date
from T_Bestellpositionen (nolock) as o
left join
T_Artikelvarianten as a on
a.IdArtikelvarianten = o.IdArtikelVarianten
where o.IdBestellung = po.IdBestellung
for json path
), '[]') as position
--,*
from T_Bestellungen (nolock) as po
where po.Upd_Date > dateadd(MINUTE, -@intervalCheck, getdate())
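
The polling sketch referenced in the comment above, reusing the hypothetical runNamedQuery helper sketched earlier; the query name, result shape, and hand-off are assumptions, not part of this diff:

import { runNamedQuery } from "./runNamedQuery.js"; // assumed path (sketched earlier)

// Hypothetical poll loop: run the purchase-order query every
// `intervalMinutes` and skip orders whose journal number was already handled.
const seenJournals = new Set<number>();

export const pollPurchaseOrders = (intervalMinutes = 15) => {
  setInterval(async () => {
    const result = await runNamedQuery("purchaseOrders", {
      interval: intervalMinutes,
    });
    if (!result.success) return;
    for (const po of result.data as { journalNum: number }[]) {
      if (seenJournals.has(po.journalNum)) continue; // validated already
      seenJournals.add(po.journalNum);
      // hand the new or reordered PO to notification/migration handling here
    }
  }, intervalMinutes * 60_000);
};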

@@ -0,0 +1,208 @@
use AlplaPROD_test1
SELECT V_Artikel.IdArtikelvarianten as article,
V_Artikel.Bezeichnung,
V_Artikel.ArtikelvariantenTypBez,
V_Artikel.PreisEinheitBez,
case when sales.price is null then 0 else sales.price end as salesPrice,
TypeOfMaterial = CASE
    WHEN V_Artikel.ArtikelvariantenTypBez LIKE '%Additive' THEN 'AD'
    WHEN V_Artikel.ArtikelvariantenTypBez LIKE '%Masterbatch' THEN 'MB'
    WHEN V_Artikel.ArtikelvariantenTypBez IN (
        'Pallet', 'Top', 'Bags', 'Bag', 'Stretch Wrap', 'Stretch Film',
        'Banding Materials', 'Carton', 'Re-Shipper Box', 'Label', 'Pallet Label',
        'Carton Label', 'Liner', 'Dose Cup', 'Metal Cage', 'Spout', 'Slip Sheet',
        'Palet', 'LID', 'Metal', 'Corner post', 'Bottle Label', 'Paper label',
        'Banding', 'Glue', 'Top Frame', 'IML Label', 'Purch EBM Bottle',
        'Purchased Spout', 'Gaylord', 'Misc. Packaging', 'Sleeve', 'Plastic Bag',
        'Purch Spout', 'Seal', 'Tape', 'Box', 'Label IML', 'Pallet Runner'
    ) THEN 'PKG'
    WHEN V_Artikel.ArtikelvariantenTypBez IN (
        'HD-PE', 'HD-PE PCR', 'HD-PP', 'PP', 'LDPE', 'HDPE', 'PET', 'PET-P', 'PET-G'
    )
        OR V_Artikel.ArtikelvariantenTypBez LIKE '%PCR' THEN 'MM'
    WHEN V_Artikel.ArtikelvariantenTypBez IN ('HDPE-Waste', '$Waste Container', 'Mixed-Waste')
        OR V_Artikel.ArtikelvariantenTypBez LIKE '%-Waste%' THEN 'Waste'
    WHEN V_Artikel.ArtikelvariantenTypBez IN (
        'Bottle', 'SBM Bottle', 'EBM Bottle', 'ISBM Bottle', 'Decorated Bottle'
    ) THEN 'Bottle'
    WHEN V_Artikel.ArtikelvariantenTypBez = 'Preform' THEN 'Preform'
    WHEN V_Artikel.ArtikelvariantenTypBez IN ('Purchased Preform', 'Purchased Caps', 'Purchased_preform')
        THEN 'Purchased_preform'
    WHEN V_Artikel.ArtikelvariantenTypBez IN ('Closures', 'Cap') THEN 'Caps'
    WHEN V_Artikel.ArtikelvariantenTypBez = 'Dummy' THEN 'Not used'
    ELSE 'Item not defined'
END
,V_Artikel.IdArtikelvariantenTyp,
Round(V_Artikel.ArtikelGewicht, 3) as Article_Weight,
IdAdresse,
AdressBez,
AdressTypBez,
ProdBereichBez,
FG = case when V_Artikel.ProdBereichBez in (
        'SBM', 'IM-Caps', 'IM-PET', 'PRINT OFFICE', 'EBM', 'ISBM', 'IM-Finishing'
    )
    then 'FG'
    else 'not Defined Profit Center'
end,
V_Artikel.Umlaeufe as num_of_cycles,
V_FibuKonten_BASIS.FibuKontoNr as CostsCenterId,
V_FibuKonten_BASIS.Bezeichnung as CostCenterDescription,
sales.[KdArtNr] as CustomerArticleNumber,
sales.[KdArtBez] as CustomerArticleDescription,
round(V_Artikel.Zyklus, 2) as CycleTime,
Sypronummer as salesAgreement,
V_Artikel.ProdArtikelBez as ProductFamily
--,REPLACE(pur.UOM,'UOM:','')
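-- take the first whitespace-delimited token after 'UOM:' from the purchase remark; fall back to '1' when it is missing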
,Case when LEFT(
LTRIM(REPLACE(pur.UOM,'UOM:','')),
CHARINDEX(' ', LTRIM(REPLACE(REPLACE(pur.UOM,'UOM:',''), CHAR(13)+CHAR(10), ' ')) + ' ') - 1
) is null then '1' else LEFT(
LTRIM(REPLACE(pur.UOM,'UOM:','')),
CHARINDEX(' ', LTRIM(REPLACE(REPLACE(pur.UOM,'UOM:',''), CHAR(13)+CHAR(10), ' ')) + ' ') - 1
) end AS UOM
--,*
FROM dbo.V_Artikel (nolock)
join
dbo.V_Artikelvarianten (nolock) on dbo.V_Artikel.IdArtikelvarianten =
dbo.V_Artikelvarianten.IdArtikelvarianten
join
dbo.V_FibuKonten_BASIS (nolock) on dbo.V_Artikelvarianten.IdFibuKonto =
dbo.V_FibuKonten_BASIS.IdFibuKonto
-- adding in the sales price
left join
(select * from
(select
ROW_NUMBER() OVER (PARTITION BY IdArtikelvarianten ORDER BY GueltigabDatum DESC) AS RN,
IdArtikelvarianten as av
,GueltigabDatum as validDate
,VKPreis as price
,[KdArtNr]
,[KdArtBez]
--,*
from dbo.T_HistoryVK (nolock)
where
--GueltigabDatum > getDate() - 120
--and
Aktiv = 1
and StandardKunde = 1 -- default address
) a
where RN = 1) as sales
on dbo.V_Artikel.IdArtikelvarianten = sales.av
/* adding the purchase price info */
left join
(select * from
(select
ROW_NUMBER() OVER (PARTITION BY IdArtikelvarianten ORDER BY GueltigabDatum DESC) AS RN,
IdArtikelvarianten as av
,GueltigabDatum as validDate
,EKPreis as price
,LiefArtNr as supplierNr
--,CASE
-- WHEN Bemerkung IS NOT NULL AND Bemerkung LIKE '%UOM:%'
-- THEN
-- -- in case there is something odd in the remark, strip newlines and the like before checking
-- LEFT(
-- REPLACE(REPLACE(Bemerkung, CHAR(13)+CHAR(10), ' '), CHAR(10), ' '),
-- CASE
-- WHEN CHARINDEX(' ', REPLACE(REPLACE(Bemerkung, CHAR(13)+CHAR(10), ' '), CHAR(10), ' ')) > 0
-- THEN CHARINDEX(' ', REPLACE(REPLACE(Bemerkung, CHAR(13)+CHAR(10), ' '), CHAR(10), ' ')) - 1
-- ELSE LEN(Bemerkung)
-- END
-- )
-- ELSE 'UOM:1'
-- END AS UOM
,CASE
WHEN Bemerkung IS NOT NULL AND Bemerkung LIKE '%UOM:%'
THEN
LTRIM(
SUBSTRING(
Bemerkung,
CHARINDEX('UOM:', UPPER(Bemerkung)) + LEN('UOM:'),
LEN(Bemerkung)
)
)
ELSE
'UOM:1'
END AS UOM
,Bemerkung
--,*
from dbo.T_HistoryEK (nolock)
where
StandardLieferant = 1 -- default address
) a
where RN = 1) as pur
on dbo.V_Artikel.IdArtikelvarianten = pur.av
where V_Artikel.aktiv = 1 --and dbo.V_Artikel.IdArtikelvarianten = 1445
order by V_Artikel.IdArtikelvarianten /*, TypeOfMaterial */

@@ -0,0 +1,43 @@
/**
This will replace activeArticles once all data is remapped into this query.
Note in the docs that activeArticles will go stale sooner or later.
**/
use [test1_AlplaPROD2.0_Read]
select a.Id,
a.HumanReadableId as av,
a.Alias as alias,
p.LoadingUnitsPerTruck as loadingUnitsPerTruck,
p.LoadingUnitsPerTruck * p.LoadingUnitPieces as qtyPerTruck,
p.LoadingUnitPieces,
case when i.MinQuantity IS NOT NULL then round(cast(i.MinQuantity as float), 2) else 0 end as min,
case when i.MaxQuantity IS NOT NULL then round(cast(i.MaxQuantity as float),2) else 0 end as max
from masterData.Article (nolock) as a
/* sales price */
left join
(select *
from (select
id,
PackagingId,
ArticleId,
DefaultCustomer,
ROW_NUMBER() OVER (PARTITION BY ArticleId ORDER BY ValidAfter DESC) AS RowNum
from masterData.SalesPrice (nolock)
where DefaultCustomer = 1) as x
where RowNum = 1
) as s
on a.id = s.ArticleId
/* pkg instructions */
left join
masterData.PackagingInstruction (nolock) as p
on s.PackagingId = p.id
/* stock limits */
left join
masterData.StockLimit (nolock) as i
on a.id = i.ArticleId
where a.active = 1
and a.HumanReadableId in ([articles])

@@ -0,0 +1,45 @@
select x.idartikelVarianten as av
,ArtikelVariantenAlias as Alias
--x.Lfdnr as RunningNumber,
--,round(sum(EinlagerungsMengeVPKSum),0) as Total_Pallets
--,sum(EinlagerungsMengeSum) as Total_PalletQTY
,round(sum(VerfuegbareMengeVPKSum),0) as Available_Pallets
,sum(VerfuegbareMengeSum) as Available_PalletQTY
,sum(case when c.Description LIKE '%COA%' then GesperrteMengeVPKSum else 0 end) as COA_Pallets
,sum(case when c.Description LIKE '%COA%' then GesperrteMengeSum else 0 end) as COA_QTY
--,sum(case when c.Description NOT LIKE '%COA%' then GesperrteMengeVPKSum else 0 end) as Held_Pallets
--,sum(case when c.Description NOT LIKE '%COA%' then GesperrteMengeSum else 0 end) as Held_QTY
,IdProdPlanung as Lot
--,IdAdressen
--,x.AdressBez
--,*
from [AlplaPROD_test1].dbo.[V_LagerPositionenBarcodes] (nolock) x
left join
[AlplaPROD_test1].dbo.T_EtikettenGedruckt (nolock) on
x.Lfdnr = T_EtikettenGedruckt.Lfdnr AND T_EtikettenGedruckt.Lfdnr > 1
left join
(SELECT *
FROM [AlplaPROD_test1].[dbo].[T_BlockingDefects] (nolock) where Active = 1) as c
on x.IdMainDefect = c.IdBlockingDefect
/*
The data below is controlled by the user in Excel; by default everything is passed over.
IdAdressen = 3
*/
where
--IdArtikelTyp = 1
x.IdWarenlager not in (6, 1)
--and IdAdressen
--and x.IdWarenlager in (0)
group by x.IdArtikelVarianten
,ArtikelVariantenAlias
,IdProdPlanung
--,c.Description
,IdAdressen
,x.AdressBez
--, x.Lfdnr
order by x.IdArtikelVarianten

@@ -0,0 +1,74 @@
use [test1_AlplaPROD2.0_Read]
DECLARE @StartDate DATE = '[startDate]' -- 2025-1-1
DECLARE @EndDate DATE = '[endDate]' -- 2025-1-31
SELECT
r.[ArticleHumanReadableId]
,[ReleaseNumber]
,h.CustomerOrderNumber
,x.CustomerLineItemNumber
,[CustomerReleaseNumber]
,[ReleaseState]
,[DeliveryState]
,ea.JournalNummer as BOL_Number
,[ReleaseConfirmationState]
,[PlanningState]
--,format(r.[OrderDate], 'yyyy-MM-dd HH:mm') as OrderDate
,r.[OrderDate]
--,FORMAT(r.[DeliveryDate], 'yyyy-MM-dd HH:mm') as DeliveryDate
,r.[DeliveryDate]
--,FORMAT(r.[LoadingDate], 'yyyy-MM-dd HH:mm') as LoadingDate
,r.[LoadingDate]
,[Quantity]
,[DeliveredQuantity]
,r.[AdditionalInformation1]
,r.[AdditionalInformation2]
,[TradeUnits]
,[LoadingUnits]
,[Trucks]
,[LoadingToleranceType]
,[SalesPrice]
,[Currency]
,[QuantityUnit]
,[SalesPriceRemark]
,r.[Remark]
,[Irradiated]
,r.[CreatedByEdi]
,[DeliveryAddressHumanReadableId]
,DeliveryAddressDescription
,[CustomerArtNo]
,[TotalPrice]
,r.[ArticleAlias]
FROM [order].[Release] (nolock) as r
left join
[order].LineItem as x on
r.LineItemId = x.id
left join
[order].Header as h on
x.HeaderId = h.id
--bol stuff
left join
AlplaPROD_test1.dbo.V_LadePlanungenLadeAuftragAbruf (nolock) as zz
on zz.AbrufIdAuftragsAbruf = r.ReleaseNumber
left join
(select * from (SELECT
ROW_NUMBER() OVER (PARTITION BY IdJournal ORDER BY add_date DESC) AS RowNum
,*
FROM [AlplaPROD_test1].[dbo].[T_Lieferungen] (nolock)) x
where RowNum = 1) as ea on
zz.IdLieferschein = ea.IdJournal
where
--r.ArticleHumanReadableId in ([articles])
--r.ReleaseNumber = 1452
r.DeliveryDate between @StartDate AND @EndDate
and DeliveredQuantity > 0
--and Journalnummer = 169386

@@ -0,0 +1,29 @@
use [test1_AlplaPROD2.0_Read]
select
customerartno as CustomerArticleNumber
,h.CustomerOrderNumber as CustomerOrderNumber
,l.CustomerLineItemNumber as CustomerLineNumber
,r.CustomerReleaseNumber as CustomerReleaseNumber
,r.Quantity
,format(r.DeliveryDate, 'MM/dd/yyyy HH:mm') as DeliveryDate
,h.CustomerHumanReadableId as CustomerID
,r.Remark
--,*
from [order].[Release] as r (nolock)
left join
[order].LineItem as l (nolock) on
l.id = r.LineItemId
left join
[order].Header as h (nolock) on
h.id = l.HeaderId
WHERE releaseState not in (1, 2, 3, 4)
AND h.CreatedByEdi = 1
AND r.deliveryDate < getdate() + 1
--AND h.CustomerHumanReadableId in (0)
order by r.deliveryDate

@@ -0,0 +1,8 @@
SELECT format(RequirementDate, 'yyyy-MM-dd') as requirementDate
,ArticleHumanReadableId
,CustomerArticleNumber
,ArticleDescription
,Quantity
FROM [test1_AlplaPROD2.0_Read].[forecast].[Forecast]
where DeliveryAddressHumanReadableId in ([customers])
order by RequirementDate

@@ -0,0 +1,64 @@
use [test1_AlplaPROD2.0_Read]
select
ArticleHumanReadableId as article
,ArticleAlias as alias
,round(sum(QuantityLoadingUnits),2) total_pallets
,round(sum(Quantity),2) as total_palletQTY
,round(sum(case when State = 0 then QuantityLoadingUnits else 0 end),2) available_Pallets
,round(sum(case when State = 0 then Quantity else 0 end),2) available_QTY
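-- blocking defect 864 is treated as the COA hold; any other defect counts as held (see the coa_* and held_* columns below)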
,round(sum(case when b.HumanReadableId = 864 then QuantityLoadingUnits else 0 end),2) as coa_Pallets
,round(sum(case when b.HumanReadableId = 864 then Quantity else 0 end),2) as coa_QTY
,round(sum(case when b.HumanReadableId <> 864 then QuantityLoadingUnits else 0 end),2) as held_Pallets
,round(sum(case when b.HumanReadableId <> 864 then Quantity else 0 end),2) as held_QTY
,round(sum(case when w.type = 7 then QuantityLoadingUnits else 0 end),2) as consignment_Pallets
,round(sum(case when w.type = 7 then Quantity else 0 end),2) as consignment_qty
--,l.RunningNumber
/** datamart include lot number **/
--,l.MachineLocation,l.MachineName,l.ProductionLotRunningNumber as lot
/** data mart include location data **/
--,l.WarehouseDescription,l.LaneDescription
/** historical section **/
--,l.ProductionLotRunningNumber as lot,l.warehousehumanreadableid as warehouseId,l.WarehouseDescription as warehouseDescription,l.lanehumanreadableid as locationId,l.lanedescription as laneDescription
,articleTypeName
FROM [warehousing].[WarehouseUnit] as l (nolock)
left join
(
SELECT [Id]
,[HumanReadableId]
,d.[Description]
,[DefectGroupId]
,[IsActive]
FROM [blocking].[BlockingDefect] as g (nolock)
left join
[AlplaPROD_test1].dbo.[T_BlockingDefects] as d (nolock) on
d.IdGlobalBlockingDefect = g.HumanReadableId
) as b on
b.id = l.MainDefectId
left join
[warehousing].[warehouse] as w (nolock) on
w.id = l.warehouseid
where LaneHumanReadableId not in (20000,21000)
group by ArticleHumanReadableId,
ArticleAlias,
ArticleTypeName
--,l.RunningNumber
/** datamart include lot number **/
--,l.MachineLocation,l.MachineName,l.ProductionLotRunningNumber
/** data mart include location data **/
--,l.WarehouseDescription,l.LaneDescription
/** historical section **/
--,l.ProductionLotRunningNumber,l.warehousehumanreadableid,l.WarehouseDescription,l.lanehumanreadableid,l.lanedescription
order by ArticleHumanReadableId

@@ -0,0 +1,33 @@
use [test1_AlplaPROD2.0_Read]
select
customerartno
,r.ArticleHumanReadableId as article
,r.ArticleAlias as articleAlias
,ReleaseNumber
,h.CustomerOrderNumber as header
,l.CustomerLineItemNumber as lineItem
,r.CustomerReleaseNumber as releaseNumber
,r.LoadingUnits
,r.Quantity
,r.TradeUnits
,h.CustomerHumanReadableId
,r.DeliveryAddressDescription
,format(r.LoadingDate, 'MM/dd/yyyy HH:mm') as loadingDate
,format(r.DeliveryDate, 'MM/dd/yyyy HH:mm') as deliveryDate
,r.Remark
--,*
from [order].[Release] as r (nolock)
left join
[order].LineItem as l (nolock) on
l.id = r.LineItemId
left join
[order].Header as h (nolock) on
h.id = l.HeaderId
WHERE releasestate not in (1, 2, 4)
AND r.deliverydate between getDate() - [startDay] and getdate() + [endDay]
order by r.deliverydate

@@ -0,0 +1,19 @@
use [test1_AlplaPROD2.0_Reporting]
declare @startDate nvarchar(30) = '[startDate]' --'2024-12-30'
declare @endDate nvarchar(30) = '[endDate]' --'2025-08-09'
select MachineLocation,
ArticleHumanReadableId as article,
sum(Quantity) as Produced,
count(Quantity) as palletsProduced,
FORMAT(convert(date, ProductionDay), 'M/d/yyyy') as ProductionDay,
ProductionLotHumanReadableId as productionLot
from [reporting_productionControlling].[ScannedUnit] (nolock)
where convert(date, ProductionDay) between @startDate and @endDate
and ArticleHumanReadableId in ([articles])
and BookedOut is null
group by MachineLocation, ArticleHumanReadableId,ProductionDay, ProductionLotHumanReadableId

@@ -0,0 +1,23 @@
use AlplaPROD_test1
/**
Move this over to the delivery date range query once the shift data is mapped over correctly.
Update the PSI pieces of this as well.
**/
declare @start_date nvarchar(30) = '[startDate]' --'2025-01-01'
declare @end_date nvarchar(30) = '[endDate]' --'2025-08-09'
select IdArtikelVarianten,
ArtikelVariantenBez,
sum(Menge) totalDelivered,
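-- anything delivered before 07:00 belongs to the previous shipping day (e.g. 2025-01-02 03:30 -> 2025-01-01)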
case when convert(time, upd_date) between '00:00' and '07:00' then convert(date, upd_date - 1) else convert(date, upd_date) end as ShippingDate
from dbo.V_LadePlanungenLadeAuftragAbruf (nolock)
where upd_date between CONVERT(datetime, @start_date + ' 7:00') and CONVERT(datetime, @end_date + ' 7:00')
and IdArtikelVarianten in ([articles])
group by IdArtikelVarianten,
case when convert(time, upd_date) between '00:00' and '07:00' then convert(date, upd_date - 1) else convert(date, upd_date) end,
ArtikelVariantenBez

@@ -0,0 +1,32 @@
use AlplaPROD_test1
declare @start_date nvarchar(30) = '[startDate]' --'2025-01-01'
declare @end_date nvarchar(30) = '[endDate]' --'2025-08-09'
/*
Articles must be passed in, along with the date range we want to see.
*/
select x.IdArtikelvarianten As Article,
ProduktionAlias as Description,
standort as MachineId,
MaschinenBezeichnung as MachineName,
--MaschZyklus as PlanningCycleTime,
x.IdProdPlanung as LotNumber,
FORMAT(ProdTag, 'MM/dd/yyyy') as ProductionDay,
x.planMenge as TotalPlanned,
ProduktionMenge as QTYPerDay,
round(ProduktionMengeVPK, 2) PalDay,
Status as finished
--MaschStdAuslastung as nee
from dbo.V_ProdLosProduktionJeProdTag_PLANNING (nolock) as x
left join
dbo.V_ProdPlanung (nolock) as p on
x.IdProdPlanung = p.IdProdPlanung
where ProdTag between @start_date and @end_date
and p.IdArtikelvarianten in ([articles])
--and V_ProdLosProduktionJeProdTag_PLANNING.IdKunde = 10
--and IdProdPlanung = 18442
order by ProdTag desc

@@ -1,4 +1,4 @@
export const prodSqlServerStats = `
DECLARE @UptimeSeconds INT;
DECLARE @StartTime DATETIME;
@@ -13,4 +13,4 @@ SELECT
(@UptimeSeconds % 86400) / 3600 AS [Hours],
(@UptimeSeconds % 3600) / 60 AS [Minutes],
(@UptimeSeconds % 60) AS [Seconds];
`;

@@ -0,0 +1,44 @@
use [test1_AlplaPROD2.0_Read]
SELECT
'Alert! new blocking order: #' + cast(bo.HumanReadableId as varchar) + ' - ' + bo.ArticleVariantDescription as subject
,cast(bo.[HumanReadableId] as varchar) as blockingNumber
,bo.[ArticleVariantDescription] as article
,cast(bo.[CustomerHumanReadableId] as varchar) + ' - ' + bo.[CustomerDescription] as customer
,convert(varchar(10), bo.[BlockingDate], 101) + ' ' + convert(varchar(5), bo.[BlockingDate], 108) as blockingDate
,cast(ArticleVariantHumanReadableId as varchar) + ' - ' + ArticleVariantDescription as av
,case when bo.Remark = '' or bo.Remark is NULL then 'Please reach out to quality for the reason this was placed on hold, as a remark was not entered during the blocking process' else bo.Remark end as remark
,cast(FORMAT(TotalAmountOfPieces, '###,###') as varchar) + ' / ' + cast(LoadingUnit as varchar) as piecesAndLoadingUnits
,bo.ProductionLotHumanReadableId as lotNumber
,cast(osd.IdBlockingDefectsGroup as varchar) + ' - ' + osd.Description as mainDefectGroup
,cast(df.HumanReadableId as varchar) + ' - ' + os.Description as mainDefect
,lot.MachineLocation as line
--,*
FROM [blocking].[BlockingOrder] (nolock) as bo
/*** get the defect details ***/
join
[blocking].[BlockingDefect] (nolock) AS df
on df.id = bo.MainDefectId
/*** pull description from 1.0 ***/
left join
[AlplaPROD_test1].[dbo].[T_BlockingDefects] (nolock) as os
on os.IdGlobalBlockingDefect = df.HumanReadableId
/*** join in 1.0 defect group ***/
left join
[AlplaPROD_test1].[dbo].[T_BlockingDefectsGroups] (nolock) as osd
on osd.IdBlockingDefectsGroup = os.IdBlockingDefectsGroup
left join
[productionControlling].[ProducedLot] (nolock) as lot
on lot.id = bo.ProductionLotId
where
bo.[BlockingDate] between getdate() - 2 and getdate() + 3 and
bo.BlockingTrigger = 1 -- so we only get the IR blocking and not COA
--and HumanReadableId NOT IN ([sentBlockingOrders])
and bo.HumanReadableId > [lastBlocking]

@@ -0,0 +1,72 @@
SELECT
[Id]
,[ReleaseNumber]
,[CustomerReleaseNumber]
,[ReleaseState]
,[LineItemId]
,[BlanketOrderId]
,[DeliveryState]
,[ReleaseConfirmationState]
,[PlanningState]
,[OrderDate]
,cast([DeliveryDate] as datetime2) as DeliveryDate
,[LoadingDate]
,[Quantity]
,[DeliveredQuantity]
,[DeliveredQuantityTradeUnits]
,[DeliveredQuantityLoadingUnits]
,[PackagingId]
,[PackagingHumanReadableId]
,[PackagingDescription]
,[MainMaterialId]
,[MainMaterialHumanReadableId]
,[MainMaterialDescription]
,[AdditionalInformation1]
,[AdditionalInformation2]
,[D365SupplierLot]
,[TradeUnits]
,[LoadingUnits]
,[Trucks]
,[LoadingToleranceType]
,[UnderdeliveryDeviation]
,[OverdeliveryDeviation]
,[ArticleAccountRequirements_ArticleExact]
,[ArticleAccountRequirements_CustomerExact]
,[ArticleAccountRequirements_PackagingExact]
,[ArticleAccountRequirements_MainMaterialExact]
,[PriceLogicType]
,[AllowProductionLotMixing]
,[EnforceStrictPicking]
,[SalesPrice]
,[Currency]
,[QuantityUnit]
,[SalesPriceRemark]
,[DeliveryConditionId]
,[DeliveryConditionHumanReadableId]
,[DeliveryConditionDescription]
,[PaymentTermsId]
,[PaymentTermsHumanReadableId]
,[PaymentTermsDescription]
,[Remark]
,[DeliveryAddressId]
,[DeliveryAddressHumanReadableId]
,[DeliveryAddressDescription]
,[DeliveryStreetName]
,[DeliveryAddressZip]
,[DeliveryCity]
,[DeliveryCountry]
,[ReleaseDiscount]
,[CustomerArtNo]
,[LineItemHumanReadableId]
,[LineItemArticle]
,[LineItemArticleWeight]
,[LineItemQuantityType]
,[TotalPrice]
,[Add_User]
,[Add_Date]
,[Upd_User]
,cast([Upd_Date] as dateTime) as Upd_Date
,[VatRate]
,[ArticleAlias]
FROM [test1_AlplaPROD2.0_Reporting].[reporting_order].[Release] (nolock)
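-- [dateCheck] presumably arrives as a quoted 'yyyy-MM-dd HH:mm:ss' string; in that format the text comparison sorts chronologically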
where format([Upd_Date], 'yyyy-MM-dd HH:mm:ss') > [dateCheck]

@@ -0,0 +1,28 @@
use [test1_AlplaPROD2.0_Read]
SELECT
--JSON_VALUE(content, '$.EntityId') as labelId
a.id
,ActorName
,FORMAT(PrintDate, 'yyyy-MM-dd HH:mm') as printDate
,FORMAT(CreatedDateTime, 'yyyy-MM-dd HH:mm') createdDateTime
,l.ArticleHumanReadableId as av
,l.ArticleDescription as alias
,PrintedCopies
,p.name as printerName
,RunningNumber
--,*
FROM [support].[AuditLog] (nolock) as a
left join
[labelling].[InternalLabel] (nolock) as l on
l.id = JSON_VALUE(content, '$.EntityId')
left join
[masterData].[printer] (nolock) as p on
p.id = l.PrinterId
where message like '%reprint%'
and CreatedDateTime > DATEADD(minute, -[intervalCheck], SYSDATETIMEOFFSET())
and a.id > [ignoreList]
order by CreatedDateTime desc

Some files were not shown because too many files have changed in this diff.