59 Commits

Author SHA1 Message Date
ba3227545d chore(release): 0.0.1-alpha.4
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m4s
Release and Build Image / release (push) Successful in 12s
2026-04-15 07:31:49 -05:00
84909bfcf8 ci(service): changes to the script to allow running the PowerShell under execution policy restrictions
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
2026-04-15 07:31:06 -05:00
e0d0ac2077 feat(datamart): psi data has been added :D 2026-04-15 07:29:35 -05:00
52a6c821f4 fix(datamart): error when running build that crashed everything
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m34s
2026-04-14 20:30:34 -05:00
eccaf17332 feat(datamart): migrations completed; remaining is the deactivation that will be run by analytics
Some checks failed
Build and Push LST Docker Image / docker (push) Failing after 39s
2026-04-14 20:25:20 -05:00
6307037985 feat(tcp crud): tcp server start, stop, restart endpoints + status check
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m30s
2026-04-13 17:30:47 -05:00
4b6061c478 ci(agent): added in sherman
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m36s
2026-04-13 15:36:50 -05:00
fc6dc82d84 refactor(services): added in examples for migration stuff 2026-04-13 15:36:29 -05:00
6ba905a887 docs(docs): removed Docusaurus as all docs will be inside lst now to better assist users 2026-04-13 15:36:02 -05:00
f33587a3d9 refactor(sql): corrections to the way we reconnect so the app can error out and be reactivated later 2026-04-13 15:35:12 -05:00
80189baf90 feat(ocp): printer sync and logging logic added 2026-04-13 15:34:18 -05:00
87f738702a docs(notifications): docs for intro, notifications, reprint added
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m25s
2026-04-10 21:35:12 -05:00
38a0b65e94 refactor(connection): corrected the connection to the old system 2026-04-10 21:33:55 -05:00
9a0ef8e51a refactor(notification): blocking added 2026-04-10 21:33:26 -05:00
dcb3f2dd13 refactor(server): added in serverCrash email 2026-04-10 21:32:25 -05:00
e47ea9ec52 ci(agent): added in jeff city 2026-04-10 21:31:57 -05:00
ca3425d327 docs(env example): updated the file 2026-04-10 21:30:46 -05:00
3bf024cfc9 refactor(agent): changed to have the test servers on their own push for better testing
production servers will soon pull a build from git rather than push the zip, so splitting things up
now
2026-04-10 14:12:02 -05:00
9d39c13510 refactor(purchase): changes how the error handling works so a better email can be sent 2026-04-10 13:58:30 -05:00
c9eb59e2ad refactor(reprint): new query added to deactivate the old notification so no chance of duplicates 2026-04-10 13:57:52 -05:00
b0e5fd7999 feat(migrate): quality alert migrated 2026-04-10 13:57:15 -05:00
07ebf88806 refactor(templates): corrections for new notify process on critical errors 2026-04-10 10:33:01 -05:00
79e653efa3 refactor(logging): when notify is true send the error to systemAdmins 2026-04-10 10:32:20 -05:00
d05a0ce930 chore(release): 0.0.1-alpha.3
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m1s
Release and Build Image / release (push) Successful in 11s
2026-04-10 08:22:16 -05:00
995b1dda7c refactor(send email): changes the error message to show the true message in the error
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m3s
2026-04-09 21:15:26 -05:00
97f93a1830 refactor(reprints): changes the module and submodule around to be more accurate 2026-04-09 21:14:36 -05:00
635635b356 refactor(gp connect): gp connect was added to long-lived services 2026-04-09 21:13:38 -05:00
a691dc276e feat(purchase hist): finished up purchase historical / gp updates 2026-04-09 21:12:43 -05:00
8dfcbc5720 chore(release): 0.0.1-alpha.2
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m29s
Release and Build Image / release (push) Successful in 17s
2026-04-08 16:13:38 -05:00
103ae77e9f build(release): docker and release corrections
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
2026-04-08 16:12:54 -05:00
beeccc6e8d chore(release): 0.0.1-alpha.1
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
Release and Build Image / release (push) Failing after 15s
2026-04-08 15:58:21 -05:00
0880298cf5 refactor(opendock refactor on how releases are posted): this was a bug, maybe just a better refactor
Some checks failed
Build and Push LST Docker Image / docker (push) Has been cancelled
2026-04-08 15:57:20 -05:00
34b0abac36 feat(purchase history): purchase history changed to long running, no notification 2026-04-08 15:55:25 -05:00
28c226ddbc build(agent): added westbend into the flow 2026-04-07 22:33:38 -05:00
42861cc69e feat(purchase): historical data capture for alpla purchase 2026-04-07 22:33:11 -05:00
5f3d683a13 refactor(notification): reprint - removed a console log as it shouldn't be there 2026-04-06 16:41:39 -05:00
a17787e852 feat(notification): reprint added
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m6s
2026-04-06 16:01:06 -05:00
5865ac3b99 feat(notification): base notification sub and admin completed
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m59s
can now sub to a notification and users can remove themselves, plus an admin can remove; updates to add
new emails are good as well
2026-04-06 12:59:30 -05:00
637de857f9 feat(user notifications): added the ability for users to sub to notifications and add multi email 2026-04-06 09:29:46 -05:00
3ecf5fb916 refactor(userprofile): changes to have the table be blank and say nothing subscribed
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 2m32s
later we will leave this off the profile and add it once at least one notification is subscribed
2026-04-05 20:50:27 -05:00
92ba3ef512 docs(readme): updated progress data
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m18s
2026-04-05 20:44:49 -05:00
7d6c2db89c style(notification): style changes to the notification card and started the table
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m49s
2026-04-03 17:16:58 -05:00
74262beb65 refactor(notification): select menu looks proper now 2026-04-03 17:16:31 -05:00
f3b8dd94e5 refactor(queries): changed dev version to be 1500ms vs 5000ms 2026-04-03 17:16:02 -05:00
0059b9b850 build(changelog): reset the change log after all crap testing 2026-04-03 17:15:22 -05:00
1ad789b2b9 chore(release): 0.1.0-alpha.12
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m45s
Release and Build Image / release (push) Successful in 10s
2026-04-03 16:54:44 -05:00
079478f932 fix(typo): more damn typos 2026-04-03 16:54:29 -05:00
d6d5b451cd chore(release): 0.1.0-alpha.11
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m45s
Release and Build Image / release (push) Successful in 10s
2026-04-03 16:49:20 -05:00
76747cf917 fix(release): typo that caused errors 2026-04-03 16:49:12 -05:00
6e85991062 refactor(release): changes to only have the changelog in the release 2026-04-03 16:43:17 -05:00
98e408cb85 chore(release): 0.1.0-alpha.10
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m48s
Release and Build Image / release (push) Successful in 1m22s
2026-04-03 15:30:02 -05:00
ed052dff3c refactor(changelog): reverted back to commit-changelog; like it more than changesets for solo dev 2026-04-03 15:29:49 -05:00
8f59bba614 chore(release): 0.1.0-alpha.9
All checks were successful
Release and Build Image / release (push) Successful in 1m52s
2026-04-03 15:22:26 -05:00
fb2c5609aa chore(release): version packages
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m46s
Release and Build Image / release (push) Successful in 1m20s
2026-04-03 13:06:52 -05:00
17aed6cb89 fix(lala): something here 2026-04-03 13:06:14 -05:00
b02b93b83f chore(release): version packages
All checks were successful
Build and Push LST Docker Image / docker (push) Successful in 1m50s
Release and Build Image / release (push) Successful in 1m26s
2026-04-03 12:51:52 -05:00
9ceba8b5bb fix(i suck): more learning experience 2026-04-03 12:51:11 -05:00
2c0dbf95c7 chore(release): version packages
Some checks failed
Build and Push LST Docker Image / docker (push) Successful in 1m50s
Release and Build Image / release (push) Failing after 1m22s
2026-04-03 12:44:43 -05:00
860207a60b fix(build): typo 2026-04-03 12:44:16 -05:00
203 changed files with 30889 additions and 21344 deletions

View File

@@ -1,8 +0,0 @@
# Changesets
Hello and welcome! This folder has been automatically generated by `@changesets/cli`, a build tool that works
with multi-package repos, or single-package repos to help you version and publish your code. You can
find the full documentation for it [in our repository](https://github.com/changesets/changesets)
We have a quick list of common questions to get you started engaging with this project in
[our documentation](https://github.com/changesets/changesets/blob/main/docs/common-questions.md)

View File

@@ -1,5 +0,0 @@
---
"lst_v3": patch
---
sop stuff

View File

@@ -1,11 +0,0 @@
{
"$schema": "https://unpkg.com/@changesets/config/schema.json",
"changelog": "@changesets/cli/changelog",
"commit": false,
"fixed": [],
"linked": [],
"access": "restricted",
"baseBranch": "main",
"updateInternalDependencies": "patch",
"ignore": []
}

View File

@@ -1,5 +0,0 @@
---
"lst_v3": patch
---
changed the password to token

View File

@@ -1,5 +0,0 @@
---
"lst_v3": patch
---
build stuff

View File

@@ -1,16 +0,0 @@
{
"mode": "pre",
"tag": "alpha",
"initialVersions": {
"lst_v3": "0.0.1"
},
"changesets": [
"bold-ties-remain",
"lucky-dingos-brake",
"neat-years-unite",
"soft-onions-appear",
"strict-towns-grin",
"tall-cooks-rule",
"thirty-grapes-shine"
]
}

View File

@@ -1,5 +0,0 @@
---
"lst_v3": patch
---
external url added for docker

View File

@@ -1,7 +0,0 @@
---
"lst_v3": minor
---
more build stuff
### Build
- changes to now auto release when we push new v*

View File

@@ -1,5 +0,0 @@
---
"lst_v3": patch
---
more info in the change stuff

View File

@@ -1,10 +0,0 @@
---
"lst_v3": patch
---
Changes to the build process
# Build
- Added release flow
- when new release is in build the docker image
- latest will still be built as well

View File

@@ -1,32 +1,52 @@
NODE_ENV=development
# Server
PORT=3000
URL=http://localhost:3000
SERVER_IP=10.75.2.38
TIMEZONE=America/New_York
TCP_PORT=2222
# authentication
BETTER_AUTH_SECRET=""
# Better auth Secret
BETTER_AUTH_SECRET=
RESET_EXPIRY_SECONDS=3600
# logging
LOG_LEVEL=debug
LOG_LEVEL=
# prodServer
PROD_SERVER=usmcd1vms036
PROD_PLANT_TOKEN=test3
PROD_USER=alplaprod
PROD_PASSWORD=password
# SMTP password
SMTP_PASSWORD=
# opendock
OPENDOCK_URL=https://neutron.opendock.com
OPENDOCK_PASSWORD=
DEFAULT_DOCK=
DEFAULT_LOAD_TYPE=
DEFAULT_CARRIER=
# prodServer: when running on an actual prod server use localhost; this way we don't go out and back in.
PROD_SERVER=
PROD_PLANT_TOKEN=
PROD_USER=
PROD_PASSWORD=
# Tech user for alplaprod api
TEC_API_KEY=
# AD STUFF
# this is mainly used for purchase stuff to reference reqs
LDAP_URL=
# postgres connection
DATABASE_HOST=localhost
DATABASE_PORT=5433
DATABASE_USER=user
DATABASE_PASSWORD=password
DATABASE_DB=lst_dev
DATABASE_PORT=5432
DATABASE_USER=
DATABASE_PASSWORD=
DATABASE_DB=
# how is the app running: server or client. When in client mode you must provide the server
APP_RUNNING_IN=server
SERVER_NAME=localhost
# Gp connection
GP_USER=
GP_PASSWORD=
#dev stuff
GITEA_TOKEN=""
EMAIL_USER=""
EMAIL_PASSWORD=""
# how often to check for new/updated queries in min
QUERY_TIME_TYPE=m #valid options are m, h
QUERY_CHECK=1
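The last two variables drive the query-refresh poller. A minimal sketch of how they might be combined into a millisecond interval (variable names come from the file above; the helper itself is hypothetical):

```ts
// Hypothetical helper: turn QUERY_TIME_TYPE ("m" or "h") and QUERY_CHECK
// into a millisecond interval for the query poller.
const unit = process.env.QUERY_TIME_TYPE === "h" ? 3_600_000 : 60_000;
const every = Number(process.env.QUERY_CHECK ?? "1");
const intervalMs = every * unit; // QUERY_TIME_TYPE=m, QUERY_CHECK=1 => 60000
```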

View File

@@ -9,6 +9,18 @@ jobs:
release:
runs-on: ubuntu-latest
env:
# Internal/origin Gitea URL. Do NOT use the Cloudflare fronted URL here.
# Examples:
# http://gitea.internal.lan:3000
# https://gitea-origin.yourdomain.local
GITEA_INTERNAL_URL: "https://git.tuffraid.net"
# Internal/origin registry host. Usually same host as above, but without protocol.
# Example:
# gitea.internal:3000
REGISTRY_HOST: "git.tuffraid.net"
steps:
- name: Check out repository
uses: actions/checkout@v4
@@ -16,12 +28,11 @@ jobs:
- name: Prepare release metadata
shell: bash
run: |
set -euo pipefail
TAG="${GITHUB_REF_NAME:-${GITHUB_REF##refs/tags/}}"
VERSION="${TAG#v}"
IMAGE_REGISTRY="${{ gitea.server_url }}"
IMAGE_REGISTRY="${IMAGE_REGISTRY#http://}"
IMAGE_REGISTRY="${IMAGE_REGISTRY#https://}"
IMAGE_NAME="${IMAGE_REGISTRY}/${{ gitea.repository }}"
IMAGE_NAME="${REGISTRY_HOST}/${{ gitea.repository }}"
echo "TAG=$TAG" >> "$GITHUB_ENV"
echo "VERSION=$VERSION" >> "$GITHUB_ENV"
@@ -33,9 +44,62 @@ jobs:
echo "PRERELEASE=false" >> "$GITHUB_ENV"
fi
- name: Extract matching CHANGELOG section
echo "Resolved TAG=$TAG"
echo "Resolved VERSION=$VERSION"
echo "Resolved IMAGE_NAME=$IMAGE_NAME"
- name: Log in to Gitea container registry
shell: bash
env:
REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
REGISTRY_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -euo pipefail
echo "$REGISTRY_TOKEN" | docker login "$REGISTRY_HOST" -u "$REGISTRY_USERNAME" --password-stdin
- name: Build Docker image
shell: bash
run: |
set -euo pipefail
docker build \
-t "$IMAGE_NAME:$TAG" \
-t "$IMAGE_NAME:latest" \
.
- name: Push version tag
shell: bash
run: |
set -euo pipefail
docker push "$IMAGE_NAME:$TAG"
- name: Push latest tag
if: ${{ !contains(env.TAG, '-') }}
shell: bash
run: |
set -euo pipefail
docker push "$IMAGE_NAME:latest"
- name: Push prerelease channel tag
if: ${{ contains(env.TAG, '-') }}
shell: bash
env:
TAG: ${{ env.TAG }}
run: |
set -euo pipefail
CHANNEL="${TAG#*-}"
CHANNEL="${CHANNEL%%.*}"
echo "Resolved prerelease channel: $CHANNEL"
docker tag "$IMAGE_NAME:$TAG" "$IMAGE_NAME:$CHANNEL"
docker push "$IMAGE_NAME:$CHANNEL"
- name: Extract matching CHANGELOG section
shell: bash
env:
VERSION: ${{ env.VERSION }}
run: |
set -euo pipefail
python3 - <<'PY'
import os
import re
@@ -45,14 +109,17 @@ jobs:
changelog_path = Path("CHANGELOG.md")
if not changelog_path.exists():
body = f"# {version}\n\nNo CHANGELOG.md found."
Path("release_body.md").write_text(body, encoding="utf-8")
Path("release_body.md").write_text(f"Release {version}\n", encoding="utf-8")
raise SystemExit(0)
text = changelog_path.read_text(encoding="utf-8")
# Matches headings like:
# ## [0.1.0]
# ## 0.1.0
# ## [0.1.0-alpha.1]
pattern = re.compile(
rf"^##\s+\[?{re.escape(version)}\]?[^\n]*\n(.*?)(?=^##\s+\[?[0-9]|\Z)",
rf"^##\s+\[?{re.escape(version)}\]?[^\n]*\n(.*?)(?=^##\s+\[?[^\n]+|\Z)",
re.MULTILINE | re.DOTALL,
)
@@ -66,93 +133,59 @@ jobs:
body = f"Release {version}"
Path("release_body.md").write_text(body + "\n", encoding="utf-8")
print("----- release_body.md -----")
print(body)
print("---------------------------")
PY
- name: Log in to Gitea container registry
shell: bash
env:
REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
REGISTRY_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
echo "$REGISTRY_TOKEN" | docker login "${IMAGE_NAME%%/*}" -u "$REGISTRY_USERNAME" --password-stdin
- name: Build Docker image
shell: bash
run: |
docker build \
-t "$IMAGE_NAME:$TAG" \
-t "$IMAGE_NAME:latest" \
.
- name: Push version tag
shell: bash
run: |
docker push "$IMAGE_NAME:$TAG"
- name: Push latest tag
if: ${{ !contains(env.TAG, '-') }}
shell: bash
run: |
docker push "$IMAGE_NAME:latest"
- name: Push prerelease channel tag
if: ${{ contains(env.TAG, '-') }}
shell: bash
run: |
CHANNEL="${TAG#*-}"
CHANNEL="${CHANNEL%%.*}"
docker tag "$IMAGE_NAME:$TAG" "$IMAGE_NAME:$CHANNEL"
docker push "$IMAGE_NAME:$CHANNEL"
- name: Create Gitea release
shell: bash
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
GITEA_SERVER_URL: ${{ gitea.server_url }}
GITEA_REPOSITORY: ${{ gitea.repository }}
shell: bash
GITEA_INTERNAL_URL: ${{ env.GITEA_INTERNAL_URL }}
TAG: ${{ env.TAG }}
PRERELEASE: ${{ env.PRERELEASE }}
run: |
set -euo pipefail
python3 - <<'PY'
import json
import os
import urllib.request
import urllib.error
from pathlib import Path
tag = os.environ["TAG"]
prerelease = os.environ["PRERELEASE"].lower() == "true"
server_url = os.environ["GITEA_SERVER_URL"].rstrip("/")
server_url = os.environ["GITEA_INTERNAL_URL"].rstrip("/")
repo = os.environ["GITEA_REPOSITORY"]
token = os.environ["RELEASE_TOKEN"]
with open("release_body.md", "r", encoding="utf-8") as f:
body = Path("release_body.md").read_text(encoding="utf-8").strip()
tag = os.environ["TAG"]
# Check if the release already exists for this tag
get_url = f"{server_url}/api/v1/repos/{repo}/releases/tags/{tag}"
get_req = urllib.request.Request(
get_url,
method="GET",
headers={
"Authorization": f"token {token}",
"Accept": "application/json",
"User-Agent": "lst-release-workflow/1.0",
},
)
header = (
"## 🚀 How to run this release\n\n"
"### Pull image\n"
f"```bash\n"
f"docker pull {image_name}:{tag}\n"
f"```\n\n"
"### Run container\n"
f"```bash\n"
f"docker run -d \\\n"
f" --name lst \\\n"
f" -p 3000:3000 \\\n"
f" {image_name}:{tag}\n"
f"```\n\n"
"---\n\n"
)
existing_release = None
try:
with urllib.request.urlopen(get_req) as resp:
existing_release = json.loads(resp.read().decode("utf-8"))
except urllib.error.HTTPError as e:
if e.code != 404:
details = e.read().decode("utf-8", errors="replace")
print("Failed checking existing release:")
print(details)
raise
if "-" not in tag:
header += f"\n**Also available as:** `{image_name}:latest`\n\n"
body = f.read()
image_name = os.environ["IMAGE_NAME"]
body = body.rstrip() + f"\n\n### Container Image\n\n- `{image_name}:{tag}`\n"
url = f"{server_url}/api/v1/repos/{repo}/releases"
payload = {
"tag_name": tag,
"name": tag,
@@ -162,14 +195,26 @@ jobs:
}
data = json.dumps(payload).encode("utf-8")
if existing_release:
release_id = existing_release["id"]
url = f"{server_url}/api/v1/repos/{repo}/releases/{release_id}"
method = "PATCH"
print(f"Release already exists for tag {tag}, updating release id {release_id}")
else:
url = f"{server_url}/api/v1/repos/{repo}/releases"
method = "POST"
print(f"No release exists for tag {tag}, creating a new one")
req = urllib.request.Request(
url,
data=data,
method="POST",
method=method,
headers={
"Authorization": f"token {token}",
"Content-Type": "application/json",
"Accept": "application/json",
"User-Agent": "lst-release-workflow/1.0",
},
)
@@ -178,6 +223,7 @@ jobs:
print(resp.read().decode("utf-8"))
except urllib.error.HTTPError as e:
details = e.read().decode("utf-8", errors="replace")
print("Release create/update failed:")
print(details)
raise
PY
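The prerelease step above derives a channel tag from the git tag with two parameter expansions: `${TAG#*-}` strips everything through the first `-`, and `${CHANNEL%%.*}` strips everything from the first `.`. The same derivation as a TypeScript sketch, for illustration only:

```ts
// Mirrors the bash: CHANNEL="${TAG#*-}"; CHANNEL="${CHANNEL%%.*}"
function prereleaseChannel(tag: string): string | null {
  const dash = tag.indexOf("-");
  if (dash === -1) return null; // stable tags like v1.2.3 carry no channel
  return tag.slice(dash + 1).split(".")[0];
}
prereleaseChannel("v0.0.1-alpha.4"); // => "alpha", so :alpha tracks the newest alpha build
```

This also explains the `contains(env.TAG, '-')` guards: only dashless (stable) tags move `:latest`, while dashed tags move their channel tag instead.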

.gitignore vendored
View File

@@ -4,6 +4,7 @@ builds
.includes
.buildNumber
temp
brunoApi
.scriptCreds
node-v24.14.0-x64.msi
postgresql-17.9-2-windows-x64.exe

View File

@@ -11,7 +11,7 @@
{ "type": "ci", "hidden": false, "section": "📈 Project changes" },
{ "type": "build", "hidden": false, "section": "📈 Project Builds" }
],
"commitUrlFormat": "https://git.tuffraid.net/cowch/lst/commits/{{hash}}",
"compareUrlFormat": "https://git.tuffraid.net/cowch/lst/compare/{{previousTag}}...{{currentTag}}",
"commitUrlFormat": "https://git.tuffraid.net/cowch/lst_v3/commits/{{hash}}",
"compareUrlFormat": "https://git.tuffraid.net/cowch/lst_v3/compare/{{previousTag}}...{{currentTag}}",
"header": "# All Changes to LST can be found below.\n"
}

View File

@@ -54,8 +54,10 @@
"alpla",
"alplamart",
"alplaprod",
"alplapurchase",
"bookin",
"Datamart",
"dotenvx",
"dyco",
"intiallally",
"manadatory",
@@ -63,12 +65,14 @@
"onnotice",
"opendock",
"opendocks",
"palletizer",
"ppoo",
"preseed",
"prodlabels",
"prolink",
"Skelly",
"trycatch"
"trycatch",
"whse"
],
"gitea.token": "8456def90e1c651a761a8711763d6ef225d6b2db",
"gitea.instanceURL": "https://git.tuffraid.net",

View File

@@ -1,52 +1,134 @@
# lst_v3
# All Changes to LST can be found below.
## 0.1.0-alpha.5
## [0.0.1-alpha.4](https://git.tuffraid.net/cowch/lst_v3/compare/v0.0.1-alpha.3...v0.0.1-alpha.4) (2026-04-15)
### Patch Changes
- sop stuff
### 🌟 Enhancements
## 0.1.0-alpha.4
* **datamart:** migrations completed; remaining is the deactivation that will be run by analytics ([eccaf17](https://git.tuffraid.net/cowch/lst_v3/commits/eccaf17332fb1c63b8d6bbea6f668c3bb42d44b7))
* **datamart:** psi data has been added :D ([e0d0ac2](https://git.tuffraid.net/cowch/lst_v3/commits/e0d0ac20773159373495d65023587b76b47df34f))
* **migrate:** quality alert migrated ([b0e5fd7](https://git.tuffraid.net/cowch/lst_v3/commits/b0e5fd79998d551d4f155d58416157a324498fbd))
* **ocp:** printer sync and logging logic added ([80189ba](https://git.tuffraid.net/cowch/lst_v3/commits/80189baf906224da43ec1b9b7521153d2a49e059))
* **tcp crud:** tcp server start, stop, restart endpoints + status check ([6307037](https://git.tuffraid.net/cowch/lst_v3/commits/6307037985162bc6b49f9f711132853296f43eee))
### Patch Changes
- more info in the change stuff
### 🐛 Bug fixes
## 0.1.0-alpha.3
* **datamart:** error when running build that crashed everything ([52a6c82](https://git.tuffraid.net/cowch/lst_v3/commits/52a6c821f4632e4b5b51e0528a0d620e2e0deffc))
### Patch Changes
- changed the password to token
### 📚 Documentation
## 0.1.0-alpha.2
* **docs:** removed Docusaurus as all docs will be inside lst now to better assist users ([6ba905a](https://git.tuffraid.net/cowch/lst_v3/commits/6ba905a887dbd8f306d71fed75bb34c71fee74c9))
* **env example:** updated the file ([ca3425d](https://git.tuffraid.net/cowch/lst_v3/commits/ca3425d327757120c2cc876fff28e8668c76838d))
* **notifications:** docs for intro, notifications, reprint added ([87f7387](https://git.tuffraid.net/cowch/lst_v3/commits/87f738702a935279a248d471541cdd9d49330565))
### Patch Changes
- Changes to the build process
### 🛠️ Code Refactor
# Build
* **agent:** changed to have the test servers on their own push for better testing ([3bf024c](https://git.tuffraid.net/cowch/lst_v3/commits/3bf024cfc97d2841130d54d1a7c5cb5f09f0f598))
* **connection:** corrected the connection to the old system ([38a0b65](https://git.tuffraid.net/cowch/lst_v3/commits/38a0b65e9450c65b8300a10058a8f0357400f4e6))
* **logging:** when notify is true send the error to systemAdmins ([79e653e](https://git.tuffraid.net/cowch/lst_v3/commits/79e653efa3bcb2941ccee06b28378e709e085ec0))
* **notification:** blocking added ([9a0ef8e](https://git.tuffraid.net/cowch/lst_v3/commits/9a0ef8e51a36e3ab45b601b977f1b5cf35d56947))
* **purchase:** changes how the error handling works so a better email can be sent ([9d39c13](https://git.tuffraid.net/cowch/lst_v3/commits/9d39c13510974b5ada2a6f6c2448da3f1b755a5c))
* **reprint:** new query added to deactivate the old notification so no chance of duplicates ([c9eb59e](https://git.tuffraid.net/cowch/lst_v3/commits/c9eb59e2ad9847418ac55cb8a4a91c013f6c97bb))
* **server:** added in serverCrash email ([dcb3f2d](https://git.tuffraid.net/cowch/lst_v3/commits/dcb3f2dd1382986639b722778fad113392533b28))
* **services:** added in examples for migration stuff ([fc6dc82](https://git.tuffraid.net/cowch/lst_v3/commits/fc6dc82d8458a9928050dd3770778d6a6e1eea7f))
* **sql:** corrections to the way we reconnect so the app can error out and be reactivated later ([f33587a](https://git.tuffraid.net/cowch/lst_v3/commits/f33587a3d9a72ca72806635fac9d1214bb1452f1))
* **templates:** corrections for new notify process on critical errors ([07ebf88](https://git.tuffraid.net/cowch/lst_v3/commits/07ebf88806b93b9320f8f9d36b867572dd9a9580))
- Added release flow
- when new release is in build the docker image
- latest will still be built as well
## 0.1.0-alpha.1
### 📈 Project changes
### Minor Changes
* **agent:** added in jeff city ([e47ea9e](https://git.tuffraid.net/cowch/lst_v3/commits/e47ea9ec52a6ebaf5a8f67a7e8bd2c73da6186fb))
* **agent:** added in sherman ([4b6061c](https://git.tuffraid.net/cowch/lst_v3/commits/4b6061c478cbeba7c845dc1c8a015b9998721456))
* **service:** changes to the script to allow running the PowerShell under execution policy restrictions ([84909bf](https://git.tuffraid.net/cowch/lst_v3/commits/84909bfcf85b91d085ea9dca78be00482b7fd231))
- more build stuff
### Build
- changes to now auto release when we push new v\*
## [0.0.1-alpha.3](https://git.tuffraid.net/cowch/lst_v3/compare/v0.0.1-alpha.2...v0.0.1-alpha.3) (2026-04-10)
## 1.0.2-alpha.0
### Patch Changes
### 🌟 Enhancements
- build stuff
- external url added for docker
* **purchase hist:** finished up purchase historical / gp updates ([a691dc2](https://git.tuffraid.net/cowch/lst_v3/commits/a691dc276e8650c669409241f73d7b2d7a1f9176))
## 1.0.1
### Patch Changes
### 🛠️ Code Refactor
- cf18e94: core stuff
* **gp connect:** gp connect was added to long-lived services ([635635b](https://git.tuffraid.net/cowch/lst_v3/commits/635635b356e1262e1c0b063408fe2209e6a8d4ec))
* **reprints:** changes the module and submodule around to be more accurate ([97f93a1](https://git.tuffraid.net/cowch/lst_v3/commits/97f93a1830761437118863372108df810ce9977a))
* **send email:** changes the error message to show the true message in the error ([995b1dd](https://git.tuffraid.net/cowch/lst_v3/commits/995b1dda7cdfebf4367d301ccac38fd339fab6dd))
## [0.0.1-alpha.2](https://git.tuffraid.net/cowch/lst_v3/compare/v0.0.1-alpha.1...v0.0.1-alpha.2) (2026-04-08)
### 📈 Project Builds
* **release:** docker and release corrections ([103ae77](https://git.tuffraid.net/cowch/lst_v3/commits/103ae77e9f82fc008a8ae143b6feccc3ce802f8c))
## [0.0.1-alpha.1](https://git.tuffraid.net/cowch/lst_v3/compare/v0.0.1-alpha.0...v0.0.1-alpha.1) (2026-04-08)
* **notification:** style changes to the notification card and started the table ([7d6c2db](https://git.tuffraid.net/cowch/lst_v3/commits/7d6c2db89cae1f137f126f5814dccd373f7ccb76))
### 🌟 Enhancements
* **notification:** base notification sub and admin completed ([5865ac3](https://git.tuffraid.net/cowch/lst_v3/commits/5865ac3b99d60005c4245740369b0e0789c8fbbd))
* **notification:** reprint added ([a17787e](https://git.tuffraid.net/cowch/lst_v3/commits/a17787e85217f1fa4a5e5389e29c33ec09c286c5))
* **purchase history:** purchase history changed to long running, no notification ([34b0aba](https://git.tuffraid.net/cowch/lst_v3/commits/34b0abac36f645d0fe5f508881ddbef81ff04b7c))
* **purchase:** historical data capture for alpla purchase ([42861cc](https://git.tuffraid.net/cowch/lst_v3/commits/42861cc69e8d4aba5a9670aaed55417efda2b505))
* **user notifications:** added the ability for users to sub to notifications and add multi email ([637de85](https://git.tuffraid.net/cowch/lst_v3/commits/637de857f99499a41f7175181523f5d809d95d7e))
### 🐛 Bug fixes
* **build:** issue with how i wrote the release token ([fe889ca](https://git.tuffraid.net/cowch/lst_v3/commits/fe889ca75731af08c42ec714b7f2abf17cd1ee40))
* **build:** typo in how we pushed the header over ([83a94ca](https://git.tuffraid.net/cowch/lst_v3/commits/83a94cacf3fc87287cdc0c0cc861b339e72e4b83))
* **build:** typo ([860207a](https://git.tuffraid.net/cowch/lst_v3/commits/860207a60b6e04b15736cba631be6c7eab74d020))
* **i suck:** more learning experience ([9ceba8b](https://git.tuffraid.net/cowch/lst_v3/commits/9ceba8b5bba17959f27b16b28f50a83c044863fb))
* **lala:** something here ([17aed6c](https://git.tuffraid.net/cowch/lst_v3/commits/17aed6cb89f8220570f6c66f78dba6bb202c1aaa))
* **release:** typo that caused errors ([76747cf](https://git.tuffraid.net/cowch/lst_v3/commits/76747cf91738bd0d0530afcf7b4f51f0db11ca98))
* **typo:** more damn typos ([079478f](https://git.tuffraid.net/cowch/lst_v3/commits/079478f93217dea31c9a1e8ffed85d2381a6977d))
* **wrelease:** forgot to save ([3775760](https://git.tuffraid.net/cowch/lst_v3/commits/377576073449e95d315defb913dc317759cc3f43))
### 📝 Chore
* **release:** 0.1.0-alpha.10 ([98e408c](https://git.tuffraid.net/cowch/lst_v3/commits/98e408cb8577da18e24821b55474198439434f3e))
* **release:** 0.1.0-alpha.11 ([d6d5b45](https://git.tuffraid.net/cowch/lst_v3/commits/d6d5b451cd9aeba642ef94654ca20f4acd0b827c))
* **release:** 0.1.0-alpha.12 ([1ad789b](https://git.tuffraid.net/cowch/lst_v3/commits/1ad789b2b91a20a2f5a8dc9e6f39af2e19ec9cdc))
* **release:** 0.1.0-alpha.9 ([8f59bba](https://git.tuffraid.net/cowch/lst_v3/commits/8f59bba614a8eaa3105bb56f0db36013d5e68485))
* **release:** version packages ([fb2c560](https://git.tuffraid.net/cowch/lst_v3/commits/fb2c5609aa12ea7823783c364d5bd029c48a64bd))
* **release:** version packages ([b02b93b](https://git.tuffraid.net/cowch/lst_v3/commits/b02b93b83f488fbcee6d24db080ad0d1fe1c5f59))
* **release:** version packages ([2c0dbf9](https://git.tuffraid.net/cowch/lst_v3/commits/2c0dbf95c7b8dfd2c98b476d3f44bc8929668c88))
* **release:** version packages ([5c64600](https://git.tuffraid.net/cowch/lst_v3/commits/5c6460012aa70d336fbc9702240b4f19262a6b41))
* **release:** version packages ([0ce3790](https://git.tuffraid.net/cowch/lst_v3/commits/0ce3790675bc408762eafe76cbd5ab496fd06e73))
* **release:** version packages ([4caaf74](https://git.tuffraid.net/cowch/lst_v3/commits/4caaf745693d4df847aefd3721ac5d0ae792114a))
* **release:** version packages ([699c124](https://git.tuffraid.net/cowch/lst_v3/commits/699c124b0efba8282e436210619504bda8878e90))
* **release:** version packages ([c4fd74f](https://git.tuffraid.net/cowch/lst_v3/commits/c4fd74fc93226cffd9e39602f507a05cd8ea628b))
### 📚 Documentation
* **readme:** updated progress data ([92ba3ef](https://git.tuffraid.net/cowch/lst_v3/commits/92ba3ef5121afd0d82d4f40a5a985e1fdc081011))
* **sop:** added more info ([be1d408](https://git.tuffraid.net/cowch/lst_v3/commits/be1d4081e07b0982b355a270b7850a852a4398f5))
### 🛠️ Code Refactor
* **build:** added in more info to the release section ([5854889](https://git.tuffraid.net/cowch/lst_v3/commits/5854889eb5398feebda50a5d256ce7aec39ce112))
* **build:** changes to auto release when we change version ([643d12f](https://git.tuffraid.net/cowch/lst_v3/commits/643d12ff182827e724e1569a583bd625a0d1dd0c))
* **build:** changes to the way we do release so it builds as well ([7d55c5f](https://git.tuffraid.net/cowch/lst_v3/commits/7d55c5f43173edb48d8709adcb972b7d8fbc3ebd))
* **changelog:** reverted back to commit-changelog; like it more than changesets for solo dev ([ed052df](https://git.tuffraid.net/cowch/lst_v3/commits/ed052dff3c81a7064660a7d25685e0505065252c))
* **notification:** reprint - removed a console log as it shouldn't be there ([5f3d683](https://git.tuffraid.net/cowch/lst_v3/commits/5f3d683a13c831229674166cced699e373131316))
* **notification:** select menu looks proper now ([74262be](https://git.tuffraid.net/cowch/lst_v3/commits/74262beb6596ddc971971cc9214a2688accf3a8e))
* **opendock refactor on how releases are posted:** this was a bug, maybe just a better refactor ([0880298](https://git.tuffraid.net/cowch/lst_v3/commits/0880298cf53d83e487c706e73854e0874ae2d9da))
* **queries:** changed dev version to be 1500ms vs 5000ms ([f3b8dd9](https://git.tuffraid.net/cowch/lst_v3/commits/f3b8dd94e5ebae0cc4dd0a2689a19051942e94b8))
* **release:** changes to only have the changelog in the release ([6e85991](https://git.tuffraid.net/cowch/lst_v3/commits/6e8599106298ed13febd069d6fda8b354efb5b7b))
* **userprofile:** changes to have the table be blank and say nothing subscribed ([3ecf5fb](https://git.tuffraid.net/cowch/lst_v3/commits/3ecf5fb916d5dc1b1ffb224e2142d94f7a9cb126))
### 📈 Project Builds
* **agent:** added westbend into the flow ([28c226d](https://git.tuffraid.net/cowch/lst_v3/commits/28c226ddbc37ab85cd6a9a6aec091def3e5623d6))
* **changelog:** reset the change log after all crap testing ([0059b9b](https://git.tuffraid.net/cowch/lst_v3/commits/0059b9b850c9647695a3fecaf5927c2e3ee7b192))

View File

@@ -7,7 +7,7 @@
Quick summary of current rewrite/migration goal.
- **Phase:** Backend rewrite
- **Last updated:** 2024-05-01
- **Last updated:** 2026-04-06
---
@@ -16,10 +16,10 @@ Quick summary of current rewrite/migration goal.
| Feature | Description | Status |
|----------|--------------|--------|
| User Authentication | ~~Login~~, ~~Signup~~, API Key | 🟨 In Progress |
| User Profile | Edit profile, upload avatar | ⏳ Not Started |
| User Profile | ~~Edit profile~~, upload avatar | 🟨 In Progress |
| User Admin | Edit user, create user, remove user, alplaprod user integration | ⏳ Not Started |
| Notifications | Subscribe, Create, Update, Remove, Manual Trigger | ⏳ Not Started |
| Datamart | Create, Update, Run, Deactivate | 🔧 In Progress |
| Notifications | ~~Subscribe~~, ~~Create~~, ~~Update~~, ~~Remove~~, Manual Trigger | 🟨 In Progress |
| Datamart | ~~Create~~, ~~Update~~, ~~Run~~, Deactivate | 🟨 In Progress |
| Frontend | Analytics and charts | ⏳ Not Started |
| Docs | Instructions and trouble shooting | ⏳ Not Started |
| One Click Print | Get printers, monitor printers, label process, material process, Special processes | ⏳ Not Started |
@@ -44,7 +44,7 @@ _Status legend:_
How to run the current version of the app.
```bash
git clone https://github.com/youruser/yourrepo.git
cd yourrepo
git clone https://git.tuffraid.net/cowch/lst_v3.git
cd lst_v3
npm install
npm run dev

View File

@@ -26,7 +26,7 @@ const createApp = async () => {
const __dirname = dirname(__filename);
// we'll leave this active so we can monitor it to validate
app.use(morgan("tiny"));
app.use(morgan("dev"));
app.set("trust proxy", true);
app.use(lstCors());
app.all(`${baseUrl}/api/auth/*splat`, toNodeHandler(auth));
@@ -34,11 +34,11 @@ const createApp = async () => {
setupRoutes(baseUrl, app);
app.use(
baseUrl + "/app",
`${baseUrl}/app`,
express.static(join(__dirname, "../frontend/dist")),
);
app.get(baseUrl + "/app/*splat", (_, res) => {
app.get(`${baseUrl}/app/*splat`, (_, res) => {
res.sendFile(join(__dirname, "../frontend/dist/index.html"));
});

View File

@@ -0,0 +1,23 @@
import type sql from "mssql";
// GP credentials come from the environment (GP_USER / GP_PASSWORD in the env example above)
const username = process.env.GP_USER ?? "";
const password = process.env.GP_PASSWORD ?? "";
export const gpSqlConfig: sql.config = {
server: `USMCD1VMS011`,
database: `ALPLA`,
user: username,
password: password,
options: {
encrypt: true,
trustServerCertificate: true,
},
requestTimeout: 90000, // how long until we kill the query and fail it
pool: {
max: 20, // Maximum number of connections in the pool
min: 0, // Minimum number of connections in the pool
idleTimeoutMillis: 10000, // How long a connection is allowed to be idle before being released
reapIntervalMillis: 1000, // how often to check for idle resources to destroy
acquireTimeoutMillis: 100000, // How long until a complete timeout happens
},
};
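A minimal sketch of exercising this config with the `mssql` driver (the probe query is a placeholder):

```ts
import sql from "mssql";
import { gpSqlConfig } from "./gpSql.config.js";

// Open a pool with the settings above and run a trivial probe query.
const pool = new sql.ConnectionPool(gpSqlConfig);
await pool.connect();
const result = await pool.request().query("SELECT 1 AS ok");
console.log(result.recordset); // [ { ok: 1 } ]
await pool.close();
```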

View File

@@ -13,6 +13,10 @@
*
* when criteria are passed over we will handle them by counting how many were passed (up to 3) then deal with each one respectively
*/
import { and, between, inArray, notInArray } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { invHistoricalData } from "../db/schema/historicalInv.schema.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
@@ -22,37 +26,93 @@ import { returnFunc } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { datamartData } from "./datamartData.utlis.js";
type Options = {
name: string;
value: string;
};
type Data = {
name: string;
options: Options;
options: any;
optionsRequired?: boolean;
howManyOptionsRequired?: number;
};
const lstDbRun = async (data: Data) => {
if (data.options) {
if (data.name === "psiInventory") {
const ids = data.options.articles.split(",").map((id: any) => id.trim());
const whse = data.options.whseToInclude
? data.options.whseToInclude
.split(",")
.map((w: any) => w.trim())
.filter(Boolean)
: [];
const locations = data.options.exludeLanes
? data.options.exludeLanes
.split(",")
.map((l: any) => l.trim())
.filter(Boolean)
: [];
const conditions = [
inArray(invHistoricalData.article, ids),
between(
invHistoricalData.histDate,
data.options.startDate,
data.options.endDate,
),
];
// only add the warehouse condition if there are any whse values
if (whse.length > 0) {
conditions.push(inArray(invHistoricalData.whseId, whse));
}
// locations we don't want in the system
if (locations.length > 0) {
conditions.push(notInArray(invHistoricalData.location, locations));
}
return await db
.select()
.from(invHistoricalData)
.where(and(...conditions));
}
}
return [];
};
export const runDatamartQuery = async (data: Data) => {
// search the query db for the query by name
const sqlQuery = sqlQuerySelector(`${data.name}`) as SqlQuery;
const considerLstDBRuns = ["psiInventory"];
if (considerLstDBRuns.includes(data.name)) {
const lstDB = await lstDbRun(data);
return returnFunc({
success: true,
level: "info",
module: "datamart",
subModule: "lstDBrn",
message: `Data for: ${data.name}`,
data: lstDB,
notify: false,
});
}
const sqlQuery = sqlQuerySelector(`datamart.${data.name}`) as SqlQuery;
const getDataMartInfo = datamartData.filter((x) => x.endpoint === data.name);
// const optionsMissing =
// !data.options || Object.keys(data.options).length === 0;
const optionCount =
Object.keys(data.options).length ===
getDataMartInfo[0]?.howManyOptionsRequired;
const isValid =
Object.keys(data.options ?? {}).length >=
(getDataMartInfo[0]?.howManyOptionsRequired ?? 0);
if (getDataMartInfo[0]?.optionsRequired && !optionCount) {
if (getDataMartInfo[0]?.optionsRequired && !isValid) {
return returnFunc({
success: false,
level: "error",
module: "datamart",
subModule: "query",
message: `This query is required to have the ${getDataMartInfo[0]?.howManyOptionsRequired} options set in order use it.`,
message: `This query is required to have ${getDataMartInfo[0]?.howManyOptionsRequired} option(s) set in order to use it; please add in your option(s) data and try again.`,
data: [getDataMartInfo[0].options],
notify: false,
});
@@ -75,10 +135,129 @@ export const runDatamartQuery = async (data: Data) => {
// split the criteria by "," then and then update the query
if (data.options) {
Object.entries(data.options ?? {}).forEach(([key, value]) => {
const pattern = new RegExp(`\\[${key.trim()}\\]`, "g");
datamartQuery = datamartQuery.replace(pattern, String(value).trim());
});
switch (data.name) {
case "activeArticles":
break;
case "deliveryByDateRange":
datamartQuery = datamartQuery
.replace("[startDate]", `${data.options.startDate}`)
.replace("[endDate]", `${data.options.endDate}`);
break;
case "customerInventory":
datamartQuery = datamartQuery
.replace(
"--and IdAdressen",
`and IdAdressen in (${data.options.customer})`,
)
.replace(
"--and x.IdWarenlager in (0)",
`${data.options.whseToInclude ? `and x.IdWarenlager in (${data.options.whseToInclude})` : `--and x.IdWarenlager in (0)`}`,
);
break;
case "openOrders":
datamartQuery = datamartQuery
.replace("[startDay]", `${data.options.startDay}`)
.replace("[endDay]", `${data.options.endDay}`);
break;
case "inventory":
datamartQuery = datamartQuery
.replaceAll(
"--,l.RunningNumber",
`${data.options.includeRunningNumbers ? `,l.RunningNumber` : `--,l.RunningNumber`}`,
)
.replaceAll(
"--,l.MachineLocation,l.MachineName,l.ProductionLotRunningNumber as lot",
`${data.options.lots ? `,l.MachineLocation,l.MachineName,l.ProductionLotRunningNumber as lot` : `--,l.MachineLocation,l.MachineName,l.ProductionLotRunningNumber as lot`}`,
)
.replaceAll(
"--,l.WarehouseDescription,l.LaneDescription",
`${data.options.locations ? `,l.WarehouseDescription,l.LaneDescription` : `--,l.WarehouseDescription,l.LaneDescription`}`,
);
// adding in a test for historical check.
if (data.options.historical) {
datamartQuery = datamartQuery
.replace(
"--,l.ProductionLotRunningNumber as lot,l.warehousehumanreadableid as warehouseId,l.WarehouseDescription as warehouseDescription,l.lanehumanreadableid as locationId,l.lanedescription as laneDescription",
",l.ProductionLotRunningNumber as lot,l.warehousehumanreadableid as warehouseId,l.WarehouseDescription as warehouseDescription,l.lanehumanreadableid as locationId,l.lanedescription as laneDescription",
)
.replace(
"--,l.ProductionLotRunningNumber,l.warehousehumanreadableid,l.WarehouseDescription,l.lanehumanreadableid,l.lanedescription",
",l.ProductionLotRunningNumber,l.warehousehumanreadableid,l.WarehouseDescription,l.lanehumanreadableid,l.lanedescription",
);
}
break;
case "fakeEDIUpdate":
datamartQuery = datamartQuery.replace(
"--AND h.CustomerHumanReadableId in (0)",
`${data.options.address ? `AND h.CustomerHumanReadableId in (${data.options.address})` : `--AND h.CustomerHumanReadableId in (0)`}`,
);
break;
case "forecast":
datamartQuery = datamartQuery.replace(
"where DeliveryAddressHumanReadableId in ([customers])",
data.options.customers
? `where DeliveryAddressHumanReadableId in (${data.options.customers})`
: "--where DeliveryAddressHumanReadableId in ([customers])",
);
break;
case "activeArticles2":
datamartQuery = datamartQuery.replace(
"and a.HumanReadableId in ([articles])",
data.options.articles
? `and a.HumanReadableId in (${data.options.articles})`
: "--and a.HumanReadableId in ([articles])",
);
break;
case "psiDeliveryData":
datamartQuery = datamartQuery
.replace("[startDate]", `${data.options.startDate}`)
.replace("[endDate]", `${data.options.endDate}`)
.replace(
"and IdArtikelVarianten in ([articles])",
data.options.articles
? `and IdArtikelVarianten in (${data.options.articles})`
: "--and IdArtikelVarianten in ([articles])",
);
break;
case "productionData":
datamartQuery = datamartQuery
.replace("[startDate]", `${data.options.startDate}`)
.replace("[endDate]", `${data.options.endDate}`)
.replace(
"and ArticleHumanReadableId in ([articles])",
data.options.articles
? `and ArticleHumanReadableId in (${data.options.articles})`
: "--and ArticleHumanReadableId in ([articles])",
);
break;
case "psiPlanningData":
datamartQuery = datamartQuery
.replace("[startDate]", `${data.options.startDate}`)
.replace("[endDate]", `${data.options.endDate}`)
.replace(
"and p.IdArtikelvarianten in ([articles])",
data.options.articles
? `and p.IdArtikelvarianten in (${data.options.articles})`
: "--and p.IdArtikelvarianten in ([articles])",
);
break;
default:
return returnFunc({
success: false,
level: "error",
module: "datamart",
subModule: "query",
message: `${data.name} encountered an error as it might not exist in LST; please contact support if this continues to happen`,
data: [sqlQuery.message],
notify: true,
});
}
}
const { data: queryRun, error } = await tryCatch(
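Based on the branch above, a hypothetical call that takes the new `psiInventory` path (served from the local Postgres history table rather than prod SQL). Option values are invented; `exludeLanes` is spelled as in the source:

```ts
// Assumes runDatamartQuery is imported from the datamart controller above.
const res = await runDatamartQuery({
  name: "psiInventory",
  options: {
    articles: "10001, 10002", // comma-separated article ids (required)
    whseToInclude: "36,41",   // optional warehouse filter
    exludeLanes: "DOCK1",     // lanes to exclude (key spelling from source)
    startDate: "2026-04-01",
    endDate: "2026-04-14",
  },
});
// res.data should carry the inv_historical_data rows matching the conditions.
```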

View File

@@ -10,14 +10,50 @@ export const datamartData = [
name: "Active articles",
endpoint: "activeArticles",
description: "returns all active articles for the server with custom data",
options: "", // set as a string and each item will be seperated by a , this way we can split it later in the excel file.
options: "",
optionsRequired: false,
},
{
name: "Delivery by date range",
endpoint: "deliveryByDateRange",
description: `Returns all Deliverys in selected date range IE: 1/1/${new Date(Date.now()).getFullYear()} to 1/31/${new Date(Date.now()).getFullYear()}`,
options: "startDate,endDate", // set as a string and each item will be seperated by a , this way we can split it later in the excel file.
description: `Returns all Deliveries in selected date range IE: 1/1/${new Date(Date.now()).getFullYear()} to 1/31/${new Date(Date.now()).getFullYear()}`,
options: "startDate,endDate",
optionsRequired: true,
howManyOptionsRequired: 2,
},
{
name: "Get Customer Inventory",
endpoint: "customerInventory",
description: `Returns specific customer inventory based on their address ID, IE: 8,12,145. \nWith option to include specific warehouse IDs, IE 36,41,5. \nNOTES: *leaving warehouse blank will just pull everything for the customer; Inventory does not include PPOO or INV`,
options: "customer,whseToInclude",
optionsRequired: true,
howManyOptionsRequired: 1,
},
{
name: "Get open order",
endpoint: "openOrders",
description: `Returns open orders based on day count sent over, IE: startDay 15 days in the past, endDay 5 days in the future; can be left empty for the default days`,
options: "startDay,endDay",
optionsRequired: true,
howManyOptionsRequired: 2,
},
{
name: "Get inventory",
endpoint: "inventory",
description: `Returns all inventory, excludes inv location. Adding an x in one of the options will enable it.`,
options: "includeRunningNumbers,locations,lots",
},
{
name: "Fake EDI Update",
endpoint: "fakeEDIUpdate",
description: `Returns all open orders to correct and resubmit via lst demand mgt; leaving blank will get everything, putting an address only returns the specified address. \nNOTE: only orders that were created via edi will populate here.`,
options: "address",
},
{
name: "Production Data",
endpoint: "productionData",
description: `Returns all production data from the date range with the option to have 1 to many avs to search by.`,
options: "startDate,endDate,articles",
optionsRequired: true,
howManyOptionsRequired: 2,
},

View File

@@ -0,0 +1,39 @@
import {
integer,
jsonb,
pgTable,
text,
timestamp,
uuid,
} from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type { z } from "zod";
export const alplaPurchaseHistory = pgTable("alpla_purchase_history", {
id: uuid("id").defaultRandom().primaryKey(),
apo: integer("apo"),
revision: integer("revision"),
confirmed: integer("confirmed"),
status: integer("status"),
statusText: text("status_text"),
journalNum: integer("journal_num"),
add_date: timestamp("add_date").defaultNow(),
add_user: text("add_user"),
upd_user: text("upd_user"),
upd_date: timestamp("upd_date").defaultNow(),
remark: text("remark"),
approvedStatus: text("approved_status").default("new"),
position: jsonb("position").default([]),
createdAt: timestamp("created_at").defaultNow(),
updatedAt: timestamp("updated_at").defaultNow(),
});
export const alplaPurchaseHistorySchema =
createSelectSchema(alplaPurchaseHistory);
export const newAlplaPurchaseHistorySchema =
createInsertSchema(alplaPurchaseHistory);
export type AlplaPurchaseHistory = z.infer<typeof alplaPurchaseHistorySchema>;
export type NewAlplaPurchaseHistory = z.infer<
typeof newAlplaPurchaseHistorySchema
>;

View File

@@ -0,0 +1,30 @@
import { date, pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type z from "zod";
export const invHistoricalData = pgTable("inv_historical_data", {
inv: uuid("id").defaultRandom().primaryKey(),
histDate: date("hist_date").notNull(), // this date should always be yesterday when we post it.
plantToken: text("plant_token"),
article: text("article").notNull(),
articleDescription: text("article_description").notNull(),
materialType: text("material_type"),
total_QTY: text("total_QTY"),
available_QTY: text("available_QTY"),
coa_QTY: text("coa_QTY"),
held_QTY: text("held_QTY"),
consignment_QTY: text("consignment_qty"),
lot_Number: text("lot_number"),
locationId: text("location_id"),
location: text("location"),
whseId: text("whse_id").default(""),
whseName: text("whse_name").default("missing whseName"),
upd_user: text("upd_user").default("lst-system"),
upd_date: timestamp("upd_date").defaultNow(),
});
export const invHistoricalDataSchema = createSelectSchema(invHistoricalData);
export const newInvHistoricalDataSchema = createInsertSchema(invHistoricalData);
export type InvHistoricalData = z.infer<typeof invHistoricalDataSchema>;
export type NewInvHistoricalData = z.infer<typeof newInvHistoricalDataSchema>;
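A small sketch of writing one snapshot row through this schema with drizzle (import paths follow the controller used elsewhere in this diff; values are illustrative):

```ts
import { db } from "../db/db.controller.js";
import { invHistoricalData } from "../db/schema/historicalInv.schema.js";

// Insert yesterday's snapshot row; histDate, article and articleDescription
// are the notNull columns, everything else falls back to its default.
await db.insert(invHistoricalData).values({
  histDate: "2026-04-14", // "yesterday" per the comment on hist_date
  article: "10001",
  articleDescription: "500ml bottle",
  whseId: "36",
  location: "LANE-01",
  total_QTY: "1200",
});
```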

View File

@@ -1,4 +1,5 @@
import {
index,
integer,
jsonb,
pgTable,
@@ -9,14 +10,23 @@ import {
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type { z } from "zod";
export const opendockApt = pgTable("opendock_apt", {
id: uuid("id").defaultRandom().primaryKey(),
release: integer("release").unique(),
openDockAptId: text("open_dock_apt_id").notNull(),
appointment: jsonb("appointment").default([]),
upd_date: timestamp("upd_date").defaultNow(),
createdAt: timestamp("created_at").defaultNow(),
});
export const opendockApt = pgTable(
"opendock_apt",
{
id: uuid("id").defaultRandom().primaryKey(),
release: integer("release").notNull().unique(),
openDockAptId: text("open_dock_apt_id").notNull(),
appointment: jsonb("appointment").notNull().default([]),
upd_date: timestamp("upd_date").notNull().defaultNow(),
createdAt: timestamp("created_at").notNull().defaultNow(),
},
(table) => ({
releaseIdx: index("opendock_apt_release_idx").on(table.release),
openDockAptIdIdx: index("opendock_apt_opendock_id_idx").on(
table.openDockAptId,
),
}),
);
export const opendockAptSchema = createSelectSchema(opendockApt);
export const newOpendockAptSchema = createInsertSchema(opendockApt);

View File

@@ -1,6 +1,11 @@
import { integer, pgTable, text } from "drizzle-orm/pg-core";
import { integer, pgTable, text, timestamp } from "drizzle-orm/pg-core";
export const opendockApt = pgTable("printer_log", {
export const printerLog = pgTable("printer_log", {
id: integer().primaryKey().generatedAlwaysAsIdentity(),
name: text("name").notNull(),
name: text("name"),
ip: text("ip"),
printerSN: text("printer_sn"),
condition: text("condition").notNull(),
message: text("message"),
createdAt: timestamp("created_at").defaultNow(),
});

View File

@@ -0,0 +1,44 @@
import {
boolean,
integer,
jsonb,
pgTable,
text,
timestamp,
uniqueIndex,
uuid,
} from "drizzle-orm/pg-core";
import { createInsertSchema, createSelectSchema } from "drizzle-zod";
import type z from "zod";
export const printerData = pgTable(
"printer_data",
{
id: uuid("id").defaultRandom().primaryKey(),
humanReadableId: text("humanReadable_id").unique().notNull(),
name: text("name").notNull(),
ipAddress: text("ipAddress"),
port: integer("port"),
status: text("status"),
statusText: text("statusText"),
printerSN: text("printer_sn"),
lastTimePrinted: timestamp("last_time_printed").notNull().defaultNow(),
assigned: boolean("assigned").default(false),
remark: text("remark"),
printDelay: integer("printDelay").default(90),
processes: jsonb("processes").default([]),
printDelayOverride: boolean("print_delay_override").default(false), // mainly for when lot time is active but we want to override this single line for some reason
add_Date: timestamp("add_Date").defaultNow(),
upd_date: timestamp("upd_date").defaultNow(),
},
(table) => [
//uniqueIndex("emailUniqueIndex").on(sql`lower(${table.email})`),
uniqueIndex("printer_id").on(table.humanReadableId),
],
);
export const printerSchema = createSelectSchema(printerData);
export const newPrinterSchema = createInsertSchema(printerData);
export type Printer = z.infer<typeof printerSchema>;
export type NewPrinter = z.infer<typeof newPrinterSchema>;

View File

@@ -0,0 +1,17 @@
import { type Express, Router } from "express";
import { requireAuth } from "../middleware/auth.middleware.js";
import restart from "./gpSqlRestart.route.js";
import start from "./gpSqlStart.route.js";
import stop from "./gpSqlStop.route.js";
export const setupGPSqlRoutes = (baseUrl: string, app: Express) => {
//setup all the routes
// Apply auth to entire router
const router = Router();
router.use(requireAuth);
router.use(start);
router.use(stop);
router.use(restart);
app.use(`${baseUrl}/api/system/gpSql`, router);
};
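With this router mounted, the connection endpoints might be exercised like so (the host, an empty `baseUrl` prefix, and the session cookie are assumptions; `requireAuth` rejects unauthenticated calls):

```ts
// Hypothetical client call against the mounted gpSql routes.
const sessionCookie = "better-auth.session_token=..."; // placeholder session
const res = await fetch("http://localhost:3000/api/system/gpSql/restart", {
  method: "POST",
  headers: { cookie: sessionCookie },
});
console.log(await res.json()); // apiReturn payload with success/message/data
```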

View File

@@ -0,0 +1,148 @@
import sql from "mssql";
import { gpSqlConfig } from "../configs/gpSql.config.js";
import { createLogger } from "../logger/logger.controller.js";
import { checkHostnamePort } from "../utils/checkHost.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
export let pool2: sql.ConnectionPool;
export let connected: boolean = false;
export let reconnecting = false;
// start the delay out as 2 seconds
let delayStart = 2000;
let attempt = 0;
const maxAttempts = 10;
export const connectGPSql = async () => {
const serverUp = await checkHostnamePort(`${gpSqlConfig.server}:1433`);
if (!serverUp) {
// we will try to reconnect
connected = false;
void reconnectToSql(); // kick off the reconnect loop (fire and forget)
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "GP server is offline or unreachable.",
});
}
// if we are trying to click restart from the api for some reason we want to kick back and say no
if (connected) {
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "The Sql server is already connected.",
});
}
// try to connect to the sql server
try {
pool2 = new sql.ConnectionPool(gpSqlConfig);
await pool2.connect();
connected = true;
return returnFunc({
success: true,
level: "info",
module: "system",
subModule: "db",
message: `${gpSqlConfig.server} is connected to ${gpSqlConfig.database}`,
data: [],
notify: false,
});
} catch (error) {
void reconnectToSql(); // kick off the reconnect loop (fire and forget)
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "Failed to connect to the prod sql server.",
data: [error],
notify: false,
});
}
};
export const closePool = async () => {
if (!connected) {
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "There is no connection to the prod server currently.",
});
}
try {
await pool2.close();
connected = false;
return returnFunc({
success: true,
level: "info",
module: "system",
subModule: "db",
message: "The sql connection has been closed.",
});
} catch (error) {
connected = false;
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "There was an error closing the sql connection",
data: [error],
});
}
};
export const reconnectToSql = async () => {
const log = createLogger({
module: "system",
subModule: "db",
});
if (reconnecting) return;
//set reconnecting to true while we try to reconnect
reconnecting = true;
while (!connected && attempt < maxAttempts) {
attempt++;
log.info(
`Reconnect attempt ${attempt}/${maxAttempts} in ${delayStart / 1000}s ...`,
);
await new Promise((res) => setTimeout(res, delayStart));
const serverUp = await checkHostnamePort(`${gpSqlConfig.server}:1433`);
if (!serverUp) {
delayStart = Math.min(delayStart * 2, 30000); // exponential backoff, capped at 30s
continue;
}
try {
pool2 = await sql.connect(gpSqlConfig);
reconnecting = false;
connected = true;
log.info(`${gpSqlConfig.server} is connected to ${gpSqlConfig.database}`);
} catch (error) {
delayStart = Math.min(delayStart * 2, 30000);
log.error({ error }, "Failed to reconnect to the prod sql server.");
}
}
if (!connected && attempt >= maxAttempts) {
log.error(
{ notify: true },
"Max reconnect attempts reached on the prodSql server. Stopping retries.",
);
reconnecting = false;
// TODO: exit alert someone here
}
};
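If every attempt fails, the loop sleeps `delayStart` before each retry and doubles it afterwards, capped at 30s, so the waits run roughly 2s, 4s, 8s, 16s, then 30s until `maxAttempts`. A compact sketch of that schedule:

```ts
// Reproduces the backoff schedule from reconnectToSql under total failure
// (2s start, 2x growth, 30s cap, 10 attempts).
let delay = 2000;
const schedule: number[] = [];
for (let attempt = 1; attempt <= 10; attempt++) {
  schedule.push(delay);
  delay = Math.min(delay * 2, 30000);
}
console.log(schedule); // [2000, 4000, 8000, 16000, 30000, 30000, ...]
```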

View File

@@ -0,0 +1,78 @@
import { returnFunc } from "../utils/returnHelper.utils.js";
import { connected, pool2 } from "./gpSqlConnection.controller.js";
interface SqlError extends Error {
code?: string;
originalError?: {
info?: { message?: string };
};
}
/**
* Run a prod query
* just pass over the query as a string and the name of the query.
* Query should be like below.
* * select * from AlplaPROD_test1.dbo.table
* You must use test1 always as it will be changed via query
*/
export const gpQuery = async (queryToRun: string, name: string) => {
if (!connected) {
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "gpSql",
message: `${process.env.PROD_PLANT_TOKEN} is offline or attempting to reconnect`,
data: [],
notify: false,
});
}
//change to the correct server
const query = queryToRun.replaceAll(
"test1",
`${process.env.PROD_PLANT_TOKEN}`,
);
try {
const result = await pool2.request().query(query);
return {
success: true,
message: `Query results for: ${name}`,
data: result.recordset ?? [],
};
} catch (error: unknown) {
const err = error as SqlError;
if (err.code === "ETIMEOUT") {
return returnFunc({
success: false,
module: "system",
subModule: "gpSql",
level: "error",
message: `${name} did not run due to a timeout.`,
notify: false,
data: [],
});
}
if (err.code === "EREQUEST") {
return returnFunc({
success: false,
module: "system",
subModule: "gpSql",
level: "error",
message: `${name} encountered an error ${err.originalError?.info?.message || "undefined error"}`,
data: [],
});
}
return returnFunc({
success: false,
module: "system",
subModule: "gpSql",
level: "error",
message: `${name} encountered an unknown error.`,
data: [],
});
}
};
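Per the doc comment, callers write queries against `AlplaPROD_test1` and `gpQuery` swaps `test1` for `PROD_PLANT_TOKEN` at run time. A hypothetical call (module path and table name are invented for illustration):

```ts
import { gpQuery } from "./gpSqlQuery.controller.js"; // path assumed

const result = await gpQuery(
  "select top 10 * from AlplaPROD_test1.dbo.SomeTable", // hypothetical table
  "sampleTopTen",
);
if (result.success) console.table(result.data);
```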

View File

@@ -0,0 +1,29 @@
import { readFileSync } from "node:fs";
export type SqlGPQuery = {
query: string;
success: boolean;
message: string;
};
export const sqlGpQuerySelector = (name: string) => {
try {
const queryFile = readFileSync(
new URL(`../gpSql/queries/${name}.sql`, import.meta.url),
"utf8",
);
return {
success: true,
message: `Query for: ${name}`,
query: queryFile,
};
} catch (e) {
console.error(e);
return {
success: false,
message:
"Error getting the query file, please make sure you have the correct name.",
};
}
};

View File

@@ -0,0 +1,23 @@
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { closePool, connectGPSql } from "./gpSqlConnection.controller.js";
const r = Router();
r.post("/restart", async (_, res) => {
await closePool();
await new Promise((r) => setTimeout(r, 2000));
const connect = await connectGPSql();
apiReturn(res, {
success: connect.success,
level: connect.success ? "info" : "error",
module: "routes",
subModule: "prodSql",
message: "Sql Server has been restarted",
data: connect.data,
status: connect.success ? 200 : 400,
});
});
export default r;

View File

@@ -0,0 +1,20 @@
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { connectGPSql } from "./gpSqlConnection.controller.js";
const r = Router();
r.post("/start", async (_, res) => {
const connect = await connectGPSql();
apiReturn(res, {
success: connect.success,
level: connect.success ? "info" : "error",
module: "routes",
subModule: "prodSql",
message: connect.message,
data: connect.data,
status: connect.success ? 200 : 400,
});
});
export default r;

View File

@@ -0,0 +1,20 @@
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { closePool } from "./gpSqlConnection.controller.js";
const r = Router();
r.post("/stop", async (_, res) => {
const connect = await closePool();
apiReturn(res, {
success: connect.success,
level: connect.success ? "info" : "error",
module: "routes",
subModule: "prodSql",
message: connect.message,
data: connect.data,
status: connect.success ? 200 : 400,
});
});
export default r;
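Together the three routes give basic lifecycle control over the GP SQL pool. A hypothetical client call; the mount path, host, and port are assumptions, not confirmed by this diff:
// Hypothetical client usage of the start/stop/restart endpoints.
const base = "http://localhost:2222/lst/api/gpSql"; // mount path assumed
const res = await fetch(`${base}/restart`, { method: "POST" });
console.log(res.status, await res.json());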

View File

@@ -0,0 +1,39 @@
USE [ALPLA]
SELECT Distinct r.[POPRequisitionNumber] as req,
r.[ApprovalStatus] as approvalStatus,
r.[Requested By] requestedBy,
format(t.[Created Date], 'yyyy-MM-dd') as createdAt,
format(r.[Requisition Date], 'MM/dd/yyyy') as expectedDate,
r.[Requisition Amount] as glAccount,
case when r.[Account Segment 2] is null or r.[Account Segment 2] = '' then '999' else cast(r.[Account Segment 2] as varchar) end as plant
,t.Status as status
,t.[Document Status] as docStatus
,t.[Workflow Status] as reqState
,CASE
WHEN [Workflow Status] = 'Completed'
THEN 'Pending APO conversion'
WHEN [Workflow Status] = 'Pending User Action'
AND r.[ApprovalStatus] = 'Pending Approval'
THEN 'Pending plant approver'
WHEN [Workflow Status] = ''
AND r.[ApprovalStatus] = 'Not Submitted'
THEN 'Req not submitted'
ELSE 'Unknown reason'
END AS approvedStatus
FROM [dbo].[PORequisitions] r (nolock)
left join
[dbo].[PurchaseRequisitions] as t (nolock) on
t.[Requisition Number] = r.[POPRequisitionNumber]
--where ApprovalStatus = 'Pending Approval'
--and [Account Segment 2] = 80
where r.POPRequisitionNumber in ([reqsToCheck])
Order By r.POPRequisitionNumber
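The [reqsToCheck] token follows the same convention as the test1 database token: it is replaced in code before the query runs. A minimal sketch with made-up requisition numbers:
// Hypothetical fill-in of the [reqsToCheck] placeholder in the query above.
const reqs = ["REQ001234", "REQ001235"]; // example values only
const rawQuery = "select ... where r.POPRequisitionNumber in ([reqsToCheck])"; // shortened
const filled = rawQuery.replace(
"[reqsToCheck]",
reqs.map((r) => `'${r}'`).join(","),
);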

View File

@@ -5,6 +5,7 @@ import { db } from "../db/db.controller.js";
import { logs } from "../db/schema/logs.schema.js";
import { emitToRoom } from "../socket.io/roomEmitter.socket.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { notifySystemIssue } from "./logger.notify.js";
//import build from "pino-abstract-transport";
export const logLevel = process.env.LOG_LEVEL || "info";
@@ -45,6 +46,10 @@ const dbStream = new Writable({
console.error(res.error);
}
if (obj.notify) {
notifySystemIssue(obj);
}
if (obj.room) {
emitToRoom(obj.room, res.data ? res.data[0] : obj);
}

View File

@@ -0,0 +1,44 @@
/**
* For all logging that has notify set to true we'll send an email to the system admins; if a Discord webhook is set we'll send it there as well
*/
import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { user } from "../db/schema/auth.schema.js";
import { sendEmail } from "../utils/sendEmail.utils.js";
type NotifyData = {
module: string;
submodule: string;
hostname: string;
msg: string;
stack: unknown[];
};
export const notifySystemIssue = async (data: NotifyData) => {
// build the email out
const formattedError = Array.isArray(data.stack)
? data.stack.map((e: any) => e.error || e)
: data.stack;
const sysAdmin = await db
.select()
.from(user)
.where(eq(user.role, "systemAdmin"));
await sendEmail({
email: sysAdmin.length ? sysAdmin.map((r) => r.email).join("; ") : "cowchmonkey@gmail.com", // join never returns null, so fall back only when no systemAdmin users exist
subject: `${data.hostname} has encountered a critical issue.`,
template: "serverCritialIssue",
context: {
plant: data.hostname,
module: data.module,
subModule: data.submodule,
message: data.msg,
error: JSON.stringify(formattedError, null, 2),
},
});
// TODO: add discord
};
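The hook in the dbStream above is what feeds this: any log call that sets notify: true is handed to notifySystemIssue. A sketch using the same logger API seen elsewhere in this repo:
import { createLogger } from "../logger/logger.controller.js";
const log = createLogger({ module: "system", subModule: "db" });
// notify: true routes this entry through notifySystemIssue, which emails
// every user whose role is systemAdmin.
log.error({ notify: true }, "Max reconnect attempts reached on the prodSql server.");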

View File

@@ -0,0 +1,220 @@
import { format } from "date-fns";
import { eq, sql } from "drizzle-orm";
import { runDatamartQuery } from "../datamart/datamart.controller.js";
import { db } from "../db/db.controller.js";
import { invHistoricalData } from "../db/schema/historicalInv.schema.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { createCronJob } from "../utils/croner.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
type Inventory = {
article: string;
alias: string;
materialType: string;
total_palletQTY: string;
available_QTY: string;
coa_QTY: string;
held_QTY: string;
consignment_qty: string;
lot: string;
locationId: string;
laneDescription: string;
warehouseId: string;
warehouseDescription: string;
};
const historicalInvImport = async () => {
const today = new Date();
const { data, error } = await tryCatch(
db
.select()
.from(invHistoricalData)
.where(eq(invHistoricalData.histDate, format(today, "yyyy-MM-dd"))),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "query",
message: `Error getting historical inv info`,
data: error as any,
notify: false,
});
}
if (data?.length === 0) {
const avSQLQuery = sqlQuerySelector(`datamart.activeArticles`) as SqlQuery;
if (!avSQLQuery.success) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "inv",
message: `Error getting Article info`,
data: [avSQLQuery.message],
notify: true,
});
}
const { data: inv, error: invError } = await tryCatch(
//prodQuery(sqlQuery.query, "Inventory data"),
runDatamartQuery({ name: "inventory", options: { historical: "x" } }),
);
const { data: av, error: avError } = (await tryCatch(
runDatamartQuery({ name: "activeArticles", options: {} }),
)) as any;
if (invError) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "inv",
message: `Error getting inventory info from prod query`,
data: invError as any,
notify: false,
});
}
if (avError) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "inv",
message: `Error getting article info from prod query`,
data: avError as any,
notify: false,
});
}
// shape the data to go into our table
const plantToken = process.env.PROD_PLANT_TOKEN ?? "test1";
const importInv = (inv.data ? inv.data : []) as Inventory[];
const importData = importInv.map((i) => {
return {
histDate: sql`(NOW())::date`,
plantToken: plantToken,
article: i.article,
articleDescription: i.alias,
materialType:
av.data.filter((a: any) => a.article === i.article).length > 0
? av.data.filter((a: any) => a.article === i.article)[0]
?.TypeOfMaterial
: "Item not defined",
total_QTY: i.total_palletQTY ?? "0.00",
available_QTY: i.available_QTY ?? "0.00",
coa_QTY: i.coa_QTY ?? "0.00",
held_QTY: i.held_QTY ?? "0.00",
consignment_QTY: i.consignment_qty ?? "0.00",
lot_Number: i.lot ?? "0",
locationId: i.locationId ?? "0",
location: i.laneDescription ?? "Missing lane",
whseId: i.warehouseId ?? "0",
whseName: i.warehouseDescription ?? "Missing warehouse",
};
});
const { data: dataImport, error: errorImport } = await tryCatch(
db.insert(invHistoricalData).values(importData),
);
if (errorImport) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "inv",
message: `Error adding historical data to lst db`,
data: errorImport as any,
notify: true,
});
}
if (dataImport) {
return returnFunc({
success: true,
level: "info",
module: "logistics",
subModule: "inv",
message: `Historical data was added to lst :D`,
data: [],
notify: false,
});
}
} else {
return returnFunc({
success: true,
level: "info",
module: "logistics",
subModule: "inv",
message: `Historical Data for: ${format(today, "yyyy-MM-dd")} is already added; nothing to do.`,
data: [],
notify: false,
});
}
return returnFunc({
success: false,
level: "info",
module: "logistics",
subModule: "inv",
message: `An unexpected error occurred and was not captured during the historical inv check.`,
data: [],
notify: true,
});
};
export const historicalSchedule = async () => {
// run the import right away in case an update happens around the shift change time; this prevents data loss. The timing might be off a little, which is acceptable.
historicalInvImport();
const sqlQuery = sqlQuerySelector(`shiftChange`) as SqlQuery;
if (!sqlQuery.success) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "query",
message: `Error getting shiftChange sql file`,
data: [sqlQuery.message],
notify: false,
});
}
const { data, error } = await tryCatch(
prodQuery(sqlQuery.query, "Shift Change data"),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "logistics",
subModule: "query",
message: `Error getting shiftChange info`,
data: error as any,
notify: false,
});
}
// shift split
const shiftTimeSplit = data?.data[0]?.shiftChange?.split(":");
const cronSetup = `0 ${
shiftTimeSplit?.length > 0 ? `${parseInt(shiftTimeSplit[1])}` : "0"
} ${
shiftTimeSplit?.length > 0 ? `${parseInt(shiftTimeSplit[0])}` : "7"
} * * *`;
createCronJob("historicalInv", cronSetup, () => historicalInvImport());
};
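For a shiftChange value of "07:30", the pattern built above resolves to a six-field croner expression (seconds first). A quick worked sketch:
// Worked example of the cron string construction, assuming shiftChange = "07:30".
const shiftChange = "07:30";
const parts = shiftChange.split(":");
const cron = `0 ${parseInt(parts[1] ?? "0", 10)} ${parseInt(parts[0] ?? "7", 10)} * * *`;
console.log(cron); // "0 30 7 * * *" -> every day at 07:30:00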

View File

@@ -0,0 +1,113 @@
import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { sendEmail } from "../utils/sendEmail.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
/**
* Reprint notification runner: pulls the current notification row, runs its SQL check,
* records the latest audit id, and emails the subscribers.
*/
const func = async (data: any, emails: string) => {
// get the actual notification as items will be updated between intervals if no one touches
const { data: l, error: le } = (await tryCatch(
db.select().from(notifications).where(eq(notifications.id, data.id)),
)) as any;
if (le) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `${data.name} encountered an error while trying to get initial info`,
data: [le],
notify: true,
});
}
// search the query db for the query by name
const sqlQuery = sqlQuerySelector(`${data.name}`) as SqlQuery;
// create the ignore audit logs ids
const ignoreIds = l[0].options[0]?.auditId
? `${l[0].options[0]?.auditId}`
: "0";
// run the check
const { data: queryRun, error } = await tryCatch(
prodQuery(
sqlQuery.query
.replace("[intervalCheck]", l[0].interval)
.replace("[ignoreList]", ignoreIds),
`Running notification query: ${l[0].name}`,
),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Data for: ${l[0].name} encountered an error while trying to get it`,
data: [error],
notify: true,
});
}
if (queryRun.data.length > 0) {
// update the latest audit id
const { error: dbe } = await tryCatch(
db
.update(notifications)
.set({ options: [{ auditId: `${queryRun.data[0].id}` }] })
.where(eq(notifications.id, data.id)),
);
if (dbe) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Data for: ${l[0].name} encountered an error while updating its options`,
data: [dbe],
notify: true,
});
}
// send the email
const sentEmail = await sendEmail({
email: emails,
subject: "Alert! Label Reprinted",
template: "reprintLabels",
context: {
items: queryRun.data,
},
});
if (!sentEmail?.success) {
return returnFunc({
success: false,
level: "error",
module: "email",
subModule: "notification",
message: `${l[0].name} failed to send the email`,
data: [sentEmail],
notify: true,
});
}
} else {
console.log("doing nothing as there is nothing to do.");
}
// TODO send the error to systemAdmin users so they do not always need to be on the notifications.
// these errors are defined per notification.
};
export default func;

View File

@@ -0,0 +1,96 @@
import { eq } from "drizzle-orm";
import { type Response, Router } from "express";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { auth } from "../utils/auth.utils.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
const r = Router();
r.post("/", async (req, res: Response) => {
const hasPermissions = await auth.api.userHasPermission({
body: {
//userId: req?.user?.id,
role: req.user?.roles as any,
permissions: {
notifications: ["readAll"], // This must match the structure in your access control
},
},
});
if (!hasPermissions) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "post",
message: `You do not have permissions to be here`,
data: [],
status: 400,
});
}
const { data: nName, error: nError } = await tryCatch(
db
.select()
.from(notifications)
.where(eq(notifications.name, req.body.name)),
);
if (nError) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "get",
message: `There was an error getting the notifications`,
data: [nError],
status: 400,
});
}
const { data: sub, error: sError } = await tryCatch(
db
.select()
.from(notifications)
.where(eq(notifications.name, req.body.name)),
);
if (sError) {
return apiReturn(res, {
success: false,
level: "error",
module: "notification",
subModule: "get",
message: `There was an error getting the subs`,
data: [sError],
status: 400,
});
}
const emailString = [
...new Set(
sub.flatMap((e: any) =>
e.emails?.map((email: any) => email.trim().toLowerCase()),
),
),
].join(";");
console.log(emailString);
const { default: runFun } = await import(
`./notification.${req.body.name.trim()}.js`
);
const manual = await runFun(nName[0], "blake.matthes@alpla.com"); // hard-coded address for manual testing; emailString built above is the real recipient list
return apiReturn(res, {
success: true,
level: "info",
module: "notification",
subModule: "post",
message: `Manual Trigger ran`,
data: manual ?? [],
status: 200,
});
});
export default r;
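A hypothetical call to the manual trigger; the mount path comes from setupNotificationRoutes below, while the host, port, and session handling are assumptions:
// Hypothetical manual trigger request; requires an authenticated session.
await fetch("http://localhost:2222/lst/api/notification/manual", {
method: "POST",
headers: { "content-type": "application/json" },
body: JSON.stringify({ name: "reprint" }),
});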

View File

@@ -0,0 +1,114 @@
import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { delay } from "../utils/delay.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { sendEmail } from "../utils/sendEmail.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { v2QueryRun } from "../utils/pgConnectToLst.utils.js";
let shutoffv1 = false;
const func = async (data: any, emails: string) => {
// TODO: remove this disable once all 17 plants are on this new lst
if (!shutoffv1) {
v2QueryRun(
`update public.notifications set active = false where name = '${data.name}'`,
);
shutoffv1 = true;
}
const { data: l, error: le } = (await tryCatch(
db.select().from(notifications).where(eq(notifications.id, data.id)),
)) as any;
if (le) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `${data.name} encountered an error while trying to get initial info`,
data: le as any,
notify: true,
});
}
// search the query db for the query by name
const sqlQuery = sqlQuerySelector(`${data.name}`) as SqlQuery;
// get the latest blocking order id that was sent
const blockingOrderId = l[0].options[0].lastBlockingOrderIdSent ?? 69;
// run the check
const { data: queryRun, error } = await tryCatch(
prodQuery(
sqlQuery.query.replace("[lastBlocking]", blockingOrderId),
`Running notification query: ${l[0].name}`,
),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Data for: ${l[0].name} encountered an error while trying to get it`,
data: error as any,
notify: true,
});
}
if (queryRun.data.length > 0) {
for (const bo of queryRun.data) {
const sentEmail = await sendEmail({
email: emails,
subject: bo.subject,
template: "qualityBlocking",
context: {
items: bo,
},
});
if (!sentEmail?.success) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "email",
message: `${l[0].name} failed to send the email`,
data: sentEmail?.data as any,
notify: true,
});
}
await delay(1500);
const { error: dbe } = await tryCatch(
db
.update(notifications)
.set({ options: [{ lastBlockingOrderIdSent: bo.blockingNumber }] })
.where(eq(notifications.id, data.id)),
);
if (dbe) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Data for: ${l[0].name} encountered an error while updating its options`,
data: dbe as any,
notify: true,
});
}
}
}
};
export default func;

View File

@@ -1,10 +1,113 @@
const reprint = (data: any, emails: string) => {
// TODO: do the actual logic for the notification.
console.log(data);
console.log(emails);
import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { notifications } from "../db/schema/notifications.schema.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { sendEmail } from "../utils/sendEmail.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { v2QueryRun } from "../utils/pgConnectToLst.utils.js";
// TODO send the error to systemAdmin users so they do not always need to be on the notifications.
// these errors are defined per notification.
let shutoffv1 = false;
const func = async (data: any, emails: string) => {
// TODO: remove this disable once all 17 plants are on this new lst
if (!shutoffv1) {
v2QueryRun(
`update public.notifications set active = false where name = '${data.name}'`,
);
shutoffv1 = true;
}
const { data: l, error: le } = (await tryCatch(
db.select().from(notifications).where(eq(notifications.id, data.id)),
)) as any;
if (le) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `${data.name} encountered an error while trying to get initial info`,
data: le as any,
notify: true,
});
}
// search the query db for the query by name
const sqlQuery = sqlQuerySelector(`${data.name}`) as SqlQuery;
// build the list of audit log ids to ignore
const ignoreIds = l[0].options[0]?.auditId
? `${l[0].options[0]?.auditId}`
: "0";
// run the check
const { data: queryRun, error } = await tryCatch(
prodQuery(
sqlQuery.query
.replace("[intervalCheck]", l[0].interval)
.replace("[ignoreList]", ignoreIds),
`Running notification query: ${l[0].name}`,
),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Data for: ${l[0].name} encountered an error while trying to get it`,
data: error as any,
notify: true,
});
}
if (queryRun.data.length > 0) {
// update the latest audit id
const { error: dbe } = await tryCatch(
db
.update(notifications)
.set({ options: [{ auditId: `${queryRun.data[0].id}` }] })
.where(eq(notifications.id, data.id)),
);
if (dbe) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "query",
message: `Data for: ${l[0].name} encountered an error while updating its options`,
data: dbe as any,
notify: true,
});
}
// send the email
const sentEmail = await sendEmail({
email: emails,
subject: "Alert! Label Reprinted",
template: "reprintLabels",
context: {
items: queryRun.data,
},
});
if (!sentEmail?.success) {
return returnFunc({
success: false,
level: "error",
module: "notification",
subModule: "email",
message: `${l[0].name} failed to send the email`,
data: sentEmail?.data as any,
notify: true,
});
}
}
};
export default reprint;
export default func;

View File

@@ -1,5 +1,6 @@
import type { Express } from "express";
import { requireAuth } from "../middleware/auth.middleware.js";
import manual from "./notification.manualTrigger.js";
import getNotifications from "./notification.route.js";
import updateNote from "./notification.update.route.js";
import deleteSub from "./notificationSub.delete.route.js";
@@ -11,6 +12,7 @@ export const setupNotificationRoutes = (baseUrl: string, app: Express) => {
// stats will stay like this as we don't need to change it
app.use(`${baseUrl}/api/notification`, requireAuth, getNotifications);
app.use(`${baseUrl}/api/notification`, requireAuth, updateNote);
app.use(`${baseUrl}/api/notification/manual`, requireAuth, manual);
app.use(`${baseUrl}/api/notification/sub`, requireAuth, subs);
app.use(`${baseUrl}/api/notification/sub`, requireAuth, newSub);
app.use(`${baseUrl}/api/notification/sub`, requireAuth, updateSub);

View File

@@ -3,12 +3,12 @@ import { type Response, Router } from "express";
import z from "zod";
import { db } from "../db/db.controller.js";
import { notificationSub } from "../db/schema/notifications.sub.schema.js";
import { auth } from "../utils/auth.utils.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { modifiedNotification } from "./notification.controller.js";
const newSubscribe = z.object({
emails: z.email().array().describe("An array of emails"),
userId: z.string().describe("User id."),
notificationId: z.string().describe("Notification id"),
});
@@ -16,14 +16,29 @@ const newSubscribe = z.object({
const r = Router();
r.delete("/", async (req, res: Response) => {
const hasPermissions = await auth.api.userHasPermission({
body: {
//userId: req?.user?.id,
role: req.user?.roles as any,
permissions: {
notifications: ["readAll"], // This must match the structure in your access control
},
},
});
try {
const validated = newSubscribe.parse(req.body);
const { data, error } = await tryCatch(
db
.delete(notificationSub)
.where(
and(
eq(notificationSub.userId, validated.userId),
eq(
notificationSub.userId,
hasPermissions ? validated.userId : (req?.user?.id ?? ""),
), // allows the admin to delete this
//eq(notificationSub.userId, req?.user?.id ?? ""),
eq(notificationSub.notificationId, validated.notificationId),
),
)
@@ -44,6 +59,18 @@ r.delete("/", async (req, res: Response) => {
});
}
if (data.length <= 0) {
return apiReturn(res, {
success: false,
level: "info",
module: "notification",
subModule: "post",
message: `Subscription was not deleted; invalid data was sent over`,
data: data ?? [],
status: 200,
});
}
return apiReturn(res, {
success: true,
level: "info",

View File

@@ -21,12 +21,16 @@ r.get("/", async (req, res: Response) => {
},
});
if (userId) {
hasPermissions.success = false;
}
const { data, error } = await tryCatch(
db
.select()
.from(notificationSub)
.where(
userId || !hasPermissions.success
!hasPermissions.success
? eq(notificationSub.userId, `${req?.user?.id ?? ""}`)
: undefined,
),

View File

@@ -25,8 +25,25 @@ r.post("/", async (req, res: Response) => {
try {
const validated = newSubscribe.parse(req.body);
const emails = validated.emails
.map((e) => e.trim().toLowerCase())
.filter(Boolean);
const uniqueEmails = [...new Set(emails)];
const { data, error } = await tryCatch(
db.insert(notificationSub).values(validated).returning(),
db
.insert(notificationSub)
.values({
userId: req?.user?.id ?? "",
notificationId: validated.notificationId,
emails: uniqueEmails,
})
.onConflictDoUpdate({
target: [notificationSub.userId, notificationSub.notificationId],
set: { emails: uniqueEmails },
})
.returning(),
);
await modifiedNotification(validated.notificationId);

View File

@@ -14,7 +14,27 @@ const note: NewNotification[] = [
"Monitors the labels that are printed and returns a there data, if one falls withing the time frame.",
active: false,
interval: "10",
options: [{ prodID: 1 }],
options: [{ auditId: [0] }],
},
{
name: "qualityBlocking",
description:
"Checks for new blocking orders that have been entered, recommend to get the most recent order in here before activating.",
active: false,
interval: "10",
options: [{ lastBlockingOrderIdSent: 1 }],
},
{
name: "alplaPurchaseHistory",
description:
"Will check the alpla purchase data for any changes, if the req has not been sent already then we will send this, for a po or fresh order we will ignore. ",
active: false,
interval: "5",
options: [
{ sentReqs: [{ timeStamp: "0", req: 1, approved: false }] },
{ sentAPOs: [{ timeStamp: "0", apo: 1 }] },
{ sentRCT: [{ timeStamp: "0", rct: 1 }] },
],
},
];
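A hypothetical type sketch for the options payloads seeded above; the schema stores them as loose JSON, so nothing enforces these shapes:
// Hypothetical union describing the seeded options values (not enforced by the schema).
type ReprintOptions = { auditId: number[] };
type QualityBlockingOptions = { lastBlockingOrderIdSent: number };
type PurchaseHistoryOptions =
| { sentReqs: { timeStamp: string; req: number; approved: boolean }[] }
| { sentAPOs: { timeStamp: string; apo: number }[] }
| { sentRCT: { timeStamp: string; rct: number }[] };
type NotificationOptions = (
| ReprintOptions
| QualityBlockingOptions
| PurchaseHistoryOptions
)[];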

View File

@@ -14,20 +14,82 @@
*/
import { Router } from "express";
import multer from "multer";
import { db } from "../db/db.controller.js";
import { printerLog } from "../db/schema/printerLogs.schema.js";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
type PrinterEvent = {
name: string;
condition: string;
message: string;
};
const r = Router();
const upload = multer();
r.post("/printer/listener/:printer", async (req, res) => {
const parseZebraAlert = (body: any): PrinterEvent => {
const name = body.uniqueId || "unknown";
const decoded = decodeURIComponent(body.alertMsg || "");
const [conditionRaw, ...rest] = decoded.split(":");
const condition = conditionRaw?.toLowerCase()?.trim() || "unknown";
const message = rest.join(":").trim();
return {
name,
condition,
message,
};
};
r.post("/printer/listener/:printer", upload.any(), async (req, res) => {
const { printer: printerName } = req.params;
console.log(req.body);
const event: PrinterEvent = parseZebraAlert(req.body);
const rawIp =
req.headers["x-forwarded-for"]?.toString().split(",")[0]?.trim() ||
req.socket.remoteAddress ||
req.ip;
const ip = rawIp?.replace("::ffff:", "");
// post the new message
const { data, error } = await tryCatch(
db
.insert(printerLog)
.values({
ip, // already normalized above
name: printerName,
printerSN: event.name,
condition: event.condition,
message: event.message,
})
.returning(),
);
if (error) {
return apiReturn(res, {
success: false,
level: "info",
module: "ocp",
subModule: "printing",
message: `${printerName} encountered an error posting the log`,
data: error as any,
status: 400,
});
}
if (data) {
// TODO: send message over to the controller to decide what to do next with it
}
return apiReturn(res, {
success: true,
level: "info",
module: "ocp",
subModule: "printing",
message: `${printerName} just passed over a message`,
message: `${printerName} just sent a message`,
data: req.body ?? [],
status: 200,
});

View File

@@ -10,10 +10,323 @@
* printer status will live here this will be how we manage all the levels of status like 3 paused, 1 printing, 8 error, 10 power up, etc...
*/
import { eq } from "drizzle-orm";
import net from "net";
import { db } from "../db/db.controller.js";
import { printerData } from "../db/schema/printers.schema.js";
import { createLogger } from "../logger/logger.controller.js";
import { delay } from "../utils/delay.utils.js";
import { runProdApi } from "../utils/prodEndpoint.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
type Printer = {
name: string;
humanReadableId: string;
type: number;
ipAddress: string;
port: number;
default: boolean;
labelInstanceIpAddress: string;
labelInstancePort: number;
active: boolean;
remark: string;
processes: number[];
};
const log = createLogger({ module: "ocp", subModule: "printers" });
export const printerManager = async () => {};
export const printerHeartBeat = async () => {
// heat heats will be defaulted to 60 seconds no reason to allow anything else
// heart beats will default to 60 seconds, no reason to allow anything else, and heart beats will only go to assigned printers; no need to monitor non-labeling printers
};
//export const printerStatus = async (statusNr: number, printerId: number) => {};
export const printerSync = async () => {
// pull the printers from alpla prod and update them in lst
const printers = await runProdApi({
method: "get",
endpoint: "/public/v1.0/Administration/Printers",
});
if (!printers?.success) {
return returnFunc({
success: false,
level: "error",
module: "ocp",
subModule: "printer",
message: printers?.message ?? "",
data: printers?.data ?? [],
notify: false,
});
}
if (printers?.success) {
const ignorePrinters = ["pdf24", "standard"];
const validPrinters =
printers.data.filter(
(n: any) =>
!ignorePrinters.includes(n.name.toLowerCase()) && n.ipAddress,
) ?? [];
if (validPrinters.length) {
for (const printer of validPrinters as Printer[]) {
// run an update for each printer, do on conflicts based on the printer id
log.debug({}, `Adding/updating ${printer.name}`);
if (printer.active) {
await db
.insert(printerData)
.values({
name: printer.name,
humanReadableId: printer.humanReadableId,
ipAddress: printer.ipAddress,
port: printer.port,
remark: printer.remark,
processes: printer.processes,
})
.onConflictDoUpdate({
target: printerData.humanReadableId,
set: {
name: printer.name,
humanReadableId: printer.humanReadableId,
ipAddress: printer.ipAddress,
port: printer.port,
remark: printer.remark,
processes: printer.processes,
},
})
.returning();
await tcpPrinter(printer);
}
if (!printer.active) {
log.warn({}, `${printer.name} is not active so removing from lst.`);
await db
.delete(printerData)
.where(eq(printerData.humanReadableId, printer.humanReadableId));
}
}
return returnFunc({
success: true,
level: "info",
module: "ocp",
subModule: "printer",
message: `${validPrinters.length} printers were just synced; this includes new and existing printers`,
data: [],
notify: false,
});
}
}
return returnFunc({
success: true,
level: "info",
module: "ocp",
subModule: "printer",
message: `No printers to update`,
data: [],
notify: false,
});
};
const tcpPrinter = (printer: Printer) => {
return new Promise<void>((resolve) => {
const socket = new net.Socket();
const timeoutMs = 15 * 1000;
const commands = [
{
key: "clearAlerts",
command: '! U1 setvar "alerts.configured" ""\r\n',
},
{
key: "addAlert",
command: `! U1 setvar "alerts.add" "ALL MESSAGES,HTTP-POST,Y,Y,http://${process.env.SERVER_IP}:${process.env.PORT}/lst/api/ocp/printer/listener/${printer.name},0,N,printer"\r\n`,
},
{
key: "setFriendlyName",
command: `! U1 setvar "device.friendly_name" "${printer.name}"\r\n`,
},
{
key: "getUniqueId",
command: '! U1 getvar "device.unique_id"\r\n',
},
] as const;
let currentCommandIndex = 0;
let awaitingSerial = false;
let settled = false;
const cleanup = () => {
socket.removeAllListeners();
socket.destroy();
};
const finish = (err?: unknown) => {
if (settled) return;
settled = true;
clearTimeout(timeout);
cleanup();
if (err) {
log.error(
{ err, printer: printer.name },
`Printer update failed for ${printer.name}: set the friendly name and alert directly on the printer instead.`,
);
}
resolve();
};
const timeout = setTimeout(() => {
finish(`${printer.name} timed out while updating printer config`);
}, timeoutMs);
const sendNext = async () => {
if (currentCommandIndex >= commands.length) {
socket.end();
return;
}
const current = commands[currentCommandIndex];
if (!current) {
socket.end();
return;
}
awaitingSerial = current.key === "getUniqueId";
log.info(
{ printer: printer.name, command: current.key },
`Sending command to ${printer.name}`,
);
socket.write(current.command);
currentCommandIndex++;
// Small pause between commands so the printer has breathing room
if (currentCommandIndex < commands.length) {
await delay(1500);
await sendNext();
} else {
// last command was sent, now wait for final data/close
await delay(1500);
socket.end();
}
};
socket.connect(printer.port, printer.ipAddress, async () => {
log.info({}, `Connected to ${printer.name}`);
try {
await sendNext();
} catch (error) {
finish(
error instanceof Error
? error
: new Error(
`Unknown error while sending commands to ${printer.name}`,
),
);
}
});
socket.on("data", async (data) => {
const response = data.toString().trim().replaceAll('"', "");
log.info(
{ printer: printer.name, response },
`Received printer response from ${printer.name}`,
);
if (!awaitingSerial) return;
awaitingSerial = false;
try {
await db
.update(printerData)
.set({ printerSN: response })
.where(eq(printerData.humanReadableId, printer.humanReadableId));
} catch (error) {
finish(
error instanceof Error
? error
: new Error(`Failed to update printer SN for ${printer.name}`),
);
}
});
socket.on("close", () => {
log.info({}, `Closed connection to ${printer.name}`);
finish();
});
socket.on("error", (err) => {
finish(err);
});
});
};
// const tcpPrinter = async (printer: Printer) => {
// const p = new net.Socket();
// const commands = [
// '! U1 setvar "alerts.configured" ""\r\n', // clean install just remove all alerts
// `! U1 setvar "alerts.add" "ALL MESSAGES,HTTP-POST,Y,Y,http://${process.env.SERVER_IP}:${process.env.PORT}/lst/api/ocp/printer/listener/${printer.name},0,N,printer"\r\n`, // add in the all alert
// `! U1 setvar "device.friendly_name" "${printer.name}"\r\n`, // change the name to match the alplaprod name
// `! U1 getvar "device.unique_id"\r\n`, // this will get mapped into the printer as this is the one we will link to in the db.
// //'! U1 getvar "alerts.configured" ""\r\n',
// ];
// let index = 0;
// const sendNext = async () => {
// if (index >= commands.length) {
// p.end();
// return;
// }
// const cmd = commands[index] as string;
// p.write(cmd);
// return;
// };
// p.connect(printer.port, printer.ipAddress, async () => {
// log.info({}, `Connected to ${printer.name}`);
// while (index < commands.length) {
// await sendNext();
// await delay(2000);
// index++;
// }
// });
// p.on("data", async (data) => {
// // this is just the sn that comes over so we will update this printer.
// await db
// .update(printerData)
// .set({ printerSN: data.toString().trim().replaceAll('"', "") })
// .where(eq(printerData.humanReadableId, printer.humanReadableId));
// // get the name
// // p.write('! U1 getvar "device.friendly_name"\r\n');
// // p.write('! U1 getvar "device.unique_id"\r\n');
// // p.write('! U1 getvar "alerts.configured"\r\n');
// });
// p.on("close", () => {
// log.info({}, `Closed connection to ${printer.name}`);
// p.destroy();
// return;
// });
// p.on("error", (err) => {
// log.info(
// { stack: err },
// `${printer.name} encountered an error while trying to update`,
// );
// return;
// });
// };
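For reference, a commented sketch of the alerts.add value tcpPrinter sends; the field meanings are an assumption based on Zebra's SGD alert syntax and are not confirmed anywhere in this repo:
// Hypothetical field-by-field breakdown of the alerts.add SGD value.
const alertsAddValue = [
"ALL MESSAGES", // condition to watch
"HTTP-POST", // destination type
"Y", // fire when the condition sets
"Y", // fire when the condition clears
`http://${process.env.SERVER_IP}:${process.env.PORT}/lst/api/ocp/printer/listener/somePrinter`, // listener URL (printer name assumed)
"0", // port field, unused for HTTP-POST (assumption)
"N", // quelling off
"printer", // destination-specific data
].join(",");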

View File

@@ -0,0 +1,38 @@
/**
* The route that listens for the printer's POST.
*
* An HTTP-POST alert should be set up on each printer pointing to the URL below. At minimum you will
* want an alert for printer pause; you can send all messages here since the listener will also monitor
* and act on all of them.
*
* http://{serverIP}:2222/lst/api/ocp/printer/listener/{printerName}
*
* The messages are sent to the db for logging, and specific ones trigger an action:
*
* pause will validate whether printing can continue
* close head will re-pause the printer so it won't print a label
* power up will re-pause the printer so it won't print a label
*/
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
//import { tryCatch } from "../utils/trycatch.utils.js";
import { printerSync } from "./ocp.printer.manage.js";
const r = Router();
r.post("/printer/update", async (_, res) => {
printerSync();
return apiReturn(res, {
success: true,
level: "info",
module: "ocp",
subModule: "printing",
message:
"Printer update has been triggered to monitor progress please head to the logs.",
data: [],
status: 200,
});
});
export default r;

View File

@@ -2,6 +2,7 @@ import { type Express, Router } from "express";
import { requireAuth } from "../middleware/auth.middleware.js";
import { featureCheck } from "../middleware/featureActive.middleware.js";
import listener from "./ocp.printer.listener.js";
import update from "./ocp.printer.update.js";
export const setupOCPRoutes = (baseUrl: string, app: Express) => {
//setup all the routes
@@ -16,6 +17,7 @@ export const setupOCPRoutes = (baseUrl: string, app: Express) => {
// auth routes below here
router.use(requireAuth);
router.use(update);
//router.use("");
app.use(`${baseUrl}/api/ocp`, router);

View File

@@ -17,15 +17,6 @@ import { returnFunc } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { getToken, odToken } from "./opendock.utils.js";
let lastCheck = formatInTimeZone(
new Date().toISOString(),
"America/New_York",
"yyyy-MM-dd HH:mm:ss",
);
//const queue: unknown[] = [];
//const isProcessing: boolean = false;
type Releases = {
ReleaseNumber: number;
DeliveryState: number;
@@ -37,10 +28,38 @@ type Releases = {
LineItemArticleWeight: number;
CustomerReleaseNumber: string;
};
const timeZone = process.env.TIMEZONE as string;
const TWENTY_FOUR_HOURS = 24 * 60 * 60 * 1000;
const log = createLogger({ module: "opendock", subModule: "releaseMonitor" });
// make the cron safer against overlapping runs
let opendockSyncRunning = false;
let lastCheck = formatInTimeZone(
new Date().toISOString(),
timeZone,
"yyyy-MM-dd HH:mm:ss",
);
// const lastCheck = formatInTimeZone(
// new Date().toISOString(),
// `America/New_York`, //TODO: Pull timezone from the .env last as process.env.TIME_ZONE is not working so need to figure itout
// "yyyy-MM-dd HH:mm:ss",
// );
//const queue: unknown[] = [];
//const isProcessing: boolean = false;
// const parseDbDate = (value: string | Date) => {
// if (value instanceof Date) return value;
// // normalize "2026-04-08 13:10:43.280" -> "2026-04-08T13:10:43.280"
// const normalized = value.replace(" ", "T");
// // interpret that wall-clock time as America/New_York
// return fromZonedTime(normalized, timeZone);
// };
const postRelease = async (release: Releases) => {
if (!odToken.odToken) {
log.info({}, "Getting Auth Token");
@@ -152,22 +171,25 @@ const postRelease = async (release: Releases) => {
};
// TODO: pull the current added releases from the db and if one matches then we want to get its id and run the update vs create
const { data: apt, error: aptError } = await tryCatch(
db.select().from(opendockApt),
const { data: existingApt, error: aptError } = await tryCatch(
db
.select()
.from(opendockApt)
.where(eq(opendockApt.release, release.ReleaseNumber))
.limit(1),
);
if (aptError) {
log.error({ error: aptError }, "Error getting apt data");
// TODO: send an error email on this one as it will cause issues
return;
}
const releaseCheck = apt.filter((r) => r.release === release.ReleaseNumber);
const existing = existingApt[0];
//console.log(releaseCheck);
if (releaseCheck.length > 0) {
const id = releaseCheck[0]?.openDockAptId;
if (existing) {
const id = existing.openDockAptId;
try {
const response = await axios.patch(
`${process.env.OPENDOCK_URL}/appointment/${id}`,
@@ -196,7 +218,11 @@ const postRelease = async (release: Releases) => {
})
.onConflictDoUpdate({
target: opendockApt.release,
set: { appointment: response.data.data, upd_date: sql`NOW()` },
set: {
openDockAptId: response.data.data.id,
appointment: response.data.data,
upd_date: sql`NOW()`,
},
})
.returning();
@@ -250,8 +276,12 @@ const postRelease = async (release: Releases) => {
appointment: response.data.data,
})
.onConflictDoUpdate({
target: opendockApt.id,
set: { appointment: response.data.data, upd_date: sql`NOW()` },
target: opendockApt.release,
set: {
openDockAptId: response.data.data.id,
appointment: response.data.data,
upd_date: sql`NOW()`,
},
})
.returning();
@@ -270,7 +300,7 @@ const postRelease = async (release: Releases) => {
}
}
await delay(500); // rate limit protection
await delay(750); // rate limit protection
};
export const monitorReleaseChanges = async () => {
@@ -298,184 +328,66 @@ export const monitorReleaseChanges = async () => {
}
if (openDockMonitor[0]?.active) {
createCronJob("opendock_sync", "*/15 * * * * *", async () => {
try {
const result = await prodQuery(
sqlQuery.query.replace("[dateCheck]", `'${lastCheck}'`),
"Get release info",
);
// const BUFFER_MS =
// Math.floor(parseInt(openDockMonitor[0]?.value, 10) || 30) * 1.5 * 1000; // this should be >= to the interval we set in the cron TODO: should pull the buffer from the setting and give it an extra 10% then round to nearest int.
if (result.data.length) {
for (const release of result.data) {
await postRelease(release);
lastCheck = formatInTimeZone(
new Date(release.Upd_Date).toISOString(),
"UTC",
"yyyy-MM-dd HH:mm:ss",
);
await delay(500);
}
createCronJob(
"opendock_sync",
`*/${parseInt(openDockMonitor[0]?.value, 10) || 30} * * * * *`,
async () => {
if (opendockSyncRunning) {
log.warn(
{},
"Skipping opendock_sync because previous run is still active",
);
return;
}
} catch (e) {
console.error(
{ error: e },
"Error occurred while running the monitor job",
);
log.error({ error: e }, "Error occurred while running the monitor job");
}
});
opendockSyncRunning = true;
try {
// set this to the latest time.
const result = await prodQuery(
sqlQuery.query.replace("[dateCheck]", `'${lastCheck}'`),
"Get release info",
);
log.debug(
{ lastCheck },
`${result.data.length} Changes to a release have been made`,
);
if (result.data.length) {
for (const release of result.data) {
await postRelease(release);
// add 2 seconds to account for a massive influx of orders; when we don't finish in one pass it won't try to grab the same rows again
const nDate = new Date(release.Upd_Date);
nDate.setSeconds(nDate.getSeconds() + 2);
lastCheck = formatInTimeZone(
nDate.toISOString(),
"UTC",
"yyyy-MM-dd HH:mm:ss",
);
log.debug({ lastCheck }, "Changes to a release have been made");
await delay(500);
}
}
} catch (e) {
console.error(
{ error: e },
"Error occurred while running the monitor job",
);
log.error(
{ error: e },
"Error occurred while running the monitor job",
);
} finally {
opendockSyncRunning = false;
}
},
"monitorReleaseChanges",
);
}
// run the main game loop
// while (openDockSetting) {
// try {
// const result = await prodQuery(
// sqlQuery.query.replace("[dateCheck]", `'${lastCheck}'`),
// "Get release info",
// );
// if (result.data.length) {
// for (const release of result.data) {
// // potentially move this to a buffer table to easy up on memory
// await postRelease(release);
// // Move checkpoint AFTER successful post
// lastCheck = formatInTimeZone(
// new Date(release.Upd_Date).toISOString(),
// "UTC",
// "yyyy-MM-dd HH:mm:ss",
// );
// await delay(500);
// }
// }
// } catch (e) {
// console.error("Monitor error:", e);
// }
// await delay(15 * 1000); // making this 15 seconds as we would really only see issues if we have a mass burst.
// }
};
// export const monitorReleaseChanges = async () => {
// console.log("Starting release monitor", lastCheck);
// setInterval(async () => {
// try {
// const result = await prodQuery(
// releaseQuery.replace("[dateCheck]", `'${lastCheck}'`),
// "get last release change",
// );
// //console.log(releaseQuery.replace("[dateCheck]", `'${lastCheck}'`));
// if (result.data.length > 0) {
// console.log(
// formatInTimeZone(
// result.data[result.data.length - 1].Upd_Date,
// "UTC",
// "yyyy-MM-dd HH:mm:ss",
// ),
// lastCheck,
// );
// lastCheck = formatInTimeZone(
// result.data[result.data.length - 1].Upd_Date,
// "UTC",
// "yyyy-MM-dd HH:mm:ss",
// );
// const releases = result.data;
// for (let i = 0; i < releases.length; i++) {
// const newDockApt = {
// status: "Scheduled",
// userId: "ee956455-e193-47fc-b53b-dff30fabdf4b", // this should be the carrierid
// loadTypeId: "0aa7988e-b17b-4f10-acdd-3d029b44a773", // well get this and make it a default one
// dockId: "00ba4386-ce5a-4dd1-9356-6e6d10a24609", // this the warehouse we want it in to start out
// refNumbers: [releases[i].ReleaseNumber],
// refNumber: releases[i].ReleaseNumber,
// start: releases[i].DeliveryDate,
// end: addHours(releases[i].DeliveryDate, 1),
// notes: "",
// ccEmails: [""],
// muteNotifications: true,
// metadata: {
// externalValidationFailed: false,
// externalValidationErrorMessage: null,
// },
// units: null,
// customFields: [
// {
// name: "strArticle",
// type: "str",
// label: "Article",
// value: `${releases[i].LineItemHumanReadableId} - ${releases[i].ArticleAlias}`,
// description: "What bottle are we sending ",
// placeholder: "",
// dropDownValues: [],
// minLengthOrValue: 1,
// hiddenFromCarrier: false,
// requiredForCarrier: false,
// requiredForWarehouse: false,
// },
// {
// name: "intPallet Count",
// type: "int",
// label: "Pallet Count",
// value: parseInt(releases[i].LoadingUnits, 10),
// description: "How many pallets",
// placeholder: "22",
// dropDownValues: [],
// minLengthOrValue: 1,
// hiddenFromCarrier: false,
// requiredForCarrier: false,
// requiredForWarehouse: false,
// },
// {
// name: "strTotal Weight",
// type: "str",
// label: "Total Weight",
// value: `${(((releases[i].Quantity * releases[i].LineItemArticleWeight) / 1000) * 2.20462).toFixed(2)}`,
// description: "What is the total weight of the load",
// placeholder: "",
// dropDownValues: [],
// minLengthOrValue: 1,
// hiddenFromCarrier: false,
// requiredForCarrier: false,
// requiredForWarehouse: false,
// },
// {
// name: "strCustomer ReleaseNumber",
// type: "str",
// label: "Customer Release Number",
// value: `${releases[i].CustomerReleaseNumber}`,
// description: "What is the customer release number",
// placeholder: "",
// dropDownValues: [],
// minLengthOrValue: 1,
// hiddenFromCarrier: false,
// requiredForCarrier: false,
// requiredForWarehouse: false,
// },
// ],
// };
// //console.log(newDockApt);
// const newDockResult = await axios.post(
// "https://neutron.staging.opendock.com/appointment",
// newDockApt,
// {
// headers: {
// "content-type": "application/json; charset=utf-8",
// },
// },
// );
// console.log(newDockResult.statusText);
// await delay(500);
// }
// }
// } catch (e) {
// console.log(e);
// }
// }, 5 * 1000);
// };

View File

@@ -28,7 +28,7 @@ export const getToken = async () => {
}
odToken = { odToken: data.access_token, tokenDate: new Date() };
log.info({}, "Token added");
log.info({ odToken }, "Token added");
} catch (e) {
log.error({ error: e }, "Error getting/refreshing token");
}

View File

@@ -36,12 +36,12 @@ export const opendockSocketMonitor = async () => {
// console.log(data);
// });
socket.on("create-Appointment", (data) => {
console.log("appt create:", data);
socket.on("create-Appointment", () => {
//console.log("appt create:", data);
});
socket.on("update-Appointment", (data) => {
console.log("appt update:", data);
socket.on("update-Appointment", () => {
//console.log("appt update:", data);
});
socket.on("error", (data) => {

View File

@@ -7,12 +7,17 @@ import { returnFunc } from "../utils/returnHelper.utils.js";
export let pool: sql.ConnectionPool;
export let connected: boolean = false;
export let reconnecting = false;
// start the delay out as 2 seconds
let delayStart = 2000;
let attempt = 0;
const maxAttempts = 10;
export const connectProdSql = async () => {
const serverUp = await checkHostnamePort(`${process.env.PROD_SERVER}:1433`);
if (!serverUp) {
// we will try to reconnect
connected = false;
reconnectToSql();
return returnFunc({
success: false,
level: "error",
@@ -48,6 +53,7 @@ export const connectProdSql = async () => {
notify: false,
});
} catch (error) {
reconnectToSql();
return returnFunc({
success: false,
level: "error",
@@ -104,11 +110,6 @@ export const reconnectToSql = async () => {
//set reconnecting to true while we try to reconnect
reconnecting = true;
// start the delay out as 2 seconds
let delayStart = 2000;
let attempt = 0;
const maxAttempts = 10;
while (!connected && attempt < maxAttempts) {
attempt++;
log.info(
@@ -121,7 +122,7 @@ export const reconnectToSql = async () => {
if (!serverUp) {
delayStart = Math.min(delayStart * 2, 30000); // exponential backoff until up to 30000
return;
continue;
}
try {
@@ -133,19 +134,12 @@ export const reconnectToSql = async () => {
);
} catch (error) {
delayStart = Math.min(delayStart * 2, 30000);
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "db",
message: "Failed to reconnect to the prod sql server.",
data: [error],
notify: false,
});
delayStart = Math.min(delayStart * 2, 30000);
log.error({ error }, "Failed to reconnect to the prod sql server.");
}
}
if (!connected) {
if (!connected && attempt >= maxAttempts) {
log.error(
{ notify: true },
"Max reconnect attempts reached on the prodSql server. Stopping retries.",

View File

@@ -1,10 +1,5 @@
import { returnFunc } from "../utils/returnHelper.utils.js";
import {
connected,
pool,
reconnecting,
reconnectToSql,
} from "./prodSqlConnection.controller.js";
import { connected, pool } from "./prodSqlConnection.controller.js";
interface SqlError extends Error {
code?: string;
@@ -22,29 +17,15 @@ interface SqlError extends Error {
*/
export const prodQuery = async (queryToRun: string, name: string) => {
if (!connected) {
reconnectToSql();
if (reconnecting) {
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "prodSql",
message: `The sql ${process.env.PROD_PLANT_TOKEN} is trying to reconnect already`,
data: [],
notify: false,
});
} else {
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "prodSql",
message: `${process.env.PROD_PLANT_TOKEN} is not connected, and failed to connect.`,
data: [],
notify: true,
});
}
return returnFunc({
success: false,
level: "error",
module: "system",
subModule: "prodSql",
message: `${process.env.PROD_PLANT_TOKEN} is offline or attempting to reconnect`,
data: [],
notify: false,
});
}
//change to the correct server

View File

@@ -0,0 +1,63 @@
use AlplaPROD_test1
declare @intervalCheck as int = '[interval]'
/*
Monitors alpla purchase for anything new. This will not update unless the order status is updated;
this means if a user just reopens the order it will update, but changes to the positions will not appear until the user reorders or cancels the PO.
*/
select
IdBestellung as apo
,po.revision as revision
,po.Bestaetigt as confirmed
,po.status
,case po.Status
when 1 then 'Created'
when 2 then 'Ordered'
when 22 then 'Reopened'
when 11 then 'Reopened'
when 4 then 'Planned'
when 5 then 'Partly Delivered'
when 6 then 'Delivered'
when 7 then 'Canceled'
when 8 then 'Closed'
else 'Unknown' end as statusText
,po.IdJournal as journalNum -- use this to validate if we used it already.
,po.Add_User as add_user
,po.Add_Date as add_date
,po.Upd_User as upd_user
,po.Upd_Date as upd_Date
,po.Bemerkung as remark
,po.IdJournal as journal -- use this to validate if we used it already.
,isnull((
select
o.IdArtikelVarianten as av
,a.Bezeichnung as alias
,Lieferdatum as deliveryDate
,cast(BestellMenge as decimal(18,2)) as qty
,cast(BestellMengeVPK as decimal(18,0)) as pkg
,cast(PreisProEinheit as decimal(18,0)) as price
,PositionsStatus
,case PositionsStatus
when 1 then 'Created'
when 2 then 'Ordered'
when 22 then 'Reopened'
when 4 then 'Planned'
when 5 then 'Partly Delivered'
when 6 then 'Delivered'
when 7 then 'Canceled'
when 8 then 'Closed'
else 'Unknown' end as statusText
,o.upd_user
,o.upd_date
from T_Bestellpositionen (nolock) as o
left join
T_Artikelvarianten as a on
a.IdArtikelvarianten = o.IdArtikelVarianten
where o.IdBestellung = po.IdBestellung
for json path
), '[]') as position
--,*
from T_Bestellungen (nolock) as po
where po.Upd_Date > dateadd(MINUTE, -@intervalCheck, getdate())

View File

@@ -1,6 +1,6 @@
use AlplaPROD_test1
SELECT V_Artikel.IdArtikelvarianten,
SELECT V_Artikel.IdArtikelvarianten as article,
V_Artikel.Bezeichnung,
V_Artikel.ArtikelvariantenTypBez,
V_Artikel.PreisEinheitBez,

View File

@@ -0,0 +1,43 @@
/**
This will replace activeArticles once all data is remapped into this query.
Make a note in the docs that activeArticles will go stale sooner or later.
**/
use [test1_AlplaPROD2.0_Read]
select a.Id,
a.HumanReadableId as av,
a.Alias as alias,
p.LoadingUnitsPerTruck as loadingUnitsPerTruck,
p.LoadingUnitsPerTruck * p.LoadingUnitPieces as qtyPerTruck,
p.LoadingUnitPieces,
case when i.MinQuantity IS NOT NULL then round(cast(i.MinQuantity as float), 2) else 0 end as min,
case when i.MaxQuantity IS NOT NULL then round(cast(i.MaxQuantity as float),2) else 0 end as max
from masterData.Article (nolock) as a
/* sales price */
left join
(select *
from (select
id,
PackagingId,
ArticleId,
DefaultCustomer,
ROW_NUMBER() OVER (PARTITION BY ArticleId ORDER BY ValidAfter DESC) AS RowNum
from masterData.SalesPrice (nolock)
where DefaultCustomer = 1) as x
where RowNum = 1
) as s
on a.id = s.ArticleId
/* pkg instructions */
left join
masterData.PackagingInstruction (nolock) as p
on s.PackagingId = p.id
/* stock limits */
left join
masterData.StockLimit (nolock) as i
on a.id = i.ArticleId
where a.active = 1
and a.HumanReadableId in ([articles])

View File

@@ -0,0 +1,45 @@
select x.idartikelVarianten as av
,ArtikelVariantenAlias as Alias
--x.Lfdnr as RunningNumber,
--,round(sum(EinlagerungsMengeVPKSum),0) as Total_Pallets
--,sum(EinlagerungsMengeSum) as Total_PalletQTY
,round(sum(VerfuegbareMengeVPKSum),0) as Available_Pallets
,sum(VerfuegbareMengeSum) as Available_PalletQTY
,sum(case when c.Description LIKE '%COA%' then GesperrteMengeVPKSum else 0 end) as COA_Pallets
,sum(case when c.Description LIKE '%COA%' then GesperrteMengeSum else 0 end) as COA_QTY
--,sum(case when c.Description NOT LIKE '%COA%' then GesperrteMengeVPKSum else 0 end) as Held_Pallets
--,sum(case when c.Description NOT LIKE '%COA%' then GesperrteMengeSum else 0 end) as Held_QTY
,IdProdPlanung as Lot
--,IdAdressen
--,x.AdressBez
--,*
from [AlplaPROD_test1].dbo.[V_LagerPositionenBarcodes] (nolock) x
left join
[AlplaPROD_test1].dbo.T_EtikettenGedruckt (nolock) on
x.Lfdnr = T_EtikettenGedruckt.Lfdnr AND T_EtikettenGedruckt.Lfdnr > 1
left join
(SELECT *
FROM [AlplaPROD_test1].[dbo].[T_BlockingDefects] (nolock) where Active = 1) as c
on x.IdMainDefect = c.IdBlockingDefect
/*
The data below will be controlled by the user in Excel; by default everything is passed over.
IdAdressen = 3
*/
where
--IdArtikelTyp = 1
x.IdWarenlager not in (6, 1)
--and IdAdressen
--and x.IdWarenlager in (0)
group by x.IdArtikelVarianten
,ArtikelVariantenAlias
,IdProdPlanung
--,c.Description
,IdAdressen
,x.AdressBez
--, x.Lfdnr
order by x.IdArtikelVarianten

View File

@@ -0,0 +1,29 @@
use [test1_AlplaPROD2.0_Read]
select
customerartno as CustomerArticleNumber
,h.CustomerOrderNumber as CustomerOrderNumber
,l.CustomerLineItemNumber as CustomerLineNumber
,r.CustomerReleaseNumber as CustomerReleaseNumber
,r.Quantity
,format(r.DeliveryDate, 'MM/dd/yyyy HH:mm') as DeliveryDate
,h.CustomerHumanReadableId as CustomerID
,r.Remark
--,*
from [order].[Release] as r (nolock)
left join
[order].LineItem as l (nolock) on
l.id = r.LineItemId
left join
[order].Header as h (nolock) on
h.id = l.HeaderId
WHERE releaseState not in (1, 2, 3, 4)
AND h.CreatedByEdi = 1
AND r.deliveryDate < getdate() + 1
--AND h.CustomerHumanReadableId in (0)
order by r.deliveryDate

View File

@@ -0,0 +1,8 @@
SELECT format(RequirementDate, 'yyyy-MM-dd') as requirementDate
,ArticleHumanReadableId
,CustomerArticleNumber
,ArticleDescription
,Quantity
FROM [test1_AlplaPROD2.0_Read].[forecast].[Forecast]
where DeliveryAddressHumanReadableId in ([customers])
order by RequirementDate

View File

@@ -0,0 +1,64 @@
use [test1_AlplaPROD2.0_Read]
select
ArticleHumanReadableId as article
,ArticleAlias as alias
,round(sum(QuantityLoadingUnits),2) total_pallets
,round(sum(Quantity),2) as total_palletQTY
,round(sum(case when State = 0 then QuantityLoadingUnits else 0 end),2) available_Pallets
,round(sum(case when State = 0 then Quantity else 0 end),2) available_QTY
,round(sum(case when b.HumanReadableId = 864 then QuantityLoadingUnits else 0 end),2) as coa_Pallets
,round(sum(case when b.HumanReadableId = 864 then Quantity else 0 end),2) as coa_QTY
,round(sum(case when b.HumanReadableId <> 864 then QuantityLoadingUnits else 0 end),2) as held_Pallets
,round(sum(case when b.HumanReadableId <> 864 then Quantity else 0 end),2) as held_QTY
,round(sum(case when w.type = 7 then QuantityLoadingUnits else 0 end),2) as consignment_Pallets
,round(sum(case when w.type = 7 then Quantity else 0 end),2) as consignment_qty
--,l.RunningNumber
/** datamart include lot number **/
--,l.MachineLocation,l.MachineName,l.ProductionLotRunningNumber as lot
/** data mart include location data **/
--,l.WarehouseDescription,l.LaneDescription
/** historical section **/
--,l.ProductionLotRunningNumber as lot,l.warehousehumanreadableid as warehouseId,l.WarehouseDescription as warehouseDescription,l.lanehumanreadableid as locationId,l.lanedescription as laneDescription
,articleTypeName
FROM [warehousing].[WarehouseUnit] as l (nolock)
left join
(
SELECT [Id]
,[HumanReadableId]
,d.[Description]
,[DefectGroupId]
,[IsActive]
FROM [blocking].[BlockingDefect] as g (nolock)
left join
[AlplaPROD_test1].dbo.[T_BlockingDefects] as d (nolock) on
d.IdGlobalBlockingDefect = g.HumanReadableId
) as b on
b.id = l.MainDefectId
left join
[warehousing].[warehouse] as w (nolock) on
w.id = l.warehouseid
where LaneHumanReadableId not in (20000,21000)
group by ArticleHumanReadableId,
ArticleAlias,
ArticleTypeName
--,l.RunningNumber
/** datamart include lot number **/
--,l.MachineLocation,l.MachineName,l.ProductionLotRunningNumber
/** data mart include location data **/
--,l.WarehouseDescription,l.LaneDescription
/** historical section **/
--,l.ProductionLotRunningNumber,l.warehousehumanreadableid,l.WarehouseDescription,l.lanehumanreadableid,l.lanedescription
order by ArticleHumanReadableId

View File

@@ -0,0 +1,33 @@
use [test1_AlplaPROD2.0_Read]
select
customerartno
,r.ArticleHumanReadableId as article
,r.ArticleAlias as articleAlias
,ReleaseNumber
,h.CustomerOrderNumber as header
,l.CustomerLineItemNumber as lineItem
,r.CustomerReleaseNumber as releaseNumber
,r.LoadingUnits
,r.Quantity
,r.TradeUnits
,h.CustomerHumanReadableId
,r.DeliveryAddressDescription
,format(r.LoadingDate, 'MM/dd/yyyy HH:mm') as loadingDate
,format(r.DeliveryDate, 'MM/dd/yyyy HH:mm') as deliveryDate
,r.Remark
--,*
from [order].[Release] as r (nolock)
left join
[order].LineItem as l (nolock) on
l.id = r.LineItemId
left join
[order].Header as h (nolock) on
h.id = l.HeaderId
WHERE releasestate not in (1, 2, 4)
AND r.deliverydate between getdate() - [startDay] and getdate() + [endDay]
order by r.deliverydate

View File

@@ -0,0 +1,19 @@
use [test1_AlplaPROD2.0_Reporting]
declare @startDate nvarchar(30) = '[startDate]' --'2024-12-30'
declare @endDate nvarchar(30) = '[endDate]' --'2025-08-09'
select MachineLocation,
ArticleHumanReadableId as article,
sum(Quantity) as Produced,
count(Quantity) as palletsProduced,
FORMAT(convert(date, ProductionDay), 'M/d/yyyy') as ProductionDay,
ProductionLotHumanReadableId as productionLot
from [reporting_productionControlling].[ScannedUnit] (nolock)
where convert(date, ProductionDay) between @startDate and @endDate
and ArticleHumanReadableId in ([articles])
and BookedOut is null
group by MachineLocation, ArticleHumanReadableId,ProductionDay, ProductionLotHumanReadableId

View File

@@ -0,0 +1,23 @@
use AlplaPROD_test1
/**
move this over to the delivery date range query once we have the shift data mapped over correctly.
update the psi stuff on this as well.
**/
declare @start_date nvarchar(30) = '[startDate]' --'2025-01-01'
declare @end_date nvarchar(30) = '[endDate]' --'2025-08-09'
select IdArtikelVarianten,
ArtikelVariantenBez,
sum(Menge) totalDelivered,
case when convert(time, upd_date) between '00:00' and '07:00' then convert(date, upd_date - 1) else convert(date, upd_date) end as ShippingDate
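-- anything recorded between 00:00 and 07:00 rolls back to the prior calendar day, so totals line up with the plant's 07:00 shift change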
from dbo.V_LadePlanungenLadeAuftragAbruf (nolock)
where upd_date between CONVERT(datetime, @start_date + ' 7:00') and CONVERT(datetime, @end_date + ' 7:00')
and IdArtikelVarianten in ([articles])
group by IdArtikelVarianten, upd_date,
ArtikelVariantenBez

View File

@@ -0,0 +1,32 @@
use AlplaPROD_test1
declare @start_date nvarchar(30) = '[startDate]' --'2025-01-01'
declare @end_date nvarchar(30) = '[endDate]' --'2025-08-09'
/*
articles will need to be passed over as well as the date structure we want to see
*/
select x.IdArtikelvarianten As Article,
ProduktionAlias as Description,
standort as MachineId,
MaschinenBezeichnung as MachineName,
--MaschZyklus as PlanningCycleTime,
x.IdProdPlanung as LotNumber,
FORMAT(ProdTag, 'MM/dd/yyyy') as ProductionDay,
x.planMenge as TotalPlanned,
ProduktionMenge as QTYPerDay,
round(ProduktionMengeVPK, 2) PalDay,
Status as finished
--MaschStdAuslastung as nee
from dbo.V_ProdLosProduktionJeProdTag_PLANNING (nolock) as x
left join
dbo.V_ProdPlanung (nolock) as p on
x.IdProdPlanung = p.IdProdPlanung
where ProdTag between @start_date and @end_date
and p.IdArtikelvarianten in ([articles])
--and V_ProdLosProduktionJeProdTag_PLANNING.IdKunde = 10
--and IdProdPlanung = 18442
order by ProdTag desc

View File

@@ -0,0 +1,44 @@
use [test1_AlplaPROD2.0_Read]
SELECT
'Alert! new blocking order: #' + cast(bo.HumanReadableId as varchar) + ' - ' + bo.ArticleVariantDescription as subject
,cast(bo.[HumanReadableId] as varchar) as blockingNumber
,bo.[ArticleVariantDescription] as article
,cast(bo.[CustomerHumanReadableId] as varchar) + ' - ' + bo.[CustomerDescription] as customer
,convert(varchar(10), bo.[BlockingDate], 101) + ' ' + convert(varchar(5), bo.[BlockingDate], 108) as blockingDate
,cast(ArticleVariantHumanReadableId as varchar) + ' - ' + ArticleVariantDescription as av
,case when bo.Remark = '' or bo.Remark is NULL then 'Please reach out to quality for the reason this was placed on hold as a remark was not entered during the blocking process' else bo.Remark end as remark
,cast(FORMAT(TotalAmountOfPieces, '###,###') as varchar) + ' / ' + cast(LoadingUnit as varchar) as piecesAndLoadingUnits
,bo.ProductionLotHumanReadableId as lotNumber
,cast(osd.IdBlockingDefectsGroup as varchar) + ' - ' + osd.Description as mainDefectGroup
,cast(df.HumanReadableId as varchar) + ' - ' + os.Description as mainDefect
,lot.MachineLocation as line
--,*
FROM [blocking].[BlockingOrder] (nolock) as bo
/*** get the defect details ***/
join
[blocking].[BlockingDefect] (nolock) AS df
on df.id = bo.MainDefectId
/*** pull description from 1.0 ***/
left join
[AlplaPROD_test1].[dbo].[T_BlockingDefects] (nolock) as os
on os.IdGlobalBlockingDefect = df.HumanReadableId
/*** join in 1.0 defect group ***/
left join
[AlplaPROD_test1].[dbo].[T_BlockingDefectsGroups] (nolock) as osd
on osd.IdBlockingDefectsGroup = os.IdBlockingDefectsGroup
left join
[productionControlling].[ProducedLot] (nolock) as lot
on lot.id = bo.ProductionLotId
where
bo.[BlockingDate] between getdate() - 2 and getdate() + 3 and
bo.BlockingTrigger = 1 -- so we only get the IR blocking and not COA
--and HumanReadableId NOT IN ([sentBlockingOrders])
and bo.HumanReadableId > [lastBlocking]

View File

@@ -0,0 +1,28 @@
use [test1_AlplaPROD2.0_Read]
SELECT
--JSON_VALUE(content, '$.EntityId') as labelId
a.id
,ActorName
,FORMAT(PrintDate, 'yyyy-MM-dd HH:mm') as printDate
,FORMAT(CreatedDateTime, 'yyyy-MM-dd HH:mm') createdDateTime
,l.ArticleHumanReadableId as av
,l.ArticleDescription as alias
,PrintedCopies
,p.name as printerName
,RunningNumber
--,*
FROM [support].[AuditLog] (nolock) as a
left join
[labelling].[InternalLabel] (nolock) as l on
l.id = JSON_VALUE(content, '$.EntityId')
left join
[masterData].[printer] (nolock) as p on
p.id = l.PrinterId
where message like '%reprint%'
and CreatedDateTime > DATEADD(minute, -[intervalCheck], SYSDATETIMEOFFSET())
and a.id > [ignoreList]
order by CreatedDateTime desc

View File

@@ -0,0 +1,4 @@
select top(1) convert(varchar(8), convert(time, startdate), 108) as shiftChange
from [test1_AlplaPROD2.0_Read].[masterData].[ShiftDefinition]
where teamNumber = 1

View File

@@ -0,0 +1,125 @@
import { gpQuery } from "../gpSql/gpSqlQuery.controller.js";
import {
type SqlGPQuery,
sqlGpQuerySelector,
} from "../gpSql/gpSqlQuerySelector.utils.js";
import { createLogger } from "../logger/logger.controller.js";
import type { GpStatus } from "../types/purhcaseTypes.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
const log = createLogger({ module: "purchase", subModule: "gp" });
export const gpReqCheck = async (data: GpStatus[]) => {
const gpReqCheck = sqlGpQuerySelector("reqCheck") as SqlGPQuery;
const reqs = data.map((r) => r.req.trim());
if (!gpReqCheck.success) {
return returnFunc({
success: false,
level: "error",
module: "purchase",
subModule: "query",
message: `Error getting alpla purchase info`,
data: gpReqCheck.message as any,
notify: true,
});
}
try {
// check the initial req table
const result = await gpQuery(
gpReqCheck.query.replace(
"[reqsToCheck]",
reqs.map((r) => `'${r}'`).join(", ") || "''", // reqs is already trimmed; use || for the empty-list fallback since join never returns null
),
"Get req info",
);
log.debug(
{},
`There are ${result.data.length} reqs that need to be updated with their current status`,
);
const firstFound = result.data.map((r) => ({
req: r.req.trim(),
approvedStatus: r.approvedStatus,
}));
const firstFoundSet = new Set(result.data.map((r) => r.req.trim()));
const missing1Reqs = reqs.filter((req) => !firstFoundSet.has(req));
//check if we have a recall on our req
const reqCheck = await gpQuery(
`select
[Requisition Number] as req
,case when [Workflow Status] = 'recall' then 'returned' else [Workflow Status] end as approvedStatus
--,*
from [dbo].[PurchaseRequisitions] where [Requisition Number] in (${missing1Reqs.map((r) => `'${r}'`).join(", ") || "''"})`,
"validate req is not in recall",
);
const secondFound = reqCheck.data.map((r) => ({
req: r.req.trim(),
approvedStatus: r.approvedStatus,
}));
const secondFoundSet = new Set(reqCheck.data.map((r) => r.req.trim())); // a Set is never nullish, so no ?? fallback is needed
const missing2Reqs = missing1Reqs.filter((req) => !secondFoundSet.has(req));
// check if we have a po already
const apoCheck = await gpQuery(
`select
SOPNUMBE
,PONUMBER
,reqStatus='converted'
,*
from alpla.dbo.sop60100 (nolock) where sopnumbe in (${missing2Reqs.map((r) => `'${r}'`).join(", ") || "''"})`,
"Get release info",
);
const thirdRound = apoCheck.data.map((r) => ({
req: r.SOPNUMBE.trim(), // the PO query returns SOPNUMBE, not req
approvedStatus: r.reqStatus, // aliased to 'converted' in the query above
}));
const thirdFoundSet = new Set(thirdRound.map((r) => r.req));
const missing3Reqs = missing2Reqs.filter((req) => !thirdFoundSet.has(req));
// remaining just got canceled or no longer exist
const remaining = missing3Reqs.map((m) => ({
req: m,
approvedStatus: "canceled",
}));
const allFound = [
...firstFound,
...secondFound,
...thirdRound,
...remaining,
];
const statusMap = new Map(
allFound.map((r: any) => [r.req, r.approvedStatus]),
);
const updateData = data.map((row) => ({
id: row.id,
//req: row.req,
approvedStatus: statusMap.get(row.req.trim()) ?? null,
}));
return updateData;
} catch (error: any) {
return returnFunc({
success: false,
level: "error",
module: "purchase",
subModule: "gpChecks",
message: error.message,
data: error.stack as any,
notify: true,
});
}
};
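
A minimal usage sketch of the lookup chain above (the id and req values are hypothetical):

const updates = await gpReqCheck([
{ id: "42", req: "REQ0012345" }, // hypothetical GP requisition number
]);
// -> [{ id: "42", approvedStatus: e.g. "pending" | "returned" | "converted" | "canceled" }]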

View File

@@ -0,0 +1,232 @@
/**
* This will monitor alpla purchase
*/
import { eq, sql } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import {
alplaPurchaseHistory,
type NewAlplaPurchaseHistory,
} from "../db/schema/alplapurchase.schema.js";
import { settings } from "../db/schema/settings.schema.js";
import { createLogger } from "../logger/logger.controller.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import type { GpStatus, StatusUpdate } from "../types/purhcaseTypes.js";
import { createCronJob } from "../utils/croner.utils.js";
import { delay } from "../utils/delay.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { gpReqCheck } from "./puchase.gpCheck.js";
const log = createLogger({ module: "purchase", subModule: "purchaseMonitor" });
export const monitorAlplaPurchase = async () => {
const purchaseMonitor = await db
.select()
.from(settings)
.where(eq(settings.name, "purchaseMonitor"));
const sqlQuery = sqlQuerySelector(`alplapurchase`) as SqlQuery;
if (!sqlQuery.success) {
return returnFunc({
success: false,
level: "error",
module: "purchase",
subModule: "query",
message: `Error getting alpla purchase info`,
data: sqlQuery.message as any,
notify: true,
});
}
if (purchaseMonitor[0]?.active) {
createCronJob("purchaseMonitor", "0 */5 * * * *", async () => {
try {
const result = await prodQuery(
sqlQuery.query.replace(
"[interval]",
`${purchaseMonitor[0]?.value || "5"}`,
),
"Get release info",
);
log.debug(
{},
`There are ${result.data.length} pending updates from the last ${purchaseMonitor[0]?.value} minutes`,
);
if (result.data.length) {
const convertedData = result.data.map((i) => ({
...i,
position: JSON.parse(i.position),
})) as NewAlplaPurchaseHistory[];
const { data, error } = await tryCatch(
db.insert(alplaPurchaseHistory).values(convertedData).returning(),
);
if (data) {
log.debug(
{ data },
"New data was just added to alpla purchase history",
);
}
if (error) {
log.error(
{ error, notify: true },
"There was an error adding alpla purchase history",
);
}
await delay(500);
}
} catch (e) {
log.error(
{ error: e, notify: true },
"Error occurred while running the monitor job",
);
return;
}
// re-pull everything whose approvedStatus is still "new"
const { data: allReq, error: errorReq } = await tryCatch(
db
.select()
.from(alplaPurchaseHistory)
.where(eq(alplaPurchaseHistory.approvedStatus, "new")),
);
// bail out if the history lookup errored; if there are no reqs we just end meow
if (errorReq) {
log.error(
{ stack: errorReq, notify: true },
"There was an error getting history data",
);
return;
}
log.debug({}, `There are ${allReq.length} pending reqs to be updated`);
if (!allReq.length) {
log.debug({}, "There are not reqs to be processed");
return;
}
/**
* approvedStatus mapping:
* remark = '' -> initial (pending req / manual po)
* remark contains "rct" -> received
* remark contains "apo" -> approved
* anything else -> deferred to the GP lookup below
*/
// the flow for all the fun stuff
const needsGpLookup: GpStatus[] = [];
const updates: StatusUpdate[] = [];
for (const row of allReq ?? []) {
const remark = row.remark?.toLowerCase() ?? "";
if (remark === "") {
updates.push({ id: row.id, approvedStatus: "initial" });
continue;
}
if (remark.includes("rct")) {
updates.push({ id: row.id, approvedStatus: "received" });
continue;
}
if (remark.includes("apo")) {
updates.push({ id: row.id, approvedStatus: "approved" });
continue;
}
// not handled locally, defer to GP lookup
needsGpLookup.push({ id: row.id, req: row.remark?.trim() ?? "" });
}
const gpSmash = (await gpReqCheck(needsGpLookup)) as StatusUpdate[];
const merge = [...updates, ...gpSmash];
if (merge.length > 0) {
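// builds a single statement of the shape:
// UPDATE ... SET approved_status = CASE WHEN id = ? THEN ? ... ELSE approved_status END WHERE id IN (?, ...)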
await db.execute(sql`
UPDATE ${alplaPurchaseHistory}
SET approved_status = CASE
${sql.join(
merge.map(
(row) =>
sql`WHEN ${alplaPurchaseHistory.id} = ${row.id} THEN ${row.approvedStatus}`,
),
sql` `,
)}
ELSE approved_status
END,
updated_at = NOW()
WHERE ${alplaPurchaseHistory.id} IN (
${sql.join(
merge.map((row) => sql`${row.id}`),
sql`, `,
)}
)
`);
log.info(
{},
"All alpla purchase orders have been processed and updated",
);
}
// for reqs: build a string of req numbers, run them through the GP req table to get their status, then update LST as we see fit.
// then double check we have all reqs covered; for the reqs missing from above, re-string them and check the PO table -
// those get marked as converted to PO.
// for the remaining reqs, check the actual req table: a workflow status of Recall means purchasing requested a change and it needs to be re-approved.
// all remaining reqs get changed to replaced/canceled.
});
}
};
// const updates = (allReq ?? [])
// .map((row) => {
// const remark = row.remark?.toLowerCase() ?? "";
// let approvedStatus: string | null = null;
// // priority order matters here
// if (remark === "") {
// approvedStatus = "initial";
// } else if (remark.includes("rct")) {
// approvedStatus = "received";
// } else if (remark.includes("apo")) {
// approvedStatus = "approved";
// }
// // add your next 4 checks here
// // else if (...) approvedStatus = "somethingElse";
// if (!approvedStatus) return null;
// return {
// id: row.id,
// approvedStatus,
// };
// })
// .filter(
// (
// row,
// ): row is {
// id: string;
// approvedStatus: string;
// } => row !== null,
// );

View File

@@ -4,11 +4,13 @@ import { setupAuthRoutes } from "./auth/auth.routes.js";
// import the routes and route setups
import { setupApiDocsRoutes } from "./configs/scaler.config.js";
import { setupDatamartRoutes } from "./datamart/datamart.routes.js";
import { setupGPSqlRoutes } from "./gpSql/gpSql.routes.js";
import { setupNotificationRoutes } from "./notification/notification.routes.js";
import { setupOCPRoutes } from "./ocp/ocp.routes.js";
import { setupOpendockRoutes } from "./opendock/opendock.routes.js";
import { setupProdSqlRoutes } from "./prodSql/prodSql.routes.js";
import { setupSystemRoutes } from "./system/system.routes.js";
import { setupTCPRoutes } from "./tcpServer/tcp.routes.js";
import { setupUtilsRoutes } from "./utils/utils.routes.js";
export const setupRoutes = (baseUrl: string, app: Express) => {
@@ -16,10 +18,12 @@ export const setupRoutes = (baseUrl: string, app: Express) => {
setupSystemRoutes(baseUrl, app);
setupApiDocsRoutes(baseUrl, app);
setupProdSqlRoutes(baseUrl, app);
setupGPSqlRoutes(baseUrl, app);
setupDatamartRoutes(baseUrl, app);
setupAuthRoutes(baseUrl, app);
setupUtilsRoutes(baseUrl, app);
setupOpendockRoutes(baseUrl, app);
setupNotificationRoutes(baseUrl, app);
setupOCPRoutes(baseUrl, app);
setupTCPRoutes(baseUrl, app);
};

View File

@@ -4,15 +4,21 @@ import createApp from "./app.js";
import { db } from "./db/db.controller.js";
import { dbCleanup } from "./db/dbCleanup.controller.js";
import { type Setting, settings } from "./db/schema/settings.schema.js";
import { connectGPSql } from "./gpSql/gpSqlConnection.controller.js";
import { createLogger } from "./logger/logger.controller.js";
import { historicalSchedule } from "./logistics/logistics.historicalInv.js";
import { startNotifications } from "./notification/notification.controller.js";
import { createNotifications } from "./notification/notifications.master.js";
import { printerSync } from "./ocp/ocp.printer.manage.js";
import { monitorReleaseChanges } from "./opendock/openDockRreleaseMonitor.utils.js";
import { opendockSocketMonitor } from "./opendock/opendockSocketMonitor.utils.js";
import { connectProdSql } from "./prodSql/prodSqlConnection.controller.js";
import { monitorAlplaPurchase } from "./purchase/purchase.controller.js";
import { setupSocketIORoutes } from "./socket.io/serverSetup.js";
import { baseSettingValidationCheck } from "./system/settingsBase.controller.js";
import { startTCPServer } from "./tcpServer/tcp.server.js";
import { createCronJob } from "./utils/croner.utils.js";
import { sendEmail } from "./utils/sendEmail.utils.js";
const port = Number(process.env.PORT) || 3000;
export let systemSettings: Setting[] = [];
@@ -26,7 +32,9 @@ const start = async () => {
const log = createLogger({ module: "system", subModule: "main start" });
// triggering long lived processes
startTCPServer();
connectProdSql();
connectGPSql();
// trigger startup processes; these must run before anything else can run
await baseSettingValidationCheck();
@@ -36,7 +44,7 @@ const start = async () => {
// also we always want to have long lived processes inside a setting check.
setTimeout(() => {
if (systemSettings.filter((n) => n.name === "opendock_sync")[0]?.active) {
log.info({}, "Opendock is not active");
log.info({}, "Opendock is active");
monitorReleaseChanges(); // this is od monitoring the db for all new releases
opendockSocketMonitor();
createCronJob("opendockAptCleanup", "0 30 5 * * *", () =>
@@ -44,17 +52,43 @@ const start = async () => {
);
}
if (systemSettings.filter((n) => n.name === "purchaseMonitor")[0]?.active) {
monitorAlplaPurchase();
}
if (systemSettings.filter((n) => n.name === "ocp")[0]?.active) {
printerSync();
}
// these jobs below are system jobs and should run no matter what.
createCronJob("JobAuditLogCleanUp", "0 0 5 * * *", () =>
dbCleanup("jobs", 30),
);
createCronJob("logsCleanup", "0 15 5 * * *", () => dbCleanup("logs", 120));
historicalSchedule();
// one-shots that only need to run on server startup
createNotifications();
startNotifications();
}, 5 * 1000);
process.on("uncaughtException", async (err) => {
console.error("Uncaught Exception:", err);
//await closePool();
const emailData = {
email: "blake.matthes@alpla.com", // should be moved to the db so it can be reused.
subject: `${os.hostname()} has just encountered a crash.`,
template: "serverCrash",
context: {
error: err,
plant: `${os.hostname()}`,
},
};
await sendEmail(emailData);
//process.exit(1);
});
server.listen(port, async () => {
log.info(
`Listening on http://${os.hostname()}:${port}${baseUrl}, logging in ${process.env.LOG_LEVEL}, current ENV ${process.env.NODE_ENV ? process.env.NODE_ENV : "development"}`,

View File

@@ -8,7 +8,7 @@ const newSettings: NewSetting[] = [
// feature settings
{
name: "opendock_sync",
value: "0",
value: "15",
active: false,
description: "Dock Scheduling system",
moduleName: "opendock",
@@ -66,6 +66,16 @@ const newSettings: NewSetting[] = [
roles: ["admin"],
seedVersion: 1,
},
{
name: "purchaseMonitor",
value: "5",
active: true,
description: "Monitors alpla purchase fo all changes",
moduleName: "purchase",
settingType: "feature",
roles: ["admin"],
seedVersion: 1,
},
// standard settings
{

View File

@@ -10,6 +10,7 @@ import {
killOpendockSocket,
opendockSocketMonitor,
} from "../opendock/opendockSocketMonitor.utils.js";
import { monitorAlplaPurchase } from "../purchase/purchase.controller.js";
import {
createCronJob,
resumeCronJob,
@@ -31,8 +32,24 @@ export const featureControl = async (data: Setting) => {
createCronJob("opendockAptCleanup", "0 30 5 * * *", () =>
dbCleanup("opendockApt", 90),
);
} else {
}
if (data.name === "opendock_sync" && !data.active) {
killOpendockSocket();
stopCronJob("opendockAptCleanup");
}
// purchase stuff
if (data.name === "purchaseMonitor" && data.active) {
monitorAlplaPurchase();
}
if (data.name === "purchaseMonitor" && !data.active) {
stopCronJob("purchaseMonitor");
}
// this means the interval value has changed, so re-create the job with the new timing
if (data.name === "purchaseMonitor" && data.value) {
monitorAlplaPurchase();
}
};

View File

@@ -1,9 +1,12 @@
import { Router } from "express";
import { connected as gpSql } from "../gpSql/gpSqlConnection.controller.js";
import { connected as prodSql } from "../prodSql/prodSqlConnection.controller.js";
import { prodQuery } from "../prodSql/prodSqlQuery.controller.js";
import {
type SqlQuery,
sqlQuerySelector,
} from "../prodSql/prodSqlQuerySelector.utils.js";
import { isServerRunning } from "../tcpServer/tcp.server.js";
const router = Router();
@@ -25,6 +28,9 @@ router.get("/", async (_, res) => {
: [],
eomFGPkgSheetVersion: 1, // this is the excel file version when we have a change to the macro we want to grab this
masterMacroFile: 1,
tcpServerOnline: isServerRunning,
sqlServerConnected: prodSql,
gpServerConnected: gpSql,
});
});

View File

@@ -0,0 +1,51 @@
import { db } from "../db/db.controller.js";
import { printerLog } from "../db/schema/printerLogs.schema.js";
import { createLogger } from "../logger/logger.controller.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
export type PrinterData = {
ip: string;
name: string;
condition: string;
message: string;
date?: string;
printerSN: string;
};
const log = createLogger({ module: "tcp", submodule: "create_server" });
export const printerListen = async (tcpData: PrinterData) => {
const ip = tcpData.ip?.replace("::ffff:", "");
// post the new message
const { data, error } = await tryCatch(
db
.insert(printerLog)
.values({
ip,
name: tcpData.name,
condition: tcpData.condition,
message: tcpData.message,
printerSN: tcpData.printerSN,
})
.returning(),
);
if (error) {
return returnFunc({
success: false,
level: "error",
module: "tcp",
subModule: "post",
message: "Failed to post tcp printer data.",
data: [],
notify: false,
});
}
if (data) {
log.info({}, `${tcpData.name} sent a message over`);
// TODO: send message over to the controller to decide what to do next with it
}
};

View File

@@ -0,0 +1,14 @@
import type { Express } from "express";
import { requireAuth } from "../middleware/auth.middleware.js";
import restart from "./tcpRestart.route.js";
import start from "./tcpStart.route.js";
import stop from "./tcpStop.route.js";
export const setupTCPRoutes = (baseUrl: string, app: Express) => {
// the paths will stay like this as we don't need to change them
app.use(`${baseUrl}/api/tcp/start`, requireAuth, start);
app.use(`${baseUrl}/api/tcp/stop`, requireAuth, stop);
app.use(`${baseUrl}/api/tcp/restart`, requireAuth, restart);
// all other system should be under /api/system/*
};

View File

@@ -0,0 +1,180 @@
import net from "node:net";
import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { printerData } from "../db/schema/printers.schema.js";
import { createLogger } from "../logger/logger.controller.js";
import { delay } from "../utils/delay.utils.js";
import { returnFunc } from "../utils/returnHelper.utils.js";
import { tryCatch } from "../utils/trycatch.utils.js";
import { type PrinterData, printerListen } from "./tcp.printerListener.js";
let tcpServer: net.Server;
const tcpSockets: Set<net.Socket> = new Set();
export let isServerRunning = false;
const port = parseInt(process.env.TCP_PORT ?? "2222", 10);
const parseTcpAlert = (input: string) => {
// guard
const colonIndex = input.indexOf(":");
if (colonIndex === -1) return null;
const condition = input.slice(0, colonIndex).trim();
const rest = input.slice(colonIndex + 1).trim();
// extract all [ ... ] blocks from rest
const matches = [...rest.matchAll(/\[(.*?)\]/g)];
const date = matches[0]?.[1] ?? "";
const name = matches[1]?.[1] ?? "";
// message = everything before first "["
const bracketIndex = rest.indexOf("[");
const message =
bracketIndex !== -1 ? rest.slice(0, bracketIndex).trim() : rest;
return {
condition,
message,
date,
name,
};
};
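// a quick sketch of the expected wire format (hypothetical alert string):
// parseTcpAlert("WARNING: Ribbon low [04/13/2026 12:00] [PRT-01]")
// -> { condition: "WARNING", message: "Ribbon low", date: "04/13/2026 12:00", name: "PRT-01" }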
const log = createLogger({ module: "tcp", submodule: "create_server" });
export const startTCPServer = async () => {
tcpServer = net.createServer(async (socket) => {
tcpSockets.add(socket);
socket.on("data", async (data: Buffer) => {
const parseData = data.toString("utf-8").trimEnd();
// check where the data came from then we do something.
const ip = socket.remoteAddress ?? "127.0.0.1";
const { data: printer, error: pError } = await tryCatch(
db
.select()
.from(printerData)
.where(eq(printerData.ipAddress, ip.replace("::ffff:", ""))),
);
if (pError) {
log.error(
{ stack: pError },
"There was an error getting printer data for tcp check",
);
return;
}
if (printer?.length) {
// renamed from printerData so we don't shadow the table import used in the select above (which would throw at runtime)
const printerPayload = {
...parseTcpAlert(parseData),
ip,
printerSN: printer[0]?.printerSN,
name: printer[0]?.name,
};
printerListen(printerPayload as PrinterData);
}
});
socket.on("end", () => {
log.debug({}, "Client disconnected");
// just in case we dont fully disconnect
setTimeout(() => {
if (!socket.destroyed) {
socket.destroy();
}
}, 1000);
tcpSockets.delete(socket);
});
socket.on("error", (err: Error) => {
log.error({ stack: err }, `Socket error: ${err.message}`);
// just in case we dont fully disconnect
setTimeout(() => {
if (!socket.destroyed) {
socket.destroy();
}
}, 1000);
tcpSockets.delete(socket);
});
});
tcpServer.listen(port, () => {
log.info({}, `TCP Server listening on port ${port}`);
});
isServerRunning = true;
return returnFunc({
success: true,
level: "info",
module: "tcp",
subModule: "create_server",
message: "TCP server started.",
data: [],
notify: false,
room: "",
});
};
export const stopTCPServer = async () => {
if (!isServerRunning)
return { success: false, message: "Server is not running" };
for (const socket of tcpSockets) {
socket.destroy();
}
tcpSockets.clear();
tcpServer.close(() => {
log.info({}, "TCP Server stopped");
});
isServerRunning = false;
return returnFunc({
success: true,
level: "info",
module: "tcp",
subModule: "create_server",
message: "TCP server stopped.",
data: [],
notify: false,
room: "",
});
};
export const restartTCPServer = async () => {
if (!isServerRunning) {
startTCPServer();
return returnFunc({
success: false,
level: "warn",
module: "tcp",
subModule: "create_server",
message: "Server is not running will try to start it",
data: [],
notify: false,
room: "",
});
} else {
for (const socket of tcpSockets) {
socket.destroy();
}
tcpSockets.clear();
tcpServer.close(() => {
log.info({}, "TCP Server stopped");
});
isServerRunning = false;
await delay(1500);
startTCPServer();
}
return returnFunc({
success: true,
level: "info",
module: "tcp",
subModule: "create_server",
message: "TCP server has been restarted.",
data: [],
notify: false,
room: "",
});
};

View File

@@ -0,0 +1,19 @@
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { restartTCPServer } from "./tcp.server.js";
const r = Router();
r.post("/restart", async (_, res) => {
const connect = await restartTCPServer();
apiReturn(res, {
success: connect.success,
level: connect.success ? "info" : "error",
module: "tcp",
subModule: "post",
message: "TCP Server has been restarted",
data: connect.data,
status: connect.success ? 200 : 400,
});
});
export default r;

View File

@@ -0,0 +1,20 @@
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { startTCPServer } from "./tcp.server.js";
const r = Router();
r.post("/start", async (_, res) => {
const connect = await startTCPServer();
apiReturn(res, {
success: connect.success,
level: connect.success ? "info" : "error",
module: "routes",
subModule: "prodSql",
message: connect.message,
data: connect.data,
status: connect.success ? 200 : 400,
});
});
export default r;

View File

@@ -0,0 +1,20 @@
import { Router } from "express";
import { apiReturn } from "../utils/returnHelper.utils.js";
import { stopTCPServer } from "./tcp.server.js";
const r = Router();
r.post("/stop", async (_, res) => {
const connect = await stopTCPServer();
apiReturn(res, {
success: connect.success,
level: connect.success ? "info" : "error",
module: "routes",
subModule: "prodSql",
message: connect.message,
data: [],
status: connect.success ? 200 : 400,
});
});
export default r;

View File

@@ -0,0 +1,9 @@
export type GpStatus = {
id: string;
req: string;
};
export type StatusUpdate = {
id: string;
approvedStatus: string;
};

View File

@@ -3,6 +3,7 @@ import { eq } from "drizzle-orm";
import { db } from "../db/db.controller.js";
import { jobAuditLog } from "../db/schema/auditLog.schema.js";
import { createLogger } from "../logger/logger.controller.js";
import type { ReturnHelper } from "./returnHelper.utils.js";
// example createJob
// createCronJob("test Cron", "*/5 * * * * *", async () => {
@@ -18,7 +19,9 @@ export interface JobInfo {
// Store running cronjobs
export const runningCrons: Record<string, Cron> = {};
const activeRuns = new Set<string>();
const log = createLogger({ module: "system", subModule: "croner" });
const cronStats: Record<string, { created: number; replaced: number }> = {};
// how to set the times
// * ┌──────────────── (optional) second (0 - 59)
@@ -38,17 +41,36 @@ const log = createLogger({ module: "system", subModule: "croner" });
* @param name Name of the job we want to run
* @param schedule Cron expression (example: `*\/5 * * * * *`)
* @param task Async function that will run
* @param source we can add where it came from to assist in getting this tracked down, more for debugging
*/
export const createCronJob = async (
name: string,
schedule: string, // cron string with 6 fields, e.g. */5 * * * * * = every 5th second
task: () => Promise<void>, // what function are we passing over
task: () => Promise<void | ReturnHelper>, // what function are we passing over
source = "unknown",
) => {
// get the timezone based on the os timezone set
const timeZone = Intl.DateTimeFormat().resolvedOptions().timeZone;
// first time seeing this job, so just store it; this is mostly for debugging if something crazy keeps happening
if (!cronStats[name]) {
cronStats[name] = { created: 0, replaced: 0 };
}
// Destroy existing job if it exists
if (runningCrons[name]) {
cronStats[name].replaced += 1;
log.warn(
{
job: name,
source,
oldSchedule: runningCrons[name].getPattern?.(),
newSchedule: schedule,
replaceCount: cronStats[name].replaced,
},
`Cron job "${name}" already existed and is being replaced`,
);
runningCrons[name].stop();
}
@@ -61,6 +83,13 @@ export const createCronJob = async (
name: name,
},
async () => {
if (activeRuns.has(name)) {
log.warn({ jobName: name }, "Skipping overlapping cron execution");
return;
}
activeRuns.add(name);
const startedAt = new Date();
const start = Date.now();
@@ -91,14 +120,19 @@ export const createCronJob = async (
.where(eq(jobAuditLog.id, executionId));
} catch (e: any) {
if (executionId) {
await db.update(jobAuditLog).set({
finishedAt: new Date(),
durationMs: Date.now() - start,
status: "error",
errorMessage: e.message,
errorStack: e.stack,
});
await db
.update(jobAuditLog)
.set({
finishedAt: new Date(),
durationMs: Date.now() - start,
status: "error",
errorMessage: e.message,
errorStack: e.stack,
})
.where(eq(jobAuditLog.id, executionId));
}
} finally {
activeRuns.delete(name);
}
},
);

View File

@@ -0,0 +1,73 @@
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
{{!-- <link rel="stylesheet" href="styles/styles.css" /> --}}
<style>
.email-wrapper {
max-width: 80%; /* Limit width to 80% of the window */
margin: 0 auto; /* Center the content horizontally */
}
.email-table {
width: 100%;
border-collapse: collapse;
}
.email-table td {
vertical-align: top;
padding: 10px;
border: 1px solid #000;
border-radius: 25px; /* Rounded corners */
background-color: #f0f0f0; /* Optional: Add a background color */
}
.email-table h2 {
margin: 0;
}
.remarks {
border: 1px solid black;
padding: 10px;
background-color: #f0f0f0;
border-radius: 25px;
}
</style>
</head>
<body>
<div class="email-wrapper">
<p>All,</p>
<p>Please see the new blocking order that was created.</p>
<div>
<div class="email-table">
<table>
<tr>
<td>
<p><strong>Blocking number: </strong>{{items.blockingNumber}}</p>
<p><strong>Blocking Date: </strong>{{items.blockingDate}}</p>
<p><strong>Article: </strong>{{items.av}}</p>
<p><strong>Production Lot: </strong>{{items.lotNumber}}</p>
<p><strong>Line: </strong>{{items.line}}</p>
</td>
<td>
<p><strong>Customer: </strong>{{items.customer}}</p>
<p><strong>Blocked pieces /LUs: </strong>{{items.piecesAndLoadingUnits}}</p>
<p><strong>Main defect group: </strong>{{items.mainDefectGroup}}</p>
<p><strong>Main defect: </strong>{{items.mainDefect}}</p>
</td>
</tr>
</table>
</div>
</div>
<div class="remarks">
<h4>Remarks:</h4>
<p>{{items.remark}}</p>
</div>
</div>
<br>
<p>For further questions please reach out to quality.</p>
<p>Thank you,</p>
<p>Quality Department</p>
</div>
</body>
</html>

View File

@@ -0,0 +1,47 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
{{!-- <link rel="stylesheet" href="styles/styles.css" /> --}}
{{> styles}}
</head>
<body>
<p>All,</p>
<p>The below labels have been reprinted.</p>
<table >
<thead>
<tr>
<th>AV</th>
<th>Description</th>
<th>Label Number</th>
<th>Date Added</th>
<th>Date Reprinted</th>
<th>Who printed/Updated</th>
<th>What printer it came from</th>
</tr>
</thead>
<tbody>
{{#each items}}
<tr>
<td>{{av}}</td>
<td>{{alias}}</td>
<td>{{RunningNumber}}</td>
<td>{{printDate}}</td>
<td>{{createdDateTime}}</td>
<td>{{ActorName}}</td>
<td>{{printerName}}</td>
</tr>
{{/each}}
</tbody>
</table>
<div>
<p>Thank you,</p>
<p>LST Team</p>
</div>
</body>
</html>

View File

@@ -0,0 +1,35 @@
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
{{!--<title>Order Summary</title> --}}
{{> styles}}
<style>
pre {
background-color: #f8f9fa;
color: #d63384;
padding: 10px;
border-radius: 5px;
white-space: pre-wrap;
font-family: monospace;
}
</style>
{{!-- <link rel="stylesheet" href="styles/styles.css" /> --}}
</head>
<body>
<h3>{{plant}}<br/> has encountered an unexpected error.</h3>
<p>
Please see below the stack error from the crash.
</p>
<hr/>
<div>
<h3>Error Message: </h3>
<p>{{error.message}}</p>
</div>
<hr/>
<div>
<h3>Stack trace</h3>
<pre>{{{error.stack}}}</pre>
</div>
</body>
</html>

View File

@@ -0,0 +1,36 @@
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
{{!--<title>Order Summary</title> --}}
{{> styles}}
<style>
pre {
background-color: #f8f9fa;
color: #d63384;
padding: 10px;
border-radius: 5px;
white-space: pre-wrap;
font-family: monospace;
}
</style>
{{!-- <link rel="stylesheet" href="styles/styles.css" /> --}}
</head>
<body>
<h3>{{plant}}<br/> has encountered an error.</h3>
<p>
The below error came from Module: {{module}}, Submodule: {{submodule}}.
</p>
<p>The error below is considered critical and should be addressed.</p>
<hr/>
<div>
<h3>Error Message: </h3>
<p>{{message}}</p>
</div>
<hr/>
<div>
<h3>Stack trace</h3>
<pre>{{{error}}}</pre>
</div>
</body>
</html>

View File

@@ -0,0 +1,41 @@
import pkg from "pg";
const { Pool } = pkg;
const baseConfig = {
host: process.env.DATABASE_HOST ?? "localhost",
port: parseInt(process.env.DATABASE_PORT ?? "5433", 10),
user: process.env.DATABASE_USER,
password: process.env.DATABASE_PASSWORD,
};
// Pools (one per DB)
const v1Pool = new Pool({
...baseConfig,
database: "lst",
});
const v2Pool = new Pool({
...baseConfig,
database: "lst_db",
});
// Query helpers
export const v1QueryRun = async (query: string, params?: any[]) => {
try {
const res = await v1Pool.query(query, params);
return res;
} catch (err) {
console.error("V1 query error:", err);
throw err;
}
};
export const v2QueryRun = async (query: string, params?: any[]) => {
try {
const res = await v2Pool.query(query, params);
return res;
} catch (err) {
console.error("V2 query error:", err);
throw err;
}
};
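
A minimal usage sketch with a pg-style positional parameter (the table and setting name are hypothetical):

const res = await v1QueryRun(
"select * from settings where name = $1", // $1 is bound from the params array
["purchaseMonitor"],
);
console.log(res.rows);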

View File

@@ -0,0 +1,124 @@
import https from "node:https";
import axios from "axios";
import { returnFunc } from "./returnHelper.utils.js";
import { tryCatch } from "./trycatch.utils.js";
type bodyData = any;
type Data = {
endpoint: string;
data?: bodyData[];
method: "post" | "get" | "delete" | "patch";
};
// type ApiResponse<T = unknown> = {
// status: number;
// statusText: string;
// data: T;
// };
// create the test server stuff
const testServers = [
{ token: "test1", port: 8940 },
{ token: "test2", port: 8941 },
{ token: "test3", port: 8942 },
];
const agent = new https.Agent({
rejectUnauthorized: false,
});
export const prodEndpointCreation = async (endpoint: string) => {
let url = "";
//get the plant token
const plantToken = process.env.PROD_PLANT_TOKEN ?? "test1";
// check if we are a test server
const testServer = testServers.some((server) => server.token === plantToken);
// await db
// .select()
// .from(settings)
// .where(eq(settings.name, "dbServer"));
if (testServer) {
//filter out what testserver we are
const test = testServers.filter((t) => t.token === plantToken);
// "https://usmcd1vms036.alpla.net:8942/application/public/v1.0/DemandManagement/ORDERS"
url = `https://${process.env.PROD_SERVER}.alpla.net:${test[0]?.port}/application${endpoint}`;
return url;
} else {
url = `https://${plantToken}prod.alpla.net/application${endpoint}`;
return url;
}
};
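// e.g. with PROD_PLANT_TOKEN=test2 and PROD_SERVER=usmcd1vms036 (values taken from the comments above):
// -> https://usmcd1vms036.alpla.net:8941/application<endpoint>
// a non-test token such as "xyz" would resolve to https://xyzprod.alpla.net/application<endpoint>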
/**
*
* @param data
* @param timeoutDelay
* @returns
*/
export const runProdApi = async (data: Data) => {
const url = await prodEndpointCreation(data.endpoint);
const { data: d, error } = await tryCatch(
axios({
method: data.method as string,
url,
data: data.data ? data.data[0] : undefined,
headers: {
"X-API-Key": process.env.TEC_API_KEY || "",
"Content-Type": "application/json",
},
validateStatus: () => true,
httpsAgent: agent,
}),
);
switch (d?.status) {
case 200:
return returnFunc({
success: true,
level: "info",
module: "utils",
subModule: "prodEndpoint",
message: "Data from prod endpoint",
data: d.data,
notify: false,
});
case 401:
return returnFunc({
success: false,
level: "error",
module: "utils",
subModule: "prodEndpoint",
message: "Data from prod endpoint",
data: d.data,
notify: false,
});
case 400:
return returnFunc({
success: false,
level: "error",
module: "utils",
subModule: "prodEndpoint",
message: "Data from prod endpoint",
data: d.data,
notify: false,
});
}
if (error) {
return returnFunc({
success: false,
level: "error",
module: "utils",
subModule: "prodEndpoint",
message: "Failed to get data from the prod endpoint",
data: error as any,
notify: true,
});
}
};
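
A minimal usage sketch (the ORDERS endpoint string comes from the comment in prodEndpointCreation; everything else is hypothetical):

const releases = await runProdApi({
endpoint: "/public/v1.0/DemandManagement/ORDERS",
method: "get",
});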

View File

@@ -1,7 +1,7 @@
import type { Response } from "express";
import { createLogger } from "../logger/logger.controller.js";
interface Data<T = unknown[]> {
export interface ReturnHelper<T = unknown[]> {
success: boolean;
module:
| "system"
@@ -10,27 +10,14 @@ interface Data<T = unknown[]> {
| "datamart"
| "utils"
| "opendock"
| "notification";
subModule:
| "db"
| "labeling"
| "printer"
| "prodSql"
| "query"
| "sendmail"
| "auth"
| "datamart"
| "jobs"
| "apt"
| "settings"
| "get"
| "update"
| "delete"
| "post"
| "notification"
| "delete"
| "printing";
level: "info" | "error" | "debug" | "fatal";
| "email"
| "purchase"
| "tcp"
| "logistics";
subModule: string;
level: "info" | "error" | "debug" | "fatal" | "warn";
message: string;
room?: string;
data?: T;
@@ -51,7 +38,7 @@ interface Data<T = unknown[]> {
* data: [] the data that will be passed back
* notify: false by default this is to send a notification to a users email to alert them of an issue.
*/
export const returnFunc = (data: Data) => {
export const returnFunc = (data: ReturnHelper) => {
const notify = data.notify ? data.notify : false;
const room = data.room;
const log = createLogger({ module: data.module, subModule: data.subModule });
@@ -61,13 +48,14 @@ export const returnFunc = (data: Data) => {
log.info({ notify: notify, room }, data.message);
break;
case "error":
log.error({ notify: notify, error: data.data, room }, data.message);
log.error({ notify: notify, stack: data.data ?? [], room }, data.message);
break;
case "debug":
log.debug({ notify: notify, room }, data.message);
log.debug({ notify: notify, stack: data.data ?? [], room }, data.message);
break;
case "fatal":
log.fatal({ notify: notify, room }, data.message);
log.fatal({ notify: notify, stack: data.data ?? [], room }, data.message);
}
// api section to return
@@ -83,7 +71,7 @@ export const returnFunc = (data: Data) => {
export function apiReturn(
res: Response,
opts: Data & { status?: number },
opts: ReturnHelper & { status?: number },
optional?: unknown, // leave this as unknown so we can pass an object or an array over.
): Response {
const result = returnFunc(opts);

View File

@@ -88,7 +88,7 @@ export const sendEmail = async (data: EmailData) => {
level: "error",
module: "utils",
subModule: "sendmail",
message: `Error sending Email to : ${data.email}`,
message: `Error sending Email to : ${data.email}, Error: ${error.message}`,
data: [{ error: error }],
notify: false,
});

View File

@@ -5,13 +5,17 @@ meta {
}
get {
url: {{url}}/api/datamart/:name
url: {{url}}/api/datamart/:name?historical=x
body: none
auth: inherit
}
params:query {
historical: x
}
params:path {
name: activeArticles
name: inventory
}
settings {

View File

@@ -14,7 +14,7 @@ body:json {
{
"userId":"m6AbQXFwOXoX3YKLfwWgq2LIdDqS5jqv",
"notificationId": "0399eb2a-39df-48b7-9f1c-d233cec94d2e",
"emails": ["blake.mattes@alpla.com","cowchmonkey@gmail.com"]
"emails": ["blake.matthes@alpla.com","blake.matthes@alpla.com"]
}
}

View File

@@ -11,6 +11,11 @@ services:
ports:
#- "${VITE_PORT:-4200}:4200"
- "3600:3000"
dns:
- 10.193.9.250
- 10.193.9.251 # your internal DNS server
dns_search:
- alpla.net # or your internal search suffix
environment:
- NODE_ENV=production
- LOG_LEVEL=info

File diff suppressed because it is too large

View File

@@ -26,6 +26,8 @@
"radix-ui": "^1.4.3",
"react": "^19.1.1",
"react-dom": "^19.1.1",
"react-markdown": "^10.1.0",
"remark-gfm": "^4.0.1",
"shadcn": "^4.0.8",
"socket.io-client": "^4.8.3",
"sonner": "^2.0.7",
@@ -36,6 +38,7 @@
},
"devDependencies": {
"@eslint/js": "^9.36.0",
"@tailwindcss/typography": "^0.5.19",
"@tanstack/router-plugin": "^1.166.7",
"@types/react": "^19.1.13",
"@types/react-dom": "^19.1.9",

Binary file not shown (image, 5.8 KiB).

Binary file not shown (image, 27 KiB).

Some files were not shown because too many files have changed in this diff