31 Commits

Author SHA1 Message Date
c7af1901aa feat(app state): settings are now global and get updated on the fly 2025-08-05 12:40:34 -05:00
2473bfa702 feat(app): changes to dynamically load systems based on settings 2025-08-05 12:40:11 -05:00
4dd842b3b8 docs(.env-example): updated to include discord info 2025-08-04 21:25:39 -05:00
89ef04cc6f feat(discord): added in a way to get panic messages that would crash the server, or fatal errors 2025-08-04 21:23:06 -05:00
3cec883356 Merge branch 'main' of https://git.tuffraid.net/cowch/logistics_support_tool 2025-08-04 06:54:26 -05:00
0ecbe29ec1 refactor(app): moved all db and log setup to one initialize spot 2025-08-04 06:54:21 -05:00
188331c1ad ci(backend): fixed go.sum to have all correct pkg 2025-08-03 09:22:57 -05:00
486e4fb6b8 test(ws): testing ws connection on the frontend 2025-08-01 15:54:52 -05:00
c775bb3354 refactor(correction to folder structure): restructured to best-practice folder layout before we got too deep 2025-08-01 15:54:20 -05:00
b6968b7b67 chore(release): v0.0.1-alpha.6 2025-07-30 19:39:25 -05:00
a0aa75c5a0 refactor(settings): changed config to settings and added in the update method for this as well
strict fields on the updates so we can only change what we intend here
2025-07-30 19:35:13 -05:00
78be07c8bb ci(hotreload): added in air for hot reloading 2025-07-30 19:31:02 -05:00
0575a34422 feat(settings): migrated all settings endpoints confirmed as well for updates 2025-07-29 20:27:45 -05:00
3bc3801ffb refactor(config): changed to settings to match the other lst in node; makes it easier to manage 2025-07-29 20:13:35 -05:00
4368111311 fix(websocket): errors in saving client info during ping-pong 2025-07-29 20:13:05 -05:00
daf9e8a966 perf(websocket): added in base url to help with ssl stuff and iis 2025-07-29 18:07:10 -05:00
8a08d3eac6 refactor(wrapper): removed the logger stuff so we don't fill up space 2025-07-29 18:06:27 -05:00
a761a3634b fix(wrapper): corrections to properly handle websockets :D 2025-07-29 15:58:21 -05:00
a1a30cffd1 refactor(ws): ws logging and channel manager added; no auth currently 2025-07-29 11:29:59 -05:00
6a631be909 docs(docker): docs about the custom network; the db is separated 2025-07-25 12:14:51 -05:00
75c17d2065 test(iis): wrapper test for ws 2025-07-25 12:14:05 -05:00
63c053b38c docs(wss): more ws stuff 2025-07-25 12:13:47 -05:00
5bcbdaf3d0 feat(ws server): added in a websocket on port system to help with better logging 2025-07-25 12:13:19 -05:00
074032f20d refactor(app port): changed to have the port be dynamic on the iis side
docker will default to 8080; it can be adjusted via docker compose, or by passing the same env variable.
2025-07-23 07:36:18 -05:00
13e282e815 fix(update server): fixed to make sure everything is stopped before doing the remaining update 2025-07-22 20:00:43 -05:00
6c8ac33be7 refactor(createzip): added in env-example to the zip file 2025-07-22 19:59:55 -05:00
92ce51eb7c refactor(build): added back in the build name stuff 2025-07-22 19:59:29 -05:00
52ef39fd5c feat(logging): added in db and logging with websocket 2025-07-22 19:59:06 -05:00
623e19f028 refactor(docker compose example): added in postgres stuff plus network 2025-07-22 19:58:26 -05:00
14dd87e335 docs(.env example): added postgres example 2025-07-22 19:58:01 -05:00
52956ecaa4 docs(dockerbuild): comments as a reminder for myself 2025-07-22 07:03:29 -05:00
45 changed files with 2334 additions and 180 deletions


@@ -1,9 +1,16 @@
# uncomment this to run in production
# APP_ENV=production
# Server port that will allow vite to talk to the backend.
VITE_SERVER_PORT=4000
# lstv2 loc
LSTV2="C\drive\loc"
# discord - this is used to monitor the logs and make sure we never have a critical shutdown.
# this will also cover other critical stuff like nice label and some other events, to make sure we are still in a good spot and don't need to jump in
WEBHOOK=
# dev stuff below
# Gitea Info
@@ -12,7 +19,17 @@ GITEA_USERNAME=username
GITEA_REPO=logistics_support_tool
GITEA_TOKEN=ad8eac91a01e3a1885a1dc10
# postgres db
DB_HOST=localhost
DB_PORT=5433
DB_USER=username
DB_PASSWORD=password
DB_NAME=lst # db must be created before you start the app
# dev locs
DEV_FOLDER=C\drive\loc
ADMUSER=username
ADMPASSWORD=password
# Build number info
BUILD_NAME=leBlfRaj
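The DB_* variables above are read by the Go backend at startup. A minimal sketch of assembling the postgres DSN from them, mirroring the `fmt.Sprintf` in `backend/internal/db/db.go` (the `getenv` fallback helper is an assumption for illustration, not code from the repo):

```go
package main

import (
	"fmt"
	"os"
)

// getenv returns the value of key, or fallback when the variable is unset.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

// buildDSN assembles a postgres DSN from the DB_* variables shown in the env example.
func buildDSN() string {
	return fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s",
		getenv("DB_HOST", "localhost"),
		getenv("DB_PORT", "5433"),
		getenv("DB_USER", "username"),
		getenv("DB_PASSWORD", "password"),
		getenv("DB_NAME", "lst"))
}

func main() {
	fmt.Println(buildDSN())
}
```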

.gitignore

@@ -10,6 +10,7 @@ LstWrapper/obj
scripts/tmp
backend/docs
backend/frontend
testFolder
# ---> Go
# If you prefer the allow list template instead of the deny list, see community template:
@@ -191,4 +192,6 @@ backend/go.sum
BUILD_NUMBER
scripts/resetDanger.js
LstWrapper/Program_vite_as_Static.txt
LstWrapper/Program_proxy_backend.txt
scripts/stopPool.go
backend_bad_practice


@@ -3,6 +3,50 @@
All notable changes to LST will be documented in this file.
## [0.0.1-alpha.6](https://git.tuffraid.net/cowch/logistics_support_tool/compare/v0.0.1-alpha.5...v0.0.1-alpha.6) (2025-07-31)
### 🌟 Enhancements
* **logging:** added in db and logging with websocket ([52ef39f](https://git.tuffraid.net/cowch/logistics_support_tool/commit/52ef39fd5c129ed02ed9f38dbf7e49ae06807ad6))
* **settings:** migrated all settings endpoints confirmed as well for updates ([0575a34](https://git.tuffraid.net/cowch/logistics_support_tool/commit/0575a344229ba0ff5c0f47781c6d596e5c08e5eb))
* **ws server:** added in a websocket on port system to help with better logging ([5bcbdaf](https://git.tuffraid.net/cowch/logistics_support_tool/commit/5bcbdaf3d0e889729d4dce3df51f4330d7793868))
### 🐛 Bug fixes
* **update server:** fixed to make sure everything is stopped before doing the remaining update ([13e282e](https://git.tuffraid.net/cowch/logistics_support_tool/commit/13e282e815c1c95a0a5298ede2f6497cdf036440))
* **websocket:** errors in saving client info during ping-pong ([4368111](https://git.tuffraid.net/cowch/logistics_support_tool/commit/4368111311c48e73a11a6b24febdcc3be31a2a59))
* **wrapper:** corrections to properly handle websockets :D ([a761a36](https://git.tuffraid.net/cowch/logistics_support_tool/commit/a761a3634b6cb0aeeb571dd634bd158cee530779))
### 📚 Documentation
* **.env example:** added postgres example ([14dd87e](https://git.tuffraid.net/cowch/logistics_support_tool/commit/14dd87e335a63d76d64c07a15cf593cb286a9833))
* **dockerbuild:** comments as a reminder for myself ([52956ec](https://git.tuffraid.net/cowch/logistics_support_tool/commit/52956ecaa45cd556ba7832d6cb9ec2cf883d983a))
* **docker:** docs about the custom network; the db is separated ([6a631be](https://git.tuffraid.net/cowch/logistics_support_tool/commit/6a631be909b56a899af393510edffd70d7901a7a))
* **wss:** more ws stuff ([63c053b](https://git.tuffraid.net/cowch/logistics_support_tool/commit/63c053b38ce3ab3c3a94cda620da930f4e8615bd))
### 🛠️ Code Refactor
* **app port:** changed to have the port be dynamic on the iis side ([074032f](https://git.tuffraid.net/cowch/logistics_support_tool/commit/074032f20dc90810416c5899e44fefe86b52f98a))
* **build:** added back in the build name stuff ([92ce51e](https://git.tuffraid.net/cowch/logistics_support_tool/commit/92ce51eb7cf14ebb599c29fea4721e21badafbf6))
* **config:** changed to settings to match the other lst in node; makes it easier to manage ([3bc3801](https://git.tuffraid.net/cowch/logistics_support_tool/commit/3bc3801ffbb544a814d52c72e566e8d4866a7f38))
* **createzip:** added in env-example to the zip file ([6c8ac33](https://git.tuffraid.net/cowch/logistics_support_tool/commit/6c8ac33be73f203137b883e33feb625ccc0945e9))
* **docker compose example:** added in postgres stuff plus network ([623e19f](https://git.tuffraid.net/cowch/logistics_support_tool/commit/623e19f028d27fbfc46bee567ce78169cddba8fb))
* **settings:** changed config to settings and added in the update method for this as well ([a0aa75c](https://git.tuffraid.net/cowch/logistics_support_tool/commit/a0aa75c5a0b4a6e3a10b88bbcccf43d096e532b4))
* **wrapper:** removed the logger stuff so we don't fill up space ([8a08d3e](https://git.tuffraid.net/cowch/logistics_support_tool/commit/8a08d3eac6540b00ff23115936d56b4f22f16d53))
* **ws:** ws logging and channel manager added; no auth currently ([a1a30cf](https://git.tuffraid.net/cowch/logistics_support_tool/commit/a1a30cffd18e02e1061959fa3164f8237522880c))
### 🚀 Performance
* **websocket:** added in base url to help with ssl stuff and iis ([daf9e8a](https://git.tuffraid.net/cowch/logistics_support_tool/commit/daf9e8a966fd440723b1aec932a02873a5e27eb7))
### 📝 Testing Code
* **iis:** wrapper test for ws ([75c17d2](https://git.tuffraid.net/cowch/logistics_support_tool/commit/75c17d20659dcc5a762e00928709c4d3dd277284))
### 📈 Project changes
* **hotreload:** added in air for hot reloading ([78be07c](https://git.tuffraid.net/cowch/logistics_support_tool/commit/78be07c8bbf5acbcdac65351f693941f47be4cb5))
## [0.0.1-alpha.5](https://git.tuffraid.net/cowch/logistics_support_tool/compare/v0.0.1-alpha.4...v0.0.1-alpha.5) (2025-07-21)
### 🌟 Enhancements


@@ -1,65 +1,158 @@
var builder = WebApplication.CreateBuilder(args);
using System;
using System.IO;
using System.Net;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
// Configure clients
builder.Services.AddHttpClient("GoBackend", client => {
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient("GoBackend", client =>
{
client.BaseAddress = new Uri("http://localhost:8080");
});
var app = builder.Build();
// Handle trailing slash redirects
app.Use(async (context, next) => {
if (context.Request.Path.Equals("/lst", StringComparison.OrdinalIgnoreCase)) {
context.Response.Redirect("/lst/", permanent: true);
return;
// Enable WebSocket support
app.UseWebSockets();
// Logging method
void LogToFile(string message)
{
try
{
string logDir = Path.Combine(AppContext.BaseDirectory, "logs");
Directory.CreateDirectory(logDir);
string logFilePath = Path.Combine(logDir, "proxy_log.txt");
File.AppendAllText(logFilePath, $"{DateTime.UtcNow}: {message}{Environment.NewLine}");
}
catch (Exception ex)
{
// Handle potential errors writing to log file
Console.WriteLine($"Logging error: {ex.Message}");
}
}
// Middleware to handle WebSocket requests
app.Use(async (context, next) =>
{
if (context.WebSockets.IsWebSocketRequest && context.Request.Path.StartsWithSegments("/ws"))
{
// LogToFile($"WebSocket request received for path: {context.Request.Path}");
try
{
var backendUri = new UriBuilder("ws", "localhost", 8080)
{
Path = context.Request.Path,
Query = context.Request.QueryString.ToString()
}.Uri;
using var backendSocket = new ClientWebSocket();
await backendSocket.ConnectAsync(backendUri, context.RequestAborted);
using var frontendSocket = await context.WebSockets.AcceptWebSocketAsync();
var cts = new CancellationTokenSource();
// WebSocket forwarding tasks
var forwardToBackend = ForwardWebSocketAsync(frontendSocket, backendSocket, cts.Token);
var forwardToFrontend = ForwardWebSocketAsync(backendSocket, frontendSocket, cts.Token);
await Task.WhenAny(forwardToBackend, forwardToFrontend);
cts.Cancel();
}
catch (Exception ex)
{
//LogToFile($"WebSocket proxy error: {ex.Message}");
context.Response.StatusCode = (int)HttpStatusCode.BadGateway;
await context.Response.WriteAsync($"WebSocket proxy error: {ex.Message}");
}
}
else
{
await next();
}
});
// Proxy all requests to Go backend
app.Use(async (context, next) => {
// Skip special paths
if (context.Request.Path.StartsWithSegments("/.well-known")) {
// Middleware to handle HTTP requests
app.Use(async (context, next) =>
{
if (context.WebSockets.IsWebSocketRequest)
{
await next();
return;
}
var client = context.RequestServices.GetRequiredService<IHttpClientFactory>()
.CreateClient("GoBackend");
var client = context.RequestServices.GetRequiredService<IHttpClientFactory>().CreateClient("GoBackend");
try {
var request = new HttpRequestMessage(
new HttpMethod(context.Request.Method),
try
{
var request = new HttpRequestMessage(new HttpMethod(context.Request.Method),
context.Request.Path + context.Request.QueryString);
// Copy headers
foreach (var header in context.Request.Headers) {
if (!request.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray())) {
foreach (var header in context.Request.Headers)
{
if (!request.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray()))
{
request.Content ??= new StreamContent(context.Request.Body);
request.Content.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray());
}
}
if (context.Request.ContentLength > 0) {
if (context.Request.ContentLength > 0 && request.Content == null)
{
request.Content = new StreamContent(context.Request.Body);
}
var response = await client.SendAsync(request);
var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, context.RequestAborted);
context.Response.StatusCode = (int)response.StatusCode;
foreach (var header in response.Headers) {
foreach (var header in response.Headers)
{
context.Response.Headers[header.Key] = header.Value.ToArray();
}
if (response.Content.Headers.ContentType != null) {
context.Response.ContentType = response.Content.Headers.ContentType.ToString();
foreach (var header in response.Content.Headers)
{
context.Response.Headers[header.Key] = header.Value.ToArray();
}
context.Response.Headers.Remove("transfer-encoding");
await response.Content.CopyToAsync(context.Response.Body);
}
catch (HttpRequestException) {
context.Response.StatusCode = 502;
catch (HttpRequestException ex)
{
LogToFile($"HTTP proxy error: {ex.Message}");
context.Response.StatusCode = (int)HttpStatusCode.BadGateway;
await context.Response.WriteAsync($"Backend request failed: {ex.Message}");
}
});
async Task ForwardWebSocketAsync(WebSocket source, WebSocket destination, CancellationToken cancellationToken)
{
var buffer = new byte[4 * 1024];
try
{
while (source.State == WebSocketState.Open &&
destination.State == WebSocketState.Open &&
!cancellationToken.IsCancellationRequested)
{
var result = await source.ReceiveAsync(new ArraySegment<byte>(buffer), cancellationToken);
if (result.MessageType == WebSocketMessageType.Close)
{
await destination.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, "Closing", cancellationToken);
break;
}
await destination.SendAsync(new ArraySegment<byte>(buffer, 0, result.Count), result.MessageType, result.EndOfMessage, cancellationToken);
}
}
catch (WebSocketException ex)
{
LogToFile($"WebSocket forwarding error: {ex.Message}");
await destination.CloseOutputAsync(WebSocketCloseStatus.InternalServerError, "Error", cancellationToken);
}
}
app.Run();


@@ -1,36 +1,24 @@
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<system.webServer>
<!-- Enable WebSockets -->
<webSocket enabled="true" receiveBufferLimit="4194304" pingInterval="00:01:00" />
<rewrite>
<rules>
<!-- Redirect root to /lst/ -->
<rule name="Root Redirect" stopProcessing="true">
<match url="^$" />
<action type="Redirect" url="/lst/" redirectType="Permanent" />
</rule>
<!-- Proxy static assets -->
<rule name="Static Assets" stopProcessing="true">
<match url="^lst/assets/(.*)" />
<action type="Rewrite" url="http://localhost:8080/lst/assets/{R:1}" />
</rule>
<!-- Proxy API requests -->
<rule name="API Routes" stopProcessing="true">
<match url="^lst/api/(.*)" />
<action type="Rewrite" url="http://localhost:8080/lst/api/{R:1}" />
</rule>
<!-- Proxy all other requests -->
<rule name="Frontend Routes" stopProcessing="true">
<!-- Proxy all requests starting with /lst/ to the .NET wrapper (port 4000) -->
<rule name="Proxy to Wrapper" stopProcessing="true">
<match url="^lst/(.*)" />
<action type="Rewrite" url="http://localhost:8080/lst/{R:1}" />
<conditions>
<!-- Skip this rule if it's a WebSocket request -->
<add input="{HTTP_UPGRADE}" pattern="^WebSocket$" negate="true" />
</conditions>
<action type="Rewrite" url="http://localhost:8080/{R:1}" />
</rule>
</rules>
</rewrite>
<staticContent>
<clear />
<mimeMap fileExtension=".js" mimeType="application/javascript" />
<mimeMap fileExtension=".mjs" mimeType="application/javascript" />
<mimeMap fileExtension=".css" mimeType="text/css" />
@@ -38,6 +26,8 @@
</staticContent>
<handlers>
<!-- Let AspNetCoreModule handle all requests -->
<remove name="WebSocketHandler" />
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
</handlers>


@@ -10,3 +10,5 @@ this will also include a primary server to house all the common configs across a
The new lst will run in docker by building your own image and deploying, or by pulling the image down.
You will also be able to run it on windows or linux.
When developing in lst and you want hot reloads, install and configure https://github.com/air-verse/air

backend/.air.toml (new file)

@@ -3,35 +3,51 @@ module lst.net
go 1.24.3
require (
github.com/bensch777/discord-webhook-golang v0.0.6
github.com/gin-contrib/cors v1.7.6
github.com/gin-gonic/gin v1.10.1
github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.3
github.com/joho/godotenv v1.5.1
github.com/lib/pq v1.10.9
github.com/rs/zerolog v1.34.0
gorm.io/driver/postgres v1.6.0
gorm.io/gorm v1.30.1
)
require (
github.com/bytedance/sonic v1.11.6 // indirect
github.com/bytedance/sonic/loader v0.1.1 // indirect
github.com/cloudwego/base64x v0.1.4 // indirect
github.com/bytedance/sonic v1.13.3 // indirect
github.com/bytedance/sonic/loader v0.2.4 // indirect
github.com/cloudwego/base64x v0.1.5 // indirect
github.com/cloudwego/iasm v0.2.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.3 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.9 // indirect
github.com/gin-contrib/sse v1.1.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.20.0 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/go-playground/validator/v10 v10.26.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/pgx/v5 v5.7.5 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.7 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.12 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/crypto v0.23.0 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/sys v0.20.0 // indirect
golang.org/x/text v0.15.0 // indirect
google.golang.org/protobuf v1.34.1 // indirect
github.com/ugorji/go/codec v1.3.0 // indirect
golang.org/x/arch v0.18.0 // indirect
golang.org/x/crypto v0.40.0 // indirect
golang.org/x/net v0.41.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/text v0.27.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

backend/internal/db/db.go (new file)

@@ -0,0 +1,51 @@
package db
import (
"fmt"
"os"
"gorm.io/driver/postgres"
"gorm.io/gorm"
"lst.net/internal/models"
)
var DB *gorm.DB
type DBConfig struct {
DB *gorm.DB
DSN string
}
func InitDB() (*DBConfig, error) {
dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s",
os.Getenv("DB_HOST"),
os.Getenv("DB_PORT"),
os.Getenv("DB_USER"),
os.Getenv("DB_PASSWORD"),
os.Getenv("DB_NAME"))
var err error
DB, err = gorm.Open(postgres.Open(dsn), &gorm.Config{})
if err != nil {
return nil, fmt.Errorf("failed to connect to database: %v", err)
}
fmt.Println("✅ Connected to database")
// ensure the uuid-ossp extension exists so uuid_generate_v4() works
DB.Exec(`CREATE EXTENSION IF NOT EXISTS "uuid-ossp"`)
err = DB.AutoMigrate(&models.Log{}, &models.Settings{}) // &ClientRecord{}, &Servers{}
if err != nil {
return nil, fmt.Errorf("failed to auto-migrate models: %v", err)
}
fmt.Println("✅ Database migration completed successfully")
return &DBConfig{
DB: DB,
DSN: dsn,
}, nil
}


@@ -0,0 +1,21 @@
package models
import (
"time"
"github.com/google/uuid"
"gorm.io/gorm"
"lst.net/pkg"
)
type Log struct {
LogID uuid.UUID `gorm:"type:uuid;default:uuid_generate_v4();primaryKey" json:"id"`
Level string `gorm:"size:10;not null"` // "info", "error", etc.
Message string `gorm:"not null"`
Service string `gorm:"size:50"`
Metadata pkg.JSONB `gorm:"type:jsonb"` // fields (e.g., {"user_id": 123})
CreatedAt time.Time `gorm:"index"`
Checked bool `gorm:"type:boolean;default:false"`
UpdatedAt time.Time
DeletedAt gorm.DeletedAt `gorm:"index"`
}
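The Metadata field above relies on a `pkg.JSONB` type that is not shown in this diff. One plausible implementation, offered purely as an assumption about what `pkg` contains, is a map that satisfies `driver.Valuer` and `sql.Scanner` so GORM can persist it in a postgres `jsonb` column:

```go
package main

import (
	"database/sql/driver"
	"encoding/json"
	"fmt"
)

// JSONB stores arbitrary key/value data in a postgres jsonb column.
type JSONB map[string]interface{}

// Value serializes the map to JSON bytes for writing to the database.
func (j JSONB) Value() (driver.Value, error) {
	return json.Marshal(j)
}

// Scan deserializes a jsonb column value back into the map.
func (j *JSONB) Scan(src interface{}) error {
	b, ok := src.([]byte)
	if !ok {
		return fmt.Errorf("JSONB: expected []byte, got %T", src)
	}
	return json.Unmarshal(b, j)
}

func main() {
	v, _ := JSONB{"user_id": 123}.Value()
	fmt.Println(string(v.([]byte)))
}
```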


@@ -0,0 +1,32 @@
package models
import (
"time"
"github.com/google/uuid"
"lst.net/pkg"
)
type Servers struct {
ServerID uuid.UUID `gorm:"type:uuid;default:uuid_generate_v4();primaryKey" json:"id"`
ServerName string `gorm:"size:50;not null"`
ServerDNS string `gorm:"size:25;not null"`
PlantToken string `gorm:"size:10;not null"`
IPAddress string `gorm:"size:16;not null"`
GreatPlainsPlantCode int `gorm:"size:10;not null"`
StreetAddress string `gorm:"size:255;not null"`
CityState string `gorm:"size:50;not null"`
Zipcode int `gorm:"size:13;not null"`
ContactEmail string `gorm:"size:255"`
ContactPhone string `gorm:"size:255"`
CustomerTiAcc string `gorm:"size:255"`
LstServerPort int `gorm:"size:255;not null"`
Active bool `gorm:"type:boolean;default:true"`
LerverLoc string `gorm:"size:255;not null"`
LastUpdated time.Time `gorm:"index"`
ShippingHours pkg.JSONB `gorm:"type:jsonb;default:'[{\"early\": \"06:30\", \"late\": \"23:00\"}]'"`
TiPostTime pkg.JSONB `gorm:"type:jsonb;default:'[{\"from\": \"24\", \"to\": \"24\"}]'"`
OtherSettings pkg.JSONB `gorm:"type:jsonb;default:'[{\"specialInstructions\": \"something for ti\", \"active\": false}]'"`
IsUpgrading bool `gorm:"type:boolean;default:true"`
AlplaProdApiKey string `gorm:"size:255"`
}


@@ -0,0 +1,20 @@
package models
import (
"time"
"github.com/google/uuid"
"gorm.io/gorm"
)
type Settings struct {
SettingID uuid.UUID `gorm:"type:uuid;default:uuid_generate_v4();primaryKey" json:"id"`
Name string `gorm:"uniqueIndex;not null"`
Description string `gorm:"type:text"`
Value string `gorm:"not null"`
Enabled bool `gorm:"default:true"`
AppService string `gorm:"default:system"`
CreatedAt time.Time `gorm:"index"`
UpdatedAt time.Time `gorm:"index"`
DeletedAt gorm.DeletedAt `gorm:"index"`
}


@@ -0,0 +1,21 @@
package models
import (
"time"
"github.com/google/uuid"
"lst.net/pkg"
)
type ClientRecord struct {
ClientID uuid.UUID `gorm:"type:uuid;default:uuid_generate_v4();primaryKey" json:"id"`
APIKey string `gorm:"not null"`
IPAddress string `gorm:"not null"`
UserAgent string `gorm:"size:255"`
ConnectedAt time.Time `gorm:"index"`
LastHeartbeat time.Time `gorm:"column:last_heartbeat"`
Channels pkg.JSONB `gorm:"type:jsonb"`
CreatedAt time.Time
UpdatedAt time.Time
DisconnectedAt *time.Time `gorm:"column:disconnected_at"`
}


@@ -0,0 +1,179 @@
package ws
import (
"encoding/json"
"log"
"strings"
"sync"
"lst.net/pkg/logger"
)
type Channel struct {
Name string
Clients map[*Client]bool
Register chan *Client
Unregister chan *Client
Broadcast chan []byte
lock sync.RWMutex
}
var (
channels = make(map[string]*Channel)
channelsMu sync.RWMutex
)
// InitializeChannels creates and returns all channels
func InitializeChannels() {
channelsMu.Lock()
defer channelsMu.Unlock()
channels["logServices"] = NewChannel("logServices")
channels["labels"] = NewChannel("labels")
// Add more channels here as needed
}
func NewChannel(name string) *Channel {
return &Channel{
Name: name,
Clients: make(map[*Client]bool),
Register: make(chan *Client),
Unregister: make(chan *Client),
Broadcast: make(chan []byte),
}
}
func GetChannel(name string) (*Channel, bool) {
channelsMu.RLock()
defer channelsMu.RUnlock()
ch, exists := channels[name]
return ch, exists
}
func GetAllChannels() map[string]*Channel {
channelsMu.RLock()
defer channelsMu.RUnlock()
chs := make(map[string]*Channel)
for k, v := range channels {
chs[k] = v
}
return chs
}
func StartAllChannels() {
channelsMu.RLock()
defer channelsMu.RUnlock()
for _, ch := range channels {
go ch.RunChannel()
}
}
func CleanupChannels() {
channelsMu.Lock()
defer channelsMu.Unlock()
for _, ch := range channels {
close(ch.Broadcast)
// Add any other cleanup needed
}
channels = make(map[string]*Channel)
}
func StartBroadcasting(broadcaster chan logger.Message, channels map[string]*Channel) {
logger := logger.New()
go func() {
for msg := range broadcaster {
switch msg.Channel {
case "logServices":
// Just forward the message - filtering happens in RunChannel()
messageBytes, err := json.Marshal(msg)
if err != nil {
logger.Error("Error marshaling message", "websocket", map[string]interface{}{
"errors": err,
})
continue
}
channels["logServices"].Broadcast <- messageBytes
case "labels":
// Future labels handling
messageBytes, err := json.Marshal(msg)
if err != nil {
logger.Error("Error marshaling message", "websocket", map[string]interface{}{
"errors": err,
})
continue
}
channels["labels"].Broadcast <- messageBytes
default:
log.Printf("Received message for unknown channel: %s", msg.Channel)
}
}
}()
}
func contains(slice []string, item string) bool {
// Empty filter slice means "match all"
if len(slice) == 0 {
return true
}
// Case-insensitive comparison
item = strings.ToLower(item)
for _, s := range slice {
if strings.ToLower(s) == item {
return true
}
}
return false
}
// Updated Channel.RunChannel() for logServices filtering
func (ch *Channel) RunChannel() {
for {
select {
case client := <-ch.Register:
ch.lock.Lock()
ch.Clients[client] = true
ch.lock.Unlock()
case client := <-ch.Unregister:
ch.lock.Lock()
delete(ch.Clients, client)
ch.lock.Unlock()
case message := <-ch.Broadcast:
var msg logger.Message
if err := json.Unmarshal(message, &msg); err != nil {
continue
}
ch.lock.RLock()
for client := range ch.Clients {
// Special filtering for logServices
if ch.Name == "logServices" {
logLevel, _ := msg.Meta["level"].(string)
logService, _ := msg.Meta["service"].(string)
levelMatch := len(client.LogLevels) == 0 || contains(client.LogLevels, logLevel)
serviceMatch := len(client.Services) == 0 || contains(client.Services, logService)
if !levelMatch || !serviceMatch {
continue
}
}
select {
case client.Send <- message:
default:
ch.Unregister <- client
}
}
ch.lock.RUnlock()
}
}
}
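The core of RunChannel above is a fan-out loop: registered clients each get a copy of every broadcast, and a client whose send buffer is full is skipped so one slow consumer cannot block the loop. A self-contained sketch of that pattern, stripped of the logger filtering and the register/unregister channels:

```go
package main

import (
	"fmt"
	"sync"
)

// hub is a stripped-down version of the Channel fan-out loop:
// clients register a buffered channel and every broadcast is
// delivered to each of them, skipping any whose buffer is full.
type hub struct {
	mu      sync.RWMutex
	clients map[chan string]bool
}

func newHub() *hub { return &hub{clients: make(map[chan string]bool)} }

func (h *hub) register(c chan string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.clients[c] = true
}

func (h *hub) broadcast(msg string) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	for c := range h.clients {
		select {
		case c <- msg: // delivered
		default: // slow client: drop rather than block the loop
		}
	}
}

func main() {
	h := newHub()
	c := make(chan string, 1)
	h.register(c)
	h.broadcast("hello")
	fmt.Println(<-c)
}
```

In the real code the `default:` branch pushes the client onto `ch.Unregister` instead of silently dropping, which evicts consumers that stop reading.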


@@ -0,0 +1,292 @@
package ws
import (
"fmt"
"sync"
"sync/atomic"
"time"
"github.com/google/uuid"
"github.com/gorilla/websocket"
"gorm.io/gorm"
"lst.net/internal/models"
"lst.net/pkg"
"lst.net/pkg/logger"
)
var (
clients = make(map[*Client]bool)
clientsMu sync.RWMutex
)
type Client struct {
ClientID uuid.UUID `json:"client_id"`
Conn *websocket.Conn `json:"-"` // Excluded from JSON
APIKey string `json:"api_key"`
IPAddress string `json:"ip_address"`
UserAgent string `json:"user_agent"`
Send chan []byte `json:"-"` // Excluded from JSON
Channels map[string]bool `json:"channels"`
LogLevels []string `json:"levels,omitempty"`
Services []string `json:"services,omitempty"`
Labels []string `json:"labels,omitempty"`
ConnectedAt time.Time `json:"connected_at"`
done chan struct{} // For graceful shutdown
isAlive atomic.Bool
lastActive time.Time // Tracks last activity
}
func (c *Client) SaveToDB(log *logger.CustomLogger, db *gorm.DB) {
// Convert c.Channels (map[string]bool) to map[string]interface{} for JSONB
channels := make(map[string]interface{})
for ch := range c.Channels {
channels[ch] = true
}
clientRecord := &models.ClientRecord{
APIKey: c.APIKey,
IPAddress: c.IPAddress,
UserAgent: c.UserAgent,
Channels: pkg.JSONB(channels),
ConnectedAt: time.Now(),
LastHeartbeat: time.Now(),
}
if err := db.Create(&clientRecord).Error; err != nil {
log.Error("❌ Error saving client", "websocket", map[string]interface{}{
"error": err,
})
} else {
c.ClientID = clientRecord.ClientID
c.ConnectedAt = clientRecord.ConnectedAt
clientData := fmt.Sprintf("A new client %v, just connected", c.ClientID)
log.Info(clientData, "websocket", map[string]interface{}{})
}
}
func (c *Client) MarkDisconnected(log *logger.CustomLogger, db *gorm.DB) {
clientData := fmt.Sprintf("Client %v Disconnected", c.ClientID)
log.Info(clientData, "websocket", map[string]interface{}{})
now := time.Now()
res := db.Model(&models.ClientRecord{}).
Where("client_id = ?", c.ClientID).
Updates(map[string]interface{}{
"disconnected_at": &now,
})
if res.RowsAffected == 0 {
log.Info("⚠️ No rows updated for client_id", "websocket", map[string]interface{}{
"clientID": c.ClientID,
})
}
if res.Error != nil {
log.Error("❌ Error updating disconnected_at", "websocket", map[string]interface{}{
"clientID": c.ClientID,
"error": res.Error,
})
}
}
func (c *Client) SafeClient() *Client {
return &Client{
ClientID: c.ClientID,
APIKey: c.APIKey,
IPAddress: c.IPAddress,
UserAgent: c.UserAgent,
Channels: c.Channels,
LogLevels: c.LogLevels,
Services: c.Services,
Labels: c.Labels,
ConnectedAt: c.ConnectedAt,
}
}
// GetAllClients returns safe representations of all clients
func GetAllClients() []*Client {
clientsMu.RLock()
defer clientsMu.RUnlock()
var clientList []*Client
for client := range clients {
clientList = append(clientList, client.SafeClient())
}
return clientList
}
// GetClientsByChannel returns clients in a specific channel
func GetClientsByChannel(channel string) []*Client {
clientsMu.RLock()
defer clientsMu.RUnlock()
var channelClients []*Client
for client := range clients {
if client.Channels[channel] {
channelClients = append(channelClients, client.SafeClient())
}
}
return channelClients
}
// heartbeat configuration
const (
pingPeriod = 30 * time.Second
pongWait = 60 * time.Second
writeWait = 10 * time.Second
)
func (c *Client) StartHeartbeat(log *logger.CustomLogger, db *gorm.DB) {
log.Debug("Started heartbeat", "websocket", map[string]interface{}{})
ticker := time.NewTicker(pingPeriod)
defer ticker.Stop()
for {
select {
case <-ticker.C:
if !c.isAlive.Load() {
return
}
c.Conn.SetWriteDeadline(time.Now().Add(writeWait))
if err := c.Conn.WriteMessage(websocket.PingMessage, nil); err != nil {
log.Error("Heartbeat failed", "websocket", map[string]interface{}{
"client_id": c.ClientID,
"error": err,
})
c.Close(log, db)
return
}
now := time.Now()
res := db.Model(&models.ClientRecord{}).
Where("client_id = ?", c.ClientID).
Updates(map[string]interface{}{
"last_heartbeat": &now,
})
if res.RowsAffected == 0 {
log.Info("⚠️ No rows updated for client_id", "websocket", map[string]interface{}{
"clientID": c.ClientID,
})
}
if res.Error != nil {
log.Error("❌ Error updating last_heartbeat", "websocket", map[string]interface{}{
"clientID": c.ClientID,
"error": res.Error,
})
}
clientStuff := fmt.Sprintf("Heartbeat completed on: %v", c.ClientID)
log.Info(clientStuff, "websocket", map[string]interface{}{
"clientID": c.ClientID,
})
case <-c.done:
return
}
}
}
func (c *Client) Close(log *logger.CustomLogger, db *gorm.DB) {
if c.isAlive.CompareAndSwap(true, false) { // Atomic swap
close(c.done)
c.Conn.Close()
// Add any other cleanup here
c.MarkDisconnected(log, db)
}
}
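Close uses `atomic.Bool.CompareAndSwap` so the cleanup block (closing `done`, closing the socket, marking the DB record) runs exactly once even when Close is invoked concurrently from the heartbeat and the read path. The idiom in isolation, with a counter standing in for the cleanup work (the `closer` type here is an illustrative reduction, not repo code):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// closer demonstrates the idempotent-shutdown idiom from Client.Close:
// only the one caller that wins the CompareAndSwap performs cleanup.
type closer struct {
	alive  atomic.Bool
	done   chan struct{}
	closed int // counts how many times cleanup actually ran
}

func newCloser() *closer {
	c := &closer{done: make(chan struct{})}
	c.alive.Store(true)
	return c
}

func (c *closer) Close() {
	if c.alive.CompareAndSwap(true, false) {
		close(c.done) // safe: reached by exactly one caller
		c.closed++
	}
}

func main() {
	c := newCloser()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.Close() }()
	}
	wg.Wait()
	fmt.Println(c.closed) // cleanup ran once despite 10 concurrent calls
}
```

Without the swap, a double Close would panic on `close(done)` and double-close the websocket.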
func (c *Client) startServerPings(log *logger.CustomLogger, db *gorm.DB) {
ticker := time.NewTicker(60 * time.Second) // Ping every 60s
defer ticker.Stop()
for {
select {
case <-ticker.C:
c.Conn.SetWriteDeadline(time.Now().Add(10 * time.Second))
if err := c.Conn.WriteMessage(websocket.PingMessage, nil); err != nil {
log.Error("Server Ping failed", "websocket", map[string]interface{}{
"clientID": c.ClientID,
"error": err,
})
c.Close(log, db)
return
}
case <-c.done:
return
}
}
}
func (c *Client) markActive() {
c.lastActive = time.Now() // No mutex needed if single-writer
}
func (c *Client) IsActive() bool {
return time.Since(c.lastActive) < 45*time.Second // 1.5x ping interval
}
func (c *Client) updateHeartbeat(log *logger.CustomLogger, db *gorm.DB) {
// Verify the DB connection before using it
if db == nil {
log.Error("DB connection is nil", "websocket", map[string]interface{}{})
return
}
now := time.Now()
res := db.Model(&models.ClientRecord{}).
Where("client_id = ?", c.ClientID).
Updates(map[string]interface{}{
"last_heartbeat": &now,
})
if res.Error != nil {
log.Error("❌ Error updating last_heartbeat", "websocket", map[string]interface{}{
"clientID": c.ClientID,
"error": res.Error,
})
return
}
if res.RowsAffected == 0 {
log.Info("⚠️ No rows updated for client_id", "websocket", map[string]interface{}{
"clientID": c.ClientID,
})
}
}
// TODO: implement connection stats later
// Add to your admin endpoint
// type ConnectionStats struct {
// TotalConnections int `json:"total_connections"`
// ActiveConnections int `json:"active_connections"`
// AvgDuration string `json:"avg_duration"`
// }
// func GetConnectionStats() ConnectionStats {
// // Implement your metrics tracking
// }
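The Close path above relies on `atomic.Bool.CompareAndSwap` so that concurrent callers (the heartbeat goroutine and the read-loop teardown) run cleanup exactly once. A minimal sketch of that pattern, independent of the gorilla/websocket types used here (the `conn` type and counter are illustrative, not part of the codebase):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// conn mimics a client whose Close must run exactly once even when
// several goroutines detect a failure at the same time.
type conn struct {
	isAlive atomic.Bool
	done    chan struct{}
	closes  atomic.Int32 // counts how many times cleanup actually ran
}

func newConn() *conn {
	c := &conn{done: make(chan struct{})}
	c.isAlive.Store(true)
	return c
}

func (c *conn) Close() {
	// Only the first caller flips true -> false; everyone else returns.
	if c.isAlive.CompareAndSwap(true, false) {
		close(c.done) // safe: closed exactly once
		c.closes.Add(1)
	}
}

func main() {
	c := newConn()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.Close() }()
	}
	wg.Wait()
	fmt.Println("cleanup ran", c.closes.Load(), "time(s)")
}
```

Without the swap, a double `close(c.done)` would panic; the atomic flag makes Close idempotent without a mutex.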

View File

@@ -0,0 +1,229 @@
package ws
import (
"encoding/json"
"net/http"
"time"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
"gorm.io/gorm"
"lst.net/pkg/logger"
)
type JoinPayload struct {
Channel string `json:"channel"`
APIKey string `json:"apiKey"`
Services []string `json:"services,omitempty"`
Levels []string `json:"levels,omitempty"`
Labels []string `json:"labels,omitempty"`
}
var upgrader = websocket.Upgrader{
CheckOrigin: func(r *http.Request) bool { return true }, // allow all origins; customize for prod
HandshakeTimeout: 15 * time.Second,
ReadBufferSize: 1024,
WriteBufferSize: 1024,
EnableCompression: true,
}
func SocketHandler(c *gin.Context, channels map[string]*Channel, log *logger.CustomLogger, db *gorm.DB) {
// Upgrade HTTP to WebSocket
conn, err := upgrader.Upgrade(c.Writer, c.Request, nil)
if err != nil {
log.Error("WebSocket upgrade failed", "websocket", map[string]interface{}{"error": err})
return
}
//defer conn.Close()
// Create new client
client := &Client{
Conn: conn,
APIKey: "exampleAPIKey", // placeholder until the client's join payload arrives
Send: make(chan []byte, 256), // Buffered channel
Channels: make(map[string]bool),
IPAddress: c.ClientIP(),
UserAgent: c.Request.UserAgent(),
done: make(chan struct{}),
}
client.isAlive.Store(true)
// Add to global clients map
clientsMu.Lock()
clients[client] = true
clientsMu.Unlock()
// Save initial connection to DB
client.SaveToDB(log, db)
// Set handlers
conn.SetPingHandler(func(string) error {
return nil // Auto-responds with pong
})
conn.SetPongHandler(func(string) error {
client.markActive() // Track last pong time
client.updateHeartbeat(log, db)
return nil
})
// Start server-side ping ticker
go client.startServerPings(log, db)
defer func() {
// Unregister from all channels
for channelName := range client.Channels {
if ch, exists := channels[channelName]; exists {
ch.Unregister <- client
}
}
// Remove from global clients map
clientsMu.Lock()
delete(clients, client)
clientsMu.Unlock()
// Mark disconnected in DB
client.MarkDisconnected(log, db)
// Close connection
conn.Close()
log.Info("Client disconnected", "websocket", map[string]interface{}{
"client": client.ClientID,
})
}()
// Send welcome message immediately
welcomeMsg := map[string]string{
"status": "connected",
"message": "Welcome to the WebSocket server. Send subscription request to begin.",
}
if err := conn.WriteJSON(welcomeMsg); err != nil {
log.Error("Failed to send welcome message", "websocket", map[string]interface{}{"error": err})
return
}
// Message handling goroutine
go func() {
defer func() {
// Cleanup on disconnect
for channelName := range client.Channels {
if ch, exists := channels[channelName]; exists {
ch.Unregister <- client
}
}
close(client.Send)
client.MarkDisconnected(log, db)
}()
for {
_, msg, err := conn.ReadMessage()
if err != nil {
if websocket.IsUnexpectedCloseError(err, websocket.CloseGoingAway) {
log.Error("Client disconnected unexpectedly", "websocket", map[string]interface{}{"error": err})
}
break
}
var payload JoinPayload
if err := json.Unmarshal(msg, &payload); err != nil {
conn.WriteJSON(map[string]string{"error": "invalid payload format"})
continue
}
// Validate API key (implement your own validateAPIKey function)
// if payload.APIKey == "" || !validateAPIKey(payload.APIKey) {
// conn.WriteJSON(map[string]string{"error": "invalid or missing API key"})
// continue
// }
if payload.APIKey == "" {
conn.WriteMessage(websocket.TextMessage, []byte("Missing API Key"))
continue
}
client.APIKey = payload.APIKey
// Handle channel subscription
switch payload.Channel {
case "logServices":
// Unregister from other channels if needed
if client.Channels["labels"] {
channels["labels"].Unregister <- client
delete(client.Channels, "labels")
}
// Update client filters
client.Services = payload.Services
client.LogLevels = payload.Levels
// Register to channel
channels["logServices"].Register <- client
client.Channels["logServices"] = true
conn.WriteJSON(map[string]string{
"message": "You are now subscribed to the logServices channel",
"status": "subscribed",
"channel": "logServices",
})
case "labels":
// Unregister from other channels if needed
if client.Channels["logServices"] {
channels["logServices"].Unregister <- client
delete(client.Channels, "logServices")
}
// Set label filters if provided
if payload.Labels != nil {
client.Labels = payload.Labels
}
// Register to channel
channels["labels"].Register <- client
client.Channels["labels"] = true
// Update DB record
client.SaveToDB(log, db)
// if err := client.SaveToDB(); err != nil {
// log.Println("Failed to update client labels:", err)
// }
conn.WriteJSON(map[string]interface{}{
"message": "You are now subscribed to the label channel",
"status": "subscribed",
"channel": "labels",
"filters": client.Labels,
})
default:
conn.WriteJSON(map[string]string{
"error": "invalid channel",
"available_channels": "logServices, labels",
})
}
}
}()
// Send messages to client
for message := range client.Send {
if err := conn.WriteMessage(websocket.TextMessage, message); err != nil {
log.Error("Write error", "websocket", map[string]interface{}{"error": err})
break
}
}
}

View File

@@ -0,0 +1,79 @@
package ws
// set up the Postgres LISTEN/NOTIFY notifier
// -- Only needs to be run once in DB
// CREATE OR REPLACE FUNCTION notify_new_log() RETURNS trigger AS $$
// BEGIN
// PERFORM pg_notify('new_log', row_to_json(NEW)::text);
// RETURN NEW;
// END;
// $$ LANGUAGE plpgsql;
// DROP TRIGGER IF EXISTS new_log_trigger ON logs;
// CREATE TRIGGER new_log_trigger
// AFTER INSERT ON logs
// FOR EACH ROW EXECUTE FUNCTION notify_new_log();
import (
"encoding/json"
"fmt"
"os"
"time"
"github.com/lib/pq"
"lst.net/pkg/logger"
)
func LogServices(broadcaster chan logger.Message, log *logger.CustomLogger) {
log.Info("[LogServices] started - single channel for all logs", "websocket", map[string]interface{}{})
dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s sslmode=disable",
os.Getenv("DB_HOST"),
os.Getenv("DB_PORT"),
os.Getenv("DB_USER"),
os.Getenv("DB_PASSWORD"),
os.Getenv("DB_NAME"),
)
listener := pq.NewListener(dsn, 10*time.Second, time.Minute, nil)
err := listener.Listen("new_log")
if err != nil {
log.Panic("Failed to LISTEN on new_log", "logger", map[string]interface{}{
"error": err.Error(),
})
}
log.Info("Listening for all logs through single logServices channel...", "websocket", map[string]interface{}{})
for {
select {
case notify := <-listener.Notify:
if notify != nil {
var logData map[string]interface{}
if err := json.Unmarshal([]byte(notify.Extra), &logData); err != nil {
log.Error("Failed to unmarshal notification payload", "logger", map[string]interface{}{
"error": err.Error(),
})
continue
}
// Always send to logServices channel
broadcaster <- logger.Message{
Channel: "logServices",
Data: logData,
Meta: map[string]interface{}{
"level": logData["level"],
"service": logData["service"],
},
}
}
case <-time.After(90 * time.Second):
go func() {
if err := listener.Ping(); err != nil {
log.Error("Listener ping failed", "logger", map[string]interface{}{"error": err.Error()})
}
}()
}
}
}
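The listener loop above only transforms the `pg_notify` payload (the `row_to_json(NEW)` text from the trigger) into a broadcast message. That transformation can be isolated and exercised without a database; this sketch assumes a `Message` shape like the one in `lst.net/pkg/logger` (the local type here is a stand-in):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Message mirrors the shape pushed onto the broadcaster channel
// (the real type lives in lst.net/pkg/logger).
type Message struct {
	Channel string
	Data    map[string]interface{}
	Meta    map[string]interface{}
}

// toMessage converts a pg_notify payload (JSON text of the inserted
// log row) into a message routed to the single logServices channel.
func toMessage(extra string) (Message, error) {
	var logData map[string]interface{}
	if err := json.Unmarshal([]byte(extra), &logData); err != nil {
		return Message{}, err
	}
	return Message{
		Channel: "logServices",
		Data:    logData,
		Meta: map[string]interface{}{
			"level":   logData["level"],
			"service": logData["service"],
		},
	}, nil
}

func main() {
	msg, err := toMessage(`{"level":"error","service":"ocp","msg":"printer offline"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(msg.Channel, msg.Meta["level"], msg.Meta["service"])
}
```

Keeping the parse step in a pure function like this makes the unmarshal path testable even though the `pq.Listener` itself needs a live Postgres connection.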

View File

@@ -0,0 +1,56 @@
package ws
import (
"net/http"
"github.com/gin-gonic/gin"
"gorm.io/gorm"
"lst.net/pkg/logger"
)
var (
broadcaster = make(chan logger.Message)
)
func RegisterSocketRoutes(r *gin.Engine, base_url string, log *logger.CustomLogger, db *gorm.DB) {
// Initialize all channels
InitializeChannels()
// Start channel processors
StartAllChannels()
// Start background services
go LogServices(broadcaster, log)
go StartBroadcasting(broadcaster, channels)
// WebSocket route
r.GET(base_url+"/ws", func(c *gin.Context) {
SocketHandler(c, channels, log, db)
})
r.GET(base_url+"/ws/clients", AdminAuthMiddleware(), handleGetClients)
}
func handleGetClients(c *gin.Context) {
channel := c.Query("channel")
var clientList []*Client
if channel != "" {
clientList = GetClientsByChannel(channel)
} else {
clientList = GetAllClients()
}
c.JSON(http.StatusOK, gin.H{
"count": len(clientList),
"clients": clientList,
})
}
func AdminAuthMiddleware() gin.HandlerFunc {
return func(c *gin.Context) {
// Implement your admin authentication logic
// Example: Check API key or JWT token
c.Next()
}
}

View File

@@ -0,0 +1,41 @@
package middleware
import (
"github.com/gin-gonic/gin"
"lst.net/internal/system/settings"
)
func SettingCheckMiddleware(settingName string) gin.HandlerFunc {
return func(c *gin.Context) {
// Debug: Log the setting name we're checking
//log.Printf("Checking setting '%s' for path: %s", settingName, c.Request.URL.Path)
// Get the current setting value
value, err := settings.GetString(settingName)
if err != nil {
//log.Printf("Error getting setting '%s': %v", settingName, err)
c.AbortWithStatusJSON(404, gin.H{
"error": "endpoint not available",
"details": "setting error",
})
return
}
// Debug: Log the actual value received
//log.Printf("Setting '%s' value: '%s'", settingName, value)
// Changed condition to check for "1" (enable) instead of "0" (disable)
if value != "1" {
//log.Printf("Setting '%s' not enabled (value: '%s')", settingName, value)
c.AbortWithStatusJSON(404, gin.H{
"error": "endpoint not available",
"details": "required feature is disabled",
})
return
}
// Debug: Log successful check
//log.Printf("Setting check passed for '%s'", settingName)
c.Next()
}
}
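The middleware's gate boils down to one predicate: the route is served only when the named setting exists and its value is exactly `"1"`. A standalone sketch of that decision, with `getString` standing in for `settings.GetString` (the fixture map and values are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// fake stands in for the settings cache so the gate logic can be
// exercised without a database.
var fake = map[string]string{
	"testingApiFunction": "1",
	"ocmeService":        "0",
}

func getString(name string) (string, error) {
	v, ok := fake[name]
	if !ok {
		return "", errors.New("setting not found")
	}
	return v, nil
}

// settingEnabled mirrors the middleware's decision: serve the route
// only when the setting exists and its value is exactly "1".
func settingEnabled(name string) bool {
	v, err := getString(name)
	return err == nil && v == "1"
}

func main() {
	fmt.Println(settingEnabled("testingApiFunction")) // enabled
	fmt.Println(settingEnabled("ocmeService"))        // disabled
	fmt.Println(settingEnabled("missing"))            // unknown settings fail closed
}
```

Note both failure modes collapse to the same 404 in the middleware, so a missing setting and a disabled one are indistinguishable to the caller; that appears intentional (the endpoint should not reveal which features exist).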

View File

@@ -0,0 +1,66 @@
package router
import (
"net/http"
"os"
"github.com/gin-contrib/cors"
"github.com/gin-gonic/gin"
"gorm.io/gorm"
"lst.net/internal/notifications/ws"
"lst.net/internal/router/middleware"
"lst.net/internal/system/servers"
"lst.net/internal/system/settings"
"lst.net/pkg/logger"
)
func Setup(db *gorm.DB, basePath string, log *logger.CustomLogger) *gin.Engine {
r := gin.Default()
if os.Getenv("APP_ENV") == "production" {
gin.SetMode(gin.ReleaseMode)
}
// Enable CORS (adjust origins as needed)
r.Use(cors.New(cors.Config{
AllowOrigins: []string{"*"}, // Allow all origins (change in production)
AllowMethods: []string{"GET", "OPTIONS", "POST", "DELETE", "PATCH", "CONNECT"},
AllowHeaders: []string{"Origin", "Cache-Control", "Content-Type"},
ExposeHeaders: []string{"Content-Length"},
AllowCredentials: true,
AllowWebSockets: true,
}))
// Serve Docusaurus static files
r.StaticFS(basePath+"/docs", http.Dir("docs"))
r.StaticFS(basePath+"/app", http.Dir("frontend"))
// all routes to their respective systems.
ws.RegisterSocketRoutes(r, basePath, log, db)
settings.RegisterSettingsRoutes(r, basePath, log, db)
servers.RegisterServersRoutes(r, basePath, log, db)
r.GET(basePath+"/api/ping", middleware.SettingCheckMiddleware("testingApiFunction"), func(c *gin.Context) {
log.Info("Checking if the server is up", "system", map[string]interface{}{
"endpoint": "/api/ping",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
c.JSON(200, gin.H{"message": "pong"})
})
r.Any(basePath+"/", func(c *gin.Context) { errorApiLoc(c, log) })
return r
}
func errorApiLoc(c *gin.Context, log *logger.CustomLogger) {
log.Error("API endpoint hit that does not exist", "system", map[string]interface{}{
"endpoint": c.Request.URL.Path,
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
c.JSON(http.StatusBadRequest, gin.H{"message": "looks like you have encountered a route that does not exist"})
}

View File

@@ -0,0 +1,65 @@
package servers
import (
"reflect"
"strings"
"github.com/gin-gonic/gin"
"gorm.io/gorm"
"lst.net/internal/models"
"lst.net/pkg/logger"
)
func getServers(c *gin.Context, log *logger.CustomLogger, db *gorm.DB) {
servers, err := GetServers(log, db)
log.Info("Current Servers", "system", map[string]interface{}{
"endpoint": "/api/v1/servers",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
if err != nil {
log.Error("Failed to get servers", "system", map[string]interface{}{
"endpoint": "/api/v1/servers",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
"error": err,
})
c.JSON(500, gin.H{"message": "There was an error getting the servers", "error": err})
return
}
c.JSON(200, gin.H{"message": "Current servers", "data": servers})
}
func GetServers(log *logger.CustomLogger, db *gorm.DB) ([]map[string]interface{}, error) {
var servers []models.Servers
res := db.Find(&servers)
if res.Error != nil {
return nil, res.Error
}
toLowercase := func(s models.Servers) map[string]interface{} {
t := reflect.TypeOf(s)
v := reflect.ValueOf(s)
data := make(map[string]interface{})
for i := 0; i < t.NumField(); i++ {
field := strings.ToLower(t.Field(i).Name)
data[field] = v.Field(i).Interface()
}
return data
}
var lowercaseServers []map[string]interface{}
for _, server := range servers {
lowercaseServers = append(lowercaseServers, toLowercase(server))
}
return lowercaseServers, nil
}
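The reflection helper in `GetServers` turns each struct into a map whose keys are the lowercased field names. A self-contained sketch of that conversion (the `Server` type here is a stand-in for `models.Servers`):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// Server is a stand-in for models.Servers; the conversion only
// depends on exported fields, not the concrete model.
type Server struct {
	Name string
	Port int
}

// toLowercaseMap mirrors the helper in GetServers: every exported
// struct field becomes a map key with its name lowercased.
func toLowercaseMap(s interface{}) map[string]interface{} {
	t := reflect.TypeOf(s)
	v := reflect.ValueOf(s)
	data := make(map[string]interface{})
	for i := 0; i < t.NumField(); i++ {
		data[strings.ToLower(t.Field(i).Name)] = v.Field(i).Interface()
	}
	return data
}

func main() {
	m := toLowercaseMap(Server{Name: "usmcd1vms006", Port: 4000})
	fmt.Println(m["name"], m["port"])
}
```

One caveat of this approach: `strings.ToLower` flattens CamelCase, so a field like `AppService` becomes `appservice`, not `app_service`; if the frontend expects snake_case, JSON struct tags would be the more reliable route.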

View File

@@ -0,0 +1,21 @@
package servers
import (
"gorm.io/gorm"
"lst.net/internal/models"
"lst.net/pkg/logger"
)
func NewServer(serverData models.Servers, log *logger.CustomLogger, db *gorm.DB) (string, error) {
err := db.Create(&serverData).Error
if err != nil {
log.Error("There was an error adding the new server", "server", map[string]interface{}{
"error": err,
})
return "There was an error adding the new server", err
}
return "New server was just created", nil
}

View File

@@ -0,0 +1,13 @@
package servers
import (
"github.com/gin-gonic/gin"
"gorm.io/gorm"
"lst.net/pkg/logger"
)
func RegisterServersRoutes(l *gin.Engine, baseUrl string, log *logger.CustomLogger, db *gorm.DB) {
s := l.Group(baseUrl + "/api/v1")
s.GET("/servers", func(c *gin.Context) { getServers(c, log, db) })
}

View File

@@ -0,0 +1,59 @@
package servers
// import (
// "encoding/json"
// "github.com/gin-gonic/gin"
// "lst.net/internal/db"
// "lst.net/pkg/logger"
// )
// func updateSettingById(c *gin.Context) {
// log := logger.New()
// settingID := c.Param("id")
// if settingID == "" {
// c.JSON(500, gin.H{"message": "Invalid data"})
// log.Error("Invalid data", "system", map[string]interface{}{
// "endpoint": "/api/v1/settings",
// "client_ip": c.ClientIP(),
// "user_agent": c.Request.UserAgent(),
// })
// return
// }
// var setting SettingUpdateInput
// //err := c.ShouldBindBodyWithJSON(&setting)
// decoder := json.NewDecoder(c.Request.Body) // more strict and will force us to have correct data
// decoder.DisallowUnknownFields()
// if err := decoder.Decode(&setting); err != nil {
// c.JSON(400, gin.H{"message": "Invalid request body", "error": err.Error()})
// log.Error("Invalid request body", "system", map[string]interface{}{
// "endpoint": "/api/v1/settings",
// "client_ip": c.ClientIP(),
// "user_agent": c.Request.UserAgent(),
// "error": err,
// })
// return
// }
// if err := UpdateServer(db.DB, settingID, setting); err != nil {
// c.JSON(500, gin.H{"message": "Failed to update setting", "error": err.Error()})
// log.Error("Failed to update setting", "system", map[string]interface{}{
// "endpoint": "/api/v1/settings",
// "client_ip": c.ClientIP(),
// "user_agent": c.Request.UserAgent(),
// "error": err,
// })
// return
// }
// c.JSON(200, gin.H{"message": "Setting was just updated", "data": setting})
// }
// func UpdateServer() (string, error) {
// return "Server was just updated", nil
// }

View File

@@ -0,0 +1,39 @@
package settings
import (
"gorm.io/gorm"
)
func GetAllSettings(db *gorm.DB) ([]map[string]interface{}, error) {
// Settings are served from the in-memory cache (see state.go),
// which is loaded from the DB at startup and refreshed on updates.
convertedSettings := GetMap()
return convertedSettings, nil
}

View File

@@ -0,0 +1,8 @@
package settings
type SettingUpdateInput struct {
Description *string `json:"description"`
Value *string `json:"value"`
Enabled *bool `json:"enabled"`
AppService *string `json:"app_service"`
}

View File

@@ -0,0 +1,88 @@
package settings
import (
"encoding/json"
"github.com/gin-gonic/gin"
"gorm.io/gorm"
"lst.net/pkg/logger"
)
func RegisterSettingsRoutes(l *gin.Engine, baseUrl string, log *logger.CustomLogger, db *gorm.DB) {
// seed the db on start up
SeedSettings(db, log)
s := l.Group(baseUrl + "/api/v1")
s.GET("/settings", func(c *gin.Context) { getSettings(c, log, db) })
s.PATCH("/settings/:id", func(c *gin.Context) { updateSettingById(c, log, db) })
}
func getSettings(c *gin.Context, log *logger.CustomLogger, db *gorm.DB) {
configs, err := GetAllSettings(db)
log.Info("Current Settings", "settings", map[string]interface{}{
"endpoint": "/api/v1/settings",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
if err != nil {
log.Error("Current Settings", "settings", map[string]interface{}{
"endpoint": "/api/v1/settings",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
"error": err,
})
c.JSON(500, gin.H{"message": "There was an error getting the settings", "error": err})
return
}
c.JSON(200, gin.H{"message": "Current settings", "data": configs})
}
func updateSettingById(c *gin.Context, log *logger.CustomLogger, db *gorm.DB) {
settingID := c.Param("id")
if settingID == "" {
c.JSON(500, gin.H{"message": "Invalid data"})
log.Error("Invalid data", "settings", map[string]interface{}{
"endpoint": "/api/v1/settings",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
return
}
var setting SettingUpdateInput
//err := c.ShouldBindBodyWithJSON(&setting)
decoder := json.NewDecoder(c.Request.Body) // more strict and will force us to have correct data
decoder.DisallowUnknownFields()
if err := decoder.Decode(&setting); err != nil {
c.JSON(400, gin.H{"message": "Invalid request body", "error": err.Error()})
log.Error("Invalid request body", "settings", map[string]interface{}{
"endpoint": "/api/v1/settings",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
"error": err,
})
return
}
if err := UpdateSetting(log, db, settingID, setting); err != nil {
c.JSON(500, gin.H{"message": "Failed to update setting", "error": err.Error()})
log.Error("Failed to update setting", "settings", map[string]interface{}{
"endpoint": "/api/v1/settings",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
"error": err,
})
return
}
c.JSON(200, gin.H{"message": "Setting was just updated", "data": setting})
}

View File

@@ -0,0 +1,128 @@
package settings
import (
"errors"
"fmt"
"gorm.io/gorm"
"lst.net/internal/models"
"lst.net/pkg/logger"
)
var seedConfigData = []models.Settings{
{Name: "serverPort", Description: "The port the server will listen on if not running in docker", Value: "4000", Enabled: true, AppService: "server"},
{Name: "server", Description: "The server we will use when connecting to the alplaprod sql", Value: "usmcd1vms006", Enabled: true, AppService: "server"},
{Name: "timezone", Value: "America/Chicago", Description: "What time zone is the server in this is used for cronjobs and some other time stuff", AppService: "server", Enabled: true},
{Name: "dbUser", Value: "alplaprod", Description: "What is the db userName", AppService: "server", Enabled: true},
{Name: "dbPass", Value: "b2JlbGl4", Description: "What is the db password", AppService: "server", Enabled: true},
{Name: "tcpPort", Value: "2222", Description: "TCP port for printers and the zedra cameras to connect and send data", AppService: "server", Enabled: true},
{Name: "prolinkCheck", Value: "1", Description: "Will prolink be considered to check if matches, mainly used in plants that do not fully utilize prolink + ocp", AppService: "production", Enabled: true},
{Name: "bookin", Value: "1", Description: "do we want to book in after a label is printed", AppService: "ocp", Enabled: true},
{Name: "dbServer", Value: "usmcd1vms036", Description: "What server is the prod db on?", AppService: "server", Enabled: true},
{Name: "printDelay", Value: "90", Description: "How long in seconds between prints", AppService: "ocp", Enabled: true},
{Name: "plantToken", Value: "test3", Description: "What is the plant token", AppService: "server", Enabled: true},
{Name: "dualPrinting", Value: "0", Description: "Does the plant have 2 machines that go to 1?", AppService: "ocp", Enabled: true},
{Name: "ocmeService", Value: "0", Description: "Is the ocme service enabled? This is generally only for Dayton.", AppService: "ocme", Enabled: true},
{Name: "fifoCheck", Value: "45", Description: "How far back do we want to check for fifo default 45, putting 0 will ignore.", AppService: "ocme", Enabled: true},
{Name: "dayCheck", Value: "3", Description: "how many days +/- to check for shipments in alplaprod", AppService: "ocme", Enabled: true},
{Name: "maxLotPerTruck", Value: "3", Description: "How many lots can we have per truck?", AppService: "ocme", Enabled: true},
{Name: "monitorAddress", Value: "8", Description: "Which address is monitored to limit the number of lots that can be added to a truck.", AppService: "ocme", Enabled: true},
{Name: "ocmeCycleCount", Value: "1", Description: "Are we allowing ocme cycle counts?", AppService: "ocme", Enabled: true},
{Name: "devDir", Value: "", Description: "This is the dev dir and strictly only for updating the servers.", AppService: "server", Enabled: true},
{Name: "demandMGTActivated", Value: "0", Description: "Do we allow for new fake edi?", AppService: "logistics", Enabled: true},
{Name: "qualityRequest", Value: "0", Description: "quality request module?", AppService: "quality", Enabled: true},
{Name: "ocpLogsCheck", Value: "4", Description: "How long do we want to allow logs to show that have not been cleared?", AppService: "ocp", Enabled: true},
{Name: "inhouseDelivery", Value: "0", Description: "Are we doing auto inhouse delivery?", AppService: "ocp", Enabled: true},
// dyco settings
{Name: "dycoConnect", Value: "0", Description: "Are we running the dyco system?", AppService: "dyco", Enabled: true},
{Name: "dycoPrint", Value: "0", Description: "Are we using the dyco to get the labels or the rfid?", AppService: "dyco", Enabled: true},
{Name: "strapperCheck", Value: "1", Description: "Are we monitoring the strapper for faults?", AppService: "dyco", Enabled: true},
// ocp
{Name: "ocpActive", Value: `1`, Description: "Are we printing on demand?", AppService: "ocp", Enabled: true},
{Name: "ocpCycleDelay", Value: `10`, Description: "How long between printer cycles do we want to monitor.", AppService: "ocp", Enabled: true},
{Name: "pNgAddress", Value: `139`, Description: "What is the address for p&g so we can make sure we have the correct fake edi forecast going in.", AppService: "logistics", Enabled: true},
{Name: "scannerID", Value: `500`, Description: "What scanner id will we be using for the app", AppService: "logistics", Enabled: true},
{Name: "scannerPort", Value: `50002`, Description: "What port instance will we be using?", AppService: "logistics", Enabled: true},
{Name: "stagingReturnLocations", Value: `30125,31523`, Description: "What are the staging location IDs we will use to select from, separated by commas", AppService: "logistics", Enabled: true},
{Name: "testingApiFunction", Value: `1`, Description: "This is a test to validate if we set to 0 it will actually not allow the route", AppService: "logistics", Enabled: true},
}
func SeedSettings(db *gorm.DB, log *logger.CustomLogger) error {
for _, cfg := range seedConfigData {
var existing models.Settings
err := db.Unscoped().Where("name = ?", cfg.Name).First(&existing).Error
if errors.Is(err, gorm.ErrRecordNotFound) {
// Not in the table yet, so create it.
if err := db.Create(&cfg).Error; err != nil {
log.Error("Failed to seed settings", "settings", map[string]interface{}{
"name": cfg.Name,
"error": err,
})
}
continue
}
if err != nil {
return err
}
if existing.DeletedAt.Valid {
// Undelete by setting DeletedAt to NULL
if err := db.Unscoped().Model(&existing).Update("DeletedAt", gorm.DeletedAt{}).Error; err != nil {
log.Error("Failed to undelete settings", "settings", map[string]interface{}{
"name": cfg.Name,
"error": err,
})
return err
}
}
if cfg.Enabled {
// Keep the stored row in sync with the seed data.
existing.Description = cfg.Description
existing.Name = cfg.Name
existing.AppService = cfg.AppService
if err := db.Save(&existing).Error; err != nil {
log.Error("Failed to update settings.", "settings", map[string]interface{}{
"name": cfg.Name,
"error": err,
})
return err
}
} else {
// Soft-delete the setting when its seed entry is disabled; this
// future-proofs the seeder in case we need to add it back later.
if err := db.Delete(&existing).Error; err != nil {
log.Error("Failed to delete settings.", "settings", map[string]interface{}{
"name": cfg.Name,
"error": err,
})
return err
}
log.Info(fmt.Sprintf("Deleted disabled setting: %s", cfg.Name), "settings", map[string]interface{}{})
}
}
log.Info("All settings added or updated.", "settings", map[string]interface{}{})
return nil
}
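The seeder reduces to four outcomes per seed entry: create, undelete, update, or delete. Pulling that decision into a pure function makes the branching easy to verify; this is a sketch of the intended rules as I read them, not code from the repository:

```go
package main

import "fmt"

// action is the outcome the seeder should take for one seed entry.
type action string

const (
	create   action = "create"
	undelete action = "undelete"
	update   action = "update"
	remove   action = "delete"
)

// decide mirrors the intended seeding rules: missing rows are created,
// soft-deleted rows are restored when the seed entry is enabled,
// existing enabled rows are refreshed, and disabled entries are
// soft-deleted so they can be added back later.
func decide(exists, softDeleted, enabled bool) action {
	switch {
	case !exists:
		return create
	case softDeleted && enabled:
		return undelete
	case enabled:
		return update
	default:
		return remove
	}
}

func main() {
	fmt.Println(decide(false, false, true)) // new seed entry
	fmt.Println(decide(true, true, true))   // previously deleted, now enabled
	fmt.Println(decide(true, false, true))  // present and enabled
	fmt.Println(decide(true, false, false)) // present but disabled
}
```

A table-driven test over these eight input combinations is a cheap way to pin the seeder's behavior before refactoring the database calls around it.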

View File

@@ -0,0 +1,110 @@
package settings
import (
"errors"
"fmt"
"reflect"
"strings"
"sync"
"gorm.io/gorm"
"lst.net/internal/models"
)
var (
// Global state
appSettings []models.Settings
appSettingsLock sync.RWMutex
dbInstance *gorm.DB
)
// Initialize loads settings into memory at startup
func Initialize(db *gorm.DB) error {
dbInstance = db
return Refresh()
}
// Refresh reloads settings from DB (call after updates)
func Refresh() error {
appSettingsLock.Lock()
defer appSettingsLock.Unlock()
var settings []models.Settings
if err := dbInstance.Find(&settings).Error; err != nil {
return err
}
appSettings = settings
return nil
}
// GetAll returns a thread-safe copy of settings
func GetAll() []models.Settings {
appSettingsLock.RLock()
defer appSettingsLock.RUnlock()
// Return copy to prevent external modification
copied := make([]models.Settings, len(appSettings))
copy(copied, appSettings)
return copied
}
// GetMap returns settings as []map[string]interface{}
func GetMap() []map[string]interface{} {
return convertToMap(GetAll())
}
// convertToMap helper (move your existing conversion logic here)
func convertToMap(settings []models.Settings) []map[string]interface{} {
toLowercase := func(s models.Settings) map[string]interface{} {
t := reflect.TypeOf(s)
v := reflect.ValueOf(s)
data := make(map[string]interface{})
for i := 0; i < t.NumField(); i++ {
field := strings.ToLower(t.Field(i).Name)
data[field] = v.Field(i).Interface()
}
return data
}
// Convert each struct in settings slice to a map with lowercase keys
var lowercaseSettings []map[string]interface{}
for _, setting := range settings {
lowercaseSettings = append(lowercaseSettings, toLowercase(setting))
}
return lowercaseSettings
}
func GetString(name string) (string, error) {
appSettingsLock.RLock()
defer appSettingsLock.RUnlock()
for _, s := range appSettings {
if s.Name == name {
return s.Value, nil
}
}
}
return "", errors.New("setting not found")
}
func SetTemp(name, value string) {
appSettingsLock.Lock()
defer appSettingsLock.Unlock()
for i, s := range appSettings {
if s.Name == name {
appSettings[i].Value = value
return
}
}
// If not found, add new setting
appSettings = append(appSettings, models.Settings{
Name: name,
Value: value,
})
}
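`GetAll` above copies the slice under a read lock before returning it, so callers can mutate their snapshot without corrupting the shared cache. A reduced sketch of that copy-on-read pattern (the local `setting` type and globals are stand-ins for the package's state):

```go
package main

import (
	"fmt"
	"sync"
)

type setting struct{ Name, Value string }

var (
	cache []setting
	mu    sync.RWMutex
)

// getAll returns a copy under a read lock, mirroring settings.GetAll:
// callers can mutate the returned slice without touching the cache.
func getAll() []setting {
	mu.RLock()
	defer mu.RUnlock()
	out := make([]setting, len(cache))
	copy(out, cache)
	return out
}

func main() {
	cache = []setting{{"ocpActive", "1"}}
	snapshot := getAll()
	snapshot[0].Value = "0" // mutate the copy only
	fmt.Println(cache[0].Value, snapshot[0].Value)
}
```

One thing this does not protect against: if a setting struct ever gains a pointer or slice field, the shallow `copy` would still share that inner data between cache and snapshot.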

View File

@@ -0,0 +1,56 @@
package settings
import (
"gorm.io/gorm"
"lst.net/internal/models"
"lst.net/pkg/logger"
)
func UpdateSetting(log *logger.CustomLogger, db *gorm.DB, id string, input SettingUpdateInput) error {
var cfg models.Settings
if err := db.Where("setting_id = ?", id).First(&cfg).Error; err != nil {
return err
}
updates := map[string]interface{}{}
if input.Description != nil {
updates["description"] = *input.Description
}
if input.Value != nil {
updates["value"] = *input.Value
}
if input.Enabled != nil {
updates["enabled"] = *input.Enabled
}
if input.AppService != nil {
updates["app_service"] = *input.AppService
}
if len(updates) == 0 {
return nil // nothing to update
}
settingUpdate := db.Model(&cfg).Updates(updates)
if settingUpdate.Error != nil {
log.Error("There was an error updating the setting", "settings", map[string]interface{}{
"error": settingUpdate.Error,
})
return settingUpdate.Error
}
if err := Refresh(); err != nil {
log.Error("There was an error refreshing the settings after a setting update", "settings", map[string]interface{}{
"error": err,
})
}
log.Info("The setting was just updated", "settings", map[string]interface{}{
"id": id,
"name": cfg.Name,
"updated": updates,
})
return nil
}
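`UpdateSetting` uses pointer fields on the input so that "field absent" and "field set to its zero value" can be told apart, then sends only the supplied fields to `db.Updates`. That map-building step can be sketched in isolation (the trimmed struct here omits `AppService` for brevity):

```go
package main

import "fmt"

// SettingUpdateInput uses pointer fields so "absent" and "zero value"
// can be told apart, matching the PATCH handler above.
type SettingUpdateInput struct {
	Description *string
	Value       *string
	Enabled     *bool
}

// buildUpdates collects only the fields the caller actually sent,
// the same pattern UpdateSetting uses before calling db.Updates.
func buildUpdates(in SettingUpdateInput) map[string]interface{} {
	u := map[string]interface{}{}
	if in.Description != nil {
		u["description"] = *in.Description
	}
	if in.Value != nil {
		u["value"] = *in.Value
	}
	if in.Enabled != nil {
		u["enabled"] = *in.Enabled
	}
	return u
}

func main() {
	v := "0"
	u := buildUpdates(SettingUpdateInput{Value: &v})
	fmt.Println(len(u), u["value"]) // only the "value" key is present
}
```

Combined with `DisallowUnknownFields` in the handler, this gives strict PATCH semantics: unknown keys are rejected at decode time, and omitted keys are simply left untouched in the row.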

View File

@@ -1,24 +1,59 @@
package main
import (
"errors"
"fmt"
"log"
"net/http"
"os"
"github.com/gin-gonic/gin"
"github.com/joho/godotenv"
"lst.net/internal/db"
"lst.net/internal/router"
"lst.net/internal/system/settings"
"lst.net/pkg/logger"
)
func main() {
// Load .env only in dev (not Docker/production)
log := logger.New()
if os.Getenv("RUNNING_IN_DOCKER") != "true" {
if err := godotenv.Load("../.env"); err != nil {
log.Info("Warning: .env file not found (ok in Docker/production)", "system", map[string]interface{}{})
}
}
// Initialize DB
if _, err := db.InitDB(); err != nil {
log.Panic("Database initialization failed, please check the server asap.", "db", map[string]interface{}{
"error": err.Error(),
"cause": errors.Unwrap(err),
"timeout": "30s",
"details": fmt.Sprintf("%+v", err), // Full stack trace if available
})
}
defer func() {
if r := recover(); r != nil {
sqlDB, _ := db.DB.DB()
sqlDB.Close()
log.Error("Recovered from panic during DB shutdown", "db", map[string]interface{}{
"panic": r,
})
}
}()
if err := settings.Initialize(db.DB); err != nil {
log.Panic("There was an error initializing the settings", "settings", map[string]interface{}{
"error": err,
})
}
// Long-lived processes (e.g. OCP monitoring) that run for the app's lifetime should be started here, with the db handle passed over.
// go ocp.MonitorPrinters
// go notifications.Processor
// Set basePath dynamically
basePath := "/"
@@ -26,72 +61,19 @@ func main() {
basePath = "/lst" // Dev only
}
// fmt.Println(name)
log.Info("Welcome to lst backend where all the fun happens.", "system", map[string]interface{}{})
// Init Gin router and pass DB to services
r := router.Setup(db.DB, basePath, log)
if os.Getenv("APP_ENV") == "production" {
gin.SetMode(gin.ReleaseMode)
}
// get the server port
port := "8080"
if os.Getenv("VITE_SERVER_PORT") != "" {
port = os.Getenv("VITE_SERVER_PORT")
}
// // --- Add Redirects Here ---
// // Redirect root ("/") to "/app" or "/lst/app"
// r.GET("/", func(c *gin.Context) {
// c.Redirect(http.StatusMovedPermanently, basePath+"/app")
// })
// // Redirect "/lst" (if applicable) to "/lst/app"
// if basePath == "/lst" {
// r.GET("/lst", func(c *gin.Context) {
// c.Redirect(http.StatusMovedPermanently, basePath+"/app")
// })
// }
// Serve Docusaurus static files
r.StaticFS(basePath+"/docs", http.Dir("docs"))
r.StaticFS(basePath+"/app", http.Dir("frontend"))
r.GET(basePath+"/api/ping", func(c *gin.Context) {
c.JSON(200, gin.H{"message": "pong"})
})
r.Any(basePath+"/api", errorApiLoc)
// // Serve static assets for Vite app
// r.Static("/lst/app/assets", "./dist/app/assets")
// // Catch-all for Vite app routes
// r.NoRoute(func(c *gin.Context) {
// path := c.Request.URL.Path
// // Don't handle API, assets, or docs
// if strings.HasPrefix(path, "/lst/api") ||
// strings.HasPrefix(path, "/lst/app/assets") ||
// strings.HasPrefix(path, "/lst/docs") {
// c.JSON(404, gin.H{"error": "Not found"})
// return
// }
// // Serve index.html for all /lst/app routes
// if strings.HasPrefix(path, "/lst/app") {
// c.File("./dist/app/index.html")
// return
// }
// c.JSON(404, gin.H{"error": "Not found"})
// })
if err := r.Run(":" + port); err != nil {
log.Panic("Server failed to start", "system", map[string]interface{}{
"error": err,
})
}
}
// func serveViteApp(c *gin.Context) {
// // Set proper Content-Type for HTML
// c.Header("Content-Type", "text/html")
// c.File("./dist/index.html")
// }
// func errorLoc(c *gin.Context) {
// c.JSON(http.StatusBadRequest, gin.H{"message": "welcome to lst system you might have just encountered an incorrect area of the app"})
// }
func errorApiLoc(c *gin.Context) {
c.JSON(http.StatusBadRequest, gin.H{"message": "looks like you have encountered an api route that does not exist"})
}

backend/pkg/json.go Normal file
View File

@@ -0,0 +1,3 @@
package pkg
type JSONB map[string]interface{}

View File

@@ -0,0 +1,18 @@
package logger
import (
"lst.net/internal/db"
"lst.net/internal/models"
"lst.net/pkg"
)
// CreateLog inserts a new log entry.
func CreateLog(level, message, service string, metadata pkg.JSONB) error {
log := models.Log{
Level: level,
Message: message,
Service: service,
Metadata: metadata,
}
return db.DB.Create(&log).Error
}

View File

@@ -0,0 +1,77 @@
package logger
import (
"encoding/json"
"log"
"os"
"time"
discordwebhook "github.com/bensch777/discord-webhook-golang"
)
func CreateDiscordMsg(message string) {
// Only send to Discord if a webhook URL is actually configured in the env.
if os.Getenv("WEBHOOK") != "" {
var webhookurl = os.Getenv("WEBHOOK")
host, _ := os.Hostname()
embed := discordwebhook.Embed{
Title: "A new crash report from lst.",
Color: 15277667,
Url: "https://avatars.githubusercontent.com/u/6016509?s=48&v=4",
Timestamp: time.Now(),
// Thumbnail: discordwebhook.Thumbnail{
// Url: "https://avatars.githubusercontent.com/u/6016509?s=48&v=4",
// },
// Author: discordwebhook.Author{
// Name: "Author Name",
// Icon_URL: "https://avatars.githubusercontent.com/u/6016509?s=48&v=4",
// },
Fields: []discordwebhook.Field{
discordwebhook.Field{
Name: host,
Value: message,
Inline: false,
},
// discordwebhook.Field{
// Name: "Error reason",
// Value: stack,
// Inline: false,
// },
// discordwebhook.Field{
// Name: "Field 3",
// Value: "Field Value 3",
// Inline: false,
// },
},
// Footer: discordwebhook.Footer{
// Text: "Footer Text",
// Icon_url: "https://avatars.githubusercontent.com/u/6016509?s=48&v=4",
// },
}
SendEmbed(webhookurl, embed)
} else {
return
}
}
func SendEmbed(link string, embeds discordwebhook.Embed) error {
logging := New()
logging.Info("new message being posted to discord", "logger", map[string]interface{}{
"message": "Message",
})
hook := discordwebhook.Hook{
Username: "Captain Hook",
Avatar_url: "https://avatars.githubusercontent.com/u/6016509?s=48&v=4",
Content: "Message",
Embeds: []discordwebhook.Embed{embeds},
}
payload, err := json.Marshal(hook)
if err != nil {
log.Printf("failed to marshal discord payload: %v", err)
return err
}
err = discordwebhook.ExecuteWebhook(link, payload)
return err
}
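To see what `SendEmbed` actually posts, the payload assembly can be sketched with local stand-in types; the struct and field names below are illustrative assumptions based on the literals above (the real `discord-webhook-golang` library defines its own types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local stand-ins for the webhook library's types, just to show the
// JSON shape that gets POSTed (field names are assumptions).
type field struct {
	Name   string `json:"name"`
	Value  string `json:"value"`
	Inline bool   `json:"inline"`
}

type embed struct {
	Title  string  `json:"title"`
	Color  int     `json:"color"`
	Fields []field `json:"fields"`
}

type hook struct {
	Username string  `json:"username"`
	Content  string  `json:"content"`
	Embeds   []embed `json:"embeds"`
}

// buildPayload mirrors the hook assembled in SendEmbed: one embed, one
// field whose name is the host and whose value is the crash message.
func buildPayload(host, message string) ([]byte, error) {
	h := hook{
		Username: "Captain Hook",
		Content:  "Message",
		Embeds: []embed{{
			Title:  "A new crash report from lst.",
			Color:  15277667,
			Fields: []field{{Name: host, Value: message}},
		}},
	}
	return json.Marshal(h)
}

func main() {
	p, err := buildPayload("server01", "db connection lost")
	fmt.Println(err == nil, len(p) > 0) // prints "true true"
}
```

Marshaling to a byte slice first (rather than streaming) makes the hook easy to log or inspect before it is handed to the webhook executor.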

View File

@@ -0,0 +1,117 @@
package logger
import (
"encoding/json"
"fmt"
"os"
"strings"
"time"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
)
type CustomLogger struct {
consoleLogger zerolog.Logger
}
type Message struct {
Channel string `json:"channel"`
Data map[string]interface{} `json:"data"`
Meta map[string]interface{} `json:"meta,omitempty"`
}
// New creates a configured CustomLogger.
func New() *CustomLogger {
// Colorized console output
consoleWriter := zerolog.ConsoleWriter{
Out: os.Stderr,
TimeFormat: "2006-01-02 15:04:05",
}
return &CustomLogger{
consoleLogger: zerolog.New(consoleWriter).
With().
Timestamp().
Logger(),
}
}
func PrettyFormat(level, message string, metadata map[string]interface{}) string {
timestamp := time.Now().Format("2006-01-02 15:04:05")
base := fmt.Sprintf("[%s] %s | Message: %s", strings.ToUpper(level), timestamp, message)
if len(metadata) > 0 {
metaJSON, _ := json.Marshal(metadata)
return fmt.Sprintf("%s | Metadata: %s", base, string(metaJSON))
}
return base
}
func (l *CustomLogger) logToPostgres(level, message, service string, metadata map[string]interface{}) {
err := CreateLog(level, message, service, metadata)
if err != nil {
// Fallback to console if DB fails
log.Error().Err(err).Msg("Failed to write log to PostgreSQL")
}
}
// --- Level-Specific Methods ---
func (l *CustomLogger) Info(message, service string, fields map[string]interface{}) {
l.consoleLogger.Info().Fields(fields).Msg(message)
l.logToPostgres("info", message, service, fields)
//PostLog(PrettyFormat("info", message, fields)) // Broadcast pretty message
}
func (l *CustomLogger) Warn(message, service string, fields map[string]interface{}) {
l.consoleLogger.Warn().Fields(fields).Msg(message)
l.logToPostgres("warn", message, service, fields)
//PostLog(PrettyFormat("warn", message, fields)) // Broadcast pretty message
// Custom logic for warnings (e.g., alerting)
if len(fields) > 0 {
l.consoleLogger.Warn().Msg("Additional warning context captured")
}
}
func (l *CustomLogger) Error(message, service string, fields map[string]interface{}) {
l.consoleLogger.Error().Fields(fields).Msg(message)
l.logToPostgres("error", message, service, fields)
//PostLog(PrettyFormat("error", message, fields)) // Broadcast pretty message
// Custom logic for errors (e.g., alerting)
if len(fields) > 0 {
l.consoleLogger.Warn().Msg("Additional error context captured")
}
}
func (l *CustomLogger) Panic(message, service string, fields map[string]interface{}) {
// Log to console (colored, with fields)
l.consoleLogger.Error().
Str("service", service).
Fields(fields).
Msg(message + " (PANIC)") // Explicitly mark as panic
// Log to PostgreSQL (sync to ensure it's saved before crashing)
err := CreateLog("panic", message, service, fields)
if err != nil {
l.consoleLogger.Error().Err(err).Msg("Failed to save panic log to PostgreSQL")
}
// Additional context (optional)
if len(fields) > 0 {
l.consoleLogger.Warn().Msg("Additional panic context captured")
}
CreateDiscordMsg(message)
panic(message)
}
func (l *CustomLogger) Debug(message, service string, fields map[string]interface{}) {
l.consoleLogger.Debug().Fields(fields).Msg(message)
l.logToPostgres("debug", message, service, fields)
}

View File

@@ -7,9 +7,20 @@ services:
no_cache: true
image: git.tuffraid.net/cowch/logistics_support_tool:latest
container_name: lst_backend
networks:
- docker-network
environment:
DB_HOST: postgres
DB_PORT: 5432
DB_USER: username
DB_PASSWORD: password
DB_NAME: lst
volumes:
- /path/to/backend/data:/data
ports:
- "8080:8080"
restart: unless-stopped
pull_policy: never
networks:
docker-network:
external: true

View File

@@ -1,10 +1,11 @@
import { useState } from 'react'
import reactLogo from './assets/react.svg'
import viteLogo from '/vite.svg'
import './App.css'
import { useState } from "react";
import reactLogo from "./assets/react.svg";
import viteLogo from "/vite.svg";
import "./App.css";
import WebSocketViewer from "./WebSocketTest";
function App() {
const [count, setCount] = useState(0)
const [count, setCount] = useState(0);
return (
<>
@@ -13,7 +14,11 @@ function App() {
<img src={viteLogo} className="logo" alt="Vite logo" />
</a>
<a href="https://react.dev" target="_blank">
<img src={reactLogo} className="logo react" alt="React logo" />
<img
src={reactLogo}
className="logo react"
alt="React logo"
/>
</a>
</div>
<h1>Vite + React</h1>
@@ -28,8 +33,9 @@ function App() {
<p className="read-the-docs">
Click on the Vite and React logos to learn more
</p>
<WebSocketViewer />
</>
)
);
}
export default App
export default App;

View File

@@ -0,0 +1,41 @@
import { useEffect, useRef } from "react";
const WebSocketViewer = () => {
const ws = useRef<WebSocket | null>(null);
useEffect(() => {
// Connect to your Go backend WebSocket endpoint
ws.current = new WebSocket(
(window.location.protocol === "https:" ? "wss://" : "ws://") +
window.location.host +
"/lst/ws"
);
ws.current.onopen = () => {
console.log("[WebSocket] Connected");
};
ws.current.onmessage = (event: MessageEvent) => {
console.log("[WebSocket] Message received:", event.data);
};
ws.current.onerror = (error: Event) => {
console.error("[WebSocket] Error:", error);
};
ws.current.onclose = () => {
console.log("[WebSocket] Disconnected");
};
// Cleanup on unmount
return () => {
if (ws.current) {
ws.current.close();
}
};
}, []);
return <div>Check the console for WebSocket messages</div>;
};
export default WebSocketViewer;

View File

@@ -1,6 +1,13 @@
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react-swc";
import path from "path";
import dotenv from "dotenv";
import { fileURLToPath } from "url";
dotenv.config({
path: path.resolve(path.dirname(fileURLToPath(import.meta.url)), "../.env"),
});
// https://vite.dev/config/
export default defineConfig({
plugins: [react()],
@@ -10,4 +17,24 @@ export default defineConfig({
assetsDir: "assets",
emptyOutDir: true,
},
server: {
proxy: {
"/lst/api": {
target: `http://localhost:${Number(
process.env.VITE_SERVER_PORT || 8080
)}`,
changeOrigin: true,
secure: false,
},
"/lst/ws": {
target: `ws://localhost:${Number(
process.env.VITE_SERVER_PORT || 8080
)}`, // Your Go WebSocket endpoint
ws: true,
changeOrigin: true,
secure: false,
},
},
},
});

package-lock.json generated
View File

@@ -1,12 +1,12 @@
{
"name": "logistics_support_tool",
"version": "0.0.1-alpha.5",
"version": "0.0.1-alpha.6",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "logistics_support_tool",
"version": "0.0.1-alpha.5",
"version": "0.0.1-alpha.6",
"license": "ISC",
"dependencies": {
"dotenv": "^17.2.0",

View File

@@ -1,6 +1,6 @@
{
"name": "logistics_support_tool",
"version": "0.0.1-alpha.5",
"version": "0.0.1-alpha.6",
"description": "This is the new logistics support tool",
"private": true,
"main": "index.js",

View File

@@ -26,10 +26,10 @@ if (Test-Path $envFile) {
Write-Host ".env file not found at $envFile"
}
if (-not $env:BUILD_NAME) {
Write-Warning "BUILD_NAME environment variable is not set. Please make sure you have entered the correct info in the env"
exit 1
}
function Get-PackageVersion {
param (
@@ -78,7 +78,7 @@ function Update-BuildNumber {
$name = $matches[2]
$newNumber = $number + 1
$newBuildNumber = "$($newNumber)-$($name)"
Set-Content -Path $buildNumberFile -Value $newBuildNumber
@@ -87,14 +87,17 @@ function Update-BuildNumber {
return $newBuildNumber
} else {
Write-Warning "BUILD_NUMBER file content '$current' is not in the expected 'number-name' format."
Set-Content -Path $buildNumberFile -Value "1-$($env:BUILD_NAME)"
return $null
}
}
Push-Location $rootDir/backend
Write-Host "Building the app"
go get
# swag init -o swagger -g main.go
go build -ldflags "-X main.version=$($version)-$($initialBuildValue)" -o lst_app.exe ./main.go
if ($LASTEXITCODE -ne 0) {
Write-Warning "app build failed!"
@@ -122,6 +125,22 @@ function Update-BuildNumber {
Write-Host "Building wrapper"
Push-Location $rootDir/LstWrapper
Write-Host "Changing the port to match the server port in the env file"
$port = $env:VITE_SERVER_PORT
if (-not $port) {
$port = "8080" # Default port if env var not set
}
$webConfigPath = "web.config"
$content = Get-Content -Path $webConfigPath -Raw
$newContent = $content -replace '(?<=Rewrite" url="http://localhost:)\d+(?=/\{R:1\}")', $port
$newContent | Set-Content -Path $webConfigPath -NoNewline
Write-Host "Updated web.config rewrite port to $port"
# remove the publish folder as we don't need it
if (-not (Test-Path "publish")) {
Write-Host "The publish folder is already deleted nothing else to do"
@@ -131,6 +150,15 @@ function Update-BuildNumber {
dotnet publish -c Release -o ./publish
$webConfigPath = "web.config"
$content = Get-Content -Path $webConfigPath -Raw
$newContent = $content -replace '(?<=Rewrite" url="http://localhost:)\d+(?=/\{R:1\}")', "8080"
$newContent | Set-Content -Path $webConfigPath -NoNewline
Write-Host "Updated web.config rewrite port back to 8080"
Pop-Location
Write-Host "Building Docs"

View File

@@ -82,6 +82,7 @@ $filesToCopy = @(
@{ Source = "package.json"; Destination = "package.json" },
@{ Source = "CHANGELOG.md"; Destination = "CHANGELOG.md" },
@{ Source = "README.md"; Destination = "README.md" },
@{ Source = ".env-example"; Destination = ".env-example" },
# scripts to be copied over
@{ Source = "scripts\tmp"; Destination = "tmp" }
@{ Source = "scripts\iisControls.ps1"; Destination = "scripts\iisControls.ps1" }

View File

@@ -13,3 +13,6 @@ docker push git.tuffraid.net/cowch/logistics_support_tool:latest
Write-Host "Pull the new images to our docker system"
docker compose -f ./docker-compose.yml up -d --force-recreate
# in case we get logged out: docker login git.tuffraid.net
# create a docker network so one is available: docker network create -d bridge my-bridge-network

View File

@@ -138,6 +138,7 @@ $plantFunness = {
Write-Host "Stopping iis application"
Stop-WebAppPool -Name LogisticsSupportTool #-ErrorAction Stop
Start-Sleep -Seconds 3
######################################################################################