22 Commits

Author SHA1 Message Date
b6968b7b67 chore(release): v0.0.1-alpha.6 2025-07-30 19:39:25 -05:00
a0aa75c5a0 refactor(settings): changed config to settings and added the update method for it as well
strict fields on the updates so we can only change what we want here
2025-07-30 19:35:13 -05:00
78be07c8bb ci(hotreload): added in air for hot reloading 2025-07-30 19:31:02 -05:00
0575a34422 feat(settings): migrated all settings endpoints; confirmed updates work as well 2025-07-29 20:27:45 -05:00
3bc3801ffb refactor(config): changed to settings to match the other lst in node; makes it easier to manage 2025-07-29 20:13:35 -05:00
4368111311 fix(websocket): errors in saving client info during ping pong 2025-07-29 20:13:05 -05:00
daf9e8a966 perf(websocket): added in base url to help with ssl stuff and iis 2025-07-29 18:07:10 -05:00
8a08d3eac6 refactor(wrapper): removed the logger stuff so we dont fill up space 2025-07-29 18:06:27 -05:00
a761a3634b fix(wrapper): corrections to properly handle websockets :D 2025-07-29 15:58:21 -05:00
a1a30cffd1 refactor(ws): ws logging and channel manager added; no auth currently 2025-07-29 11:29:59 -05:00
6a631be909 docs(docker): docs about how the custom network for the db is separated 2025-07-25 12:14:51 -05:00
75c17d2065 test(iis): wrapper test for ws 2025-07-25 12:14:05 -05:00
63c053b38c docs(wss): more ws stuff 2025-07-25 12:13:47 -05:00
5bcbdaf3d0 feat(ws server): added in a websocket on port system to help with better logging 2025-07-25 12:13:19 -05:00
074032f20d refactor(app port): changed to have the port be dynamic on the iis side
docker will default to 8080 and can be adjusted via the docker compose, or by passing the same env var,
which will change it as well.
2025-07-23 07:36:18 -05:00
13e282e815 fix(update server): fixed to make sure everything is stopped before doing the remaining update 2025-07-22 20:00:43 -05:00
6c8ac33be7 refactor(createzip): added in env-example to the zip file 2025-07-22 19:59:55 -05:00
92ce51eb7c refactor(build): added back in the build name stuff 2025-07-22 19:59:29 -05:00
52ef39fd5c feat(logging): added in db and logging with websocket 2025-07-22 19:59:06 -05:00
623e19f028 refactor(docker compose example): added in postgres stuff plus network 2025-07-22 19:58:26 -05:00
14dd87e335 docs(.env example): added postgres example 2025-07-22 19:58:01 -05:00
52956ecaa4 docs(dockerbuild): comments as a reminder for myself 2025-07-22 07:03:29 -05:00
35 changed files with 2052 additions and 137 deletions


@@ -1,6 +1,9 @@
# Uncomment this to run in production
# APP_ENV=production
# Server port that will allow vite to talk to the backend.
VITE_SERVER_PORT=4000
# lstv2 loc
LSTV2="C:\drive\loc"
@@ -12,7 +15,17 @@ GITEA_USERNAME=username
GITEA_REPO=logistics_support_tool
GITEA_TOKEN=ad8eac91a01e3a1885a1dc10
# postgres db
DB_HOST=localhost
DB_PORT=5433
DB_USER=username
DB_PASSWORD=password
DB_NAME=lst # db must be created before you start the app
# dev locs
DEV_FOLDER=C:\drive\loc
ADMUSER=username
ADMPASSWORD=password
# Build number info
BUILD_NAME=leBlfRaj

.gitignore (vendored, 2 changes)

@@ -10,6 +10,7 @@ LstWrapper/obj
scripts/tmp
backend/docs
backend/frontend
testFolder
# ---> Go
# If you prefer the allow list template instead of the deny list, see community template:
@@ -191,4 +192,5 @@ backend/go.sum
BUILD_NUMBER
scripts/resetDanger.js
LstWrapper/Program_vite_as_Static.txt
LstWrapper/Program_proxy_backend.txt
scripts/stopPool.go


@@ -3,6 +3,50 @@
All notable changes to LST will be documented in this file.
## [0.0.1-alpha.6](https://git.tuffraid.net/cowch/logistics_support_tool/compare/v0.0.1-alpha.5...v0.0.1-alpha.6) (2025-07-31)
### 🌟 Enhancements
* **logging:** added in db and logging with websocket ([52ef39f](https://git.tuffraid.net/cowch/logistics_support_tool/commit/52ef39fd5c129ed02ed9f38dbf7e49ae06807ad6))
* **settings:** migrated all settings endpoints; confirmed updates work as well ([0575a34](https://git.tuffraid.net/cowch/logistics_support_tool/commit/0575a344229ba0ff5c0f47781c6d596e5c08e5eb))
* **ws server:** added in a websocket on port system to help with better logging ([5bcbdaf](https://git.tuffraid.net/cowch/logistics_support_tool/commit/5bcbdaf3d0e889729d4dce3df51f4330d7793868))
### 🐛 Bug fixes
* **update server:** fixed to make sure everything is stopped before doing the remaining update ([13e282e](https://git.tuffraid.net/cowch/logistics_support_tool/commit/13e282e815c1c95a0a5298ede2f6497cdf036440))
* **websocket:** errors in saving client info during ping pong ([4368111](https://git.tuffraid.net/cowch/logistics_support_tool/commit/4368111311c48e73a11a6b24febdcc3be31a2a59))
* **wrapper:** corrections to properly handle websockets :D ([a761a36](https://git.tuffraid.net/cowch/logistics_support_tool/commit/a761a3634b6cb0aeeb571dd634bd158cee530779))
### 📚 Documentation
* **.env example:** added postgres example ([14dd87e](https://git.tuffraid.net/cowch/logistics_support_tool/commit/14dd87e335a63d76d64c07a15cf593cb286a9833))
* **dockerbuild:** comments as a reminder for myself ([52956ec](https://git.tuffraid.net/cowch/logistics_support_tool/commit/52956ecaa45cd556ba7832d6cb9ec2cf883d983a))
* **docker:** docs about how the custom network for the db is separated ([6a631be](https://git.tuffraid.net/cowch/logistics_support_tool/commit/6a631be909b56a899af393510edffd70d7901a7a))
* **wss:** more ws stuff ([63c053b](https://git.tuffraid.net/cowch/logistics_support_tool/commit/63c053b38ce3ab3c3a94cda620da930f4e8615bd))
### 🛠️ Code Refactor
* **app port:** changed to have the port be dynamic on the iis side ([074032f](https://git.tuffraid.net/cowch/logistics_support_tool/commit/074032f20dc90810416c5899e44fefe86b52f98a))
* **build:** added back in the build name stuff ([92ce51e](https://git.tuffraid.net/cowch/logistics_support_tool/commit/92ce51eb7cf14ebb599c29fea4721e21badafbf6))
* **config:** changed to settings to match the other lst in node; makes it easier to manage ([3bc3801](https://git.tuffraid.net/cowch/logistics_support_tool/commit/3bc3801ffbb544a814d52c72e566e8d4866a7f38))
* **createzip:** added in env-example to the zip file ([6c8ac33](https://git.tuffraid.net/cowch/logistics_support_tool/commit/6c8ac33be73f203137b883e33feb625ccc0945e9))
* **docker compose example:** added in postgres stuff plus network ([623e19f](https://git.tuffraid.net/cowch/logistics_support_tool/commit/623e19f028d27fbfc46bee567ce78169cddba8fb))
* **settings:** changed config to settings and added in the update method for this as well ([a0aa75c](https://git.tuffraid.net/cowch/logistics_support_tool/commit/a0aa75c5a0b4a6e3a10b88bbcccf43d096e532b4))
* **wrapper:** removed the logger stuff so we dont fill up space ([8a08d3e](https://git.tuffraid.net/cowch/logistics_support_tool/commit/8a08d3eac6540b00ff23115936d56b4f22f16d53))
* **ws:** ws logging and channel manager added; no auth currently ([a1a30cf](https://git.tuffraid.net/cowch/logistics_support_tool/commit/a1a30cffd18e02e1061959fa3164f8237522880c))
### 🚀 Performance
* **websocket:** added in base url to help with ssl stuff and iis ([daf9e8a](https://git.tuffraid.net/cowch/logistics_support_tool/commit/daf9e8a966fd440723b1aec932a02873a5e27eb7))
### 📝 Testing Code
* **iis:** wrapper test for ws ([75c17d2](https://git.tuffraid.net/cowch/logistics_support_tool/commit/75c17d20659dcc5a762e00928709c4d3dd277284))
### 📈 Project changes
* **hotreload:** added in air for hot reloading ([78be07c](https://git.tuffraid.net/cowch/logistics_support_tool/commit/78be07c8bbf5acbcdac65351f693941f47be4cb5))
## [0.0.1-alpha.5](https://git.tuffraid.net/cowch/logistics_support_tool/compare/v0.0.1-alpha.4...v0.0.1-alpha.5) (2025-07-21)
### 🌟 Enhancements


@@ -1,65 +1,158 @@
using System;
using System.IO;
using System.Net;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient("GoBackend", client =>
{
    client.BaseAddress = new Uri("http://localhost:8080");
});
var app = builder.Build();

// Enable WebSocket support
app.UseWebSockets();

// Logging method
void LogToFile(string message)
{
    try
    {
        string logDir = Path.Combine(AppContext.BaseDirectory, "logs");
        Directory.CreateDirectory(logDir);
        string logFilePath = Path.Combine(logDir, "proxy_log.txt");
        File.AppendAllText(logFilePath, $"{DateTime.UtcNow}: {message}{Environment.NewLine}");
    }
    catch (Exception ex)
    {
        // Handle potential errors writing to log file
        Console.WriteLine($"Logging error: {ex.Message}");
    }
}

// Middleware to handle WebSocket requests
app.Use(async (context, next) =>
{
    if (context.WebSockets.IsWebSocketRequest && context.Request.Path.StartsWithSegments("/ws"))
    {
        // LogToFile($"WebSocket request received for path: {context.Request.Path}");
        try
        {
            var backendUri = new UriBuilder("ws", "localhost", 8080)
            {
                Path = context.Request.Path,
                Query = context.Request.QueryString.ToString()
            }.Uri;
            using var backendSocket = new ClientWebSocket();
            await backendSocket.ConnectAsync(backendUri, context.RequestAborted);
            using var frontendSocket = await context.WebSockets.AcceptWebSocketAsync();
            var cts = new CancellationTokenSource();
            // WebSocket forwarding tasks
            var forwardToBackend = ForwardWebSocketAsync(frontendSocket, backendSocket, cts.Token);
            var forwardToFrontend = ForwardWebSocketAsync(backendSocket, frontendSocket, cts.Token);
            await Task.WhenAny(forwardToBackend, forwardToFrontend);
            cts.Cancel();
        }
        catch (Exception ex)
        {
            // LogToFile($"WebSocket proxy error: {ex.Message}");
            context.Response.StatusCode = (int)HttpStatusCode.BadGateway;
            await context.Response.WriteAsync($"WebSocket proxy error: {ex.Message}");
        }
    }
    else
    {
        await next();
    }
});

// Middleware to handle HTTP requests
app.Use(async (context, next) =>
{
    if (context.WebSockets.IsWebSocketRequest)
    {
        await next();
        return;
    }
    var client = context.RequestServices.GetRequiredService<IHttpClientFactory>().CreateClient("GoBackend");
    try
    {
        var request = new HttpRequestMessage(new HttpMethod(context.Request.Method),
            context.Request.Path + context.Request.QueryString);
        // Copy headers
        foreach (var header in context.Request.Headers)
        {
            if (!request.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray()))
            {
                request.Content ??= new StreamContent(context.Request.Body);
                request.Content.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray());
            }
        }
        if (context.Request.ContentLength > 0 && request.Content == null)
        {
            request.Content = new StreamContent(context.Request.Body);
        }
        var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, context.RequestAborted);
        context.Response.StatusCode = (int)response.StatusCode;
        foreach (var header in response.Headers)
        {
            context.Response.Headers[header.Key] = header.Value.ToArray();
        }
        foreach (var header in response.Content.Headers)
        {
            context.Response.Headers[header.Key] = header.Value.ToArray();
        }
        context.Response.Headers.Remove("transfer-encoding");
        await response.Content.CopyToAsync(context.Response.Body);
    }
    catch (HttpRequestException ex)
    {
        LogToFile($"HTTP proxy error: {ex.Message}");
        context.Response.StatusCode = (int)HttpStatusCode.BadGateway;
        await context.Response.WriteAsync($"Backend request failed: {ex.Message}");
    }
});

async Task ForwardWebSocketAsync(WebSocket source, WebSocket destination, CancellationToken cancellationToken)
{
    var buffer = new byte[4 * 1024];
    try
    {
        while (source.State == WebSocketState.Open &&
               destination.State == WebSocketState.Open &&
               !cancellationToken.IsCancellationRequested)
        {
            var result = await source.ReceiveAsync(new ArraySegment<byte>(buffer), cancellationToken);
            if (result.MessageType == WebSocketMessageType.Close)
            {
                await destination.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, "Closing", cancellationToken);
                break;
            }
            await destination.SendAsync(new ArraySegment<byte>(buffer, 0, result.Count), result.MessageType, result.EndOfMessage, cancellationToken);
        }
    }
    catch (WebSocketException ex)
    {
        LogToFile($"WebSocket forwarding error: {ex.Message}");
        await destination.CloseOutputAsync(WebSocketCloseStatus.InternalServerError, "Error", cancellationToken);
    }
}

app.Run();


@@ -1,36 +1,24 @@
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <!-- Enable WebSockets -->
    <webSocket enabled="true" receiveBufferLimit="4194304" pingInterval="00:01:00" />
    <rewrite>
      <rules>
        <!-- Proxy all requests starting with /lst/ to the .NET wrapper (port 4000) -->
        <rule name="Proxy to Wrapper" stopProcessing="true">
          <match url="^lst/(.*)" />
          <conditions>
            <!-- Skip this rule if it's a WebSocket request -->
            <add input="{HTTP_UPGRADE}" pattern="^WebSocket$" negate="true" />
          </conditions>
          <action type="Rewrite" url="http://localhost:8080/{R:1}" />
        </rule>
      </rules>
    </rewrite>
    <staticContent>
      <clear />
      <mimeMap fileExtension=".js" mimeType="application/javascript" />
      <mimeMap fileExtension=".mjs" mimeType="application/javascript" />
      <mimeMap fileExtension=".css" mimeType="text/css" />
@@ -38,6 +26,8 @@
    </staticContent>
    <handlers>
      <!-- Let AspNetCoreModule handle all requests -->
      <remove name="WebSocketHandler" />
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
    </handlers>


@@ -10,3 +10,5 @@ this will also include a primary server to house all the common configs across a
The new LST will run in Docker by building your own image and deploying it, or by pulling the image down.
You will also be able to run it on Windows or Linux.
When developing in LST and you want hot reloading, install and configure https://github.com/air-verse/air

backend/.air.toml (new file)

@@ -0,0 +1,111 @@
package loggingx
import (
"encoding/json"
"fmt"
"os"
"strings"
"time"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"lst.net/utils/db"
)
type CustomLogger struct {
consoleLogger zerolog.Logger
}
// New creates a configured CustomLogger.
func New() *CustomLogger {
// Colorized console output
consoleWriter := zerolog.ConsoleWriter{
Out: os.Stderr,
TimeFormat: "2006-01-02 15:04:05",
}
return &CustomLogger{
consoleLogger: zerolog.New(consoleWriter).
With().
Timestamp().
Logger(),
}
}
func PrettyFormat(level, message string, metadata map[string]interface{}) string {
timestamp := time.Now().Format("2006-01-02 15:04:05")
base := fmt.Sprintf("[%s] %s | Message: %s", strings.ToUpper(level), timestamp, message)
if len(metadata) > 0 {
metaJSON, _ := json.Marshal(metadata)
return fmt.Sprintf("%s | Metadata: %s", base, string(metaJSON))
}
return base
}
func (l *CustomLogger) logToPostgres(level, message, service string, metadata map[string]interface{}) {
err := db.CreateLog(level, message, service, metadata)
if err != nil {
// Fallback to console if DB fails
log.Error().Err(err).Msg("Failed to write log to PostgreSQL")
}
}
// --- Level-Specific Methods ---
func (l *CustomLogger) Info(message, service string, fields map[string]interface{}) {
l.consoleLogger.Info().Fields(fields).Msg(message)
l.logToPostgres("info", message, service, fields)
PostLog(PrettyFormat("info", message, fields)) // Broadcast pretty message
}
func (l *CustomLogger) Warn(message, service string, fields map[string]interface{}) {
l.consoleLogger.Warn().Fields(fields).Msg(message)
l.logToPostgres("warn", message, service, fields)
PostLog(PrettyFormat("warn", message, fields)) // Broadcast pretty message
// Custom logic for warnings (e.g., alerting)
if len(fields) > 0 {
l.consoleLogger.Warn().Msg("Additional warning context captured")
}
}
func (l *CustomLogger) Error(message, service string, fields map[string]interface{}) {
l.consoleLogger.Error().Fields(fields).Msg(message)
l.logToPostgres("error", message, service, fields)
PostLog(PrettyFormat("error", message, fields)) // Broadcast pretty message
// Custom logic for errors (e.g., alerting)
if len(fields) > 0 {
l.consoleLogger.Warn().Msg("Additional error context captured")
}
}
func (l *CustomLogger) Panic(message, service string, fields map[string]interface{}) {
// Log to console (colored, with fields)
l.consoleLogger.Error().
Str("service", service).
Fields(fields).
Msg(message + " (PANIC)") // Explicitly mark as panic
// Log to PostgreSQL (sync to ensure it's saved before crashing)
err := db.CreateLog("panic", message, service, fields)
if err != nil {
l.consoleLogger.Error().Err(err).Msg("Failed to save panic log to PostgreSQL")
}
// Additional context (optional)
if len(fields) > 0 {
l.consoleLogger.Warn().Msg("Additional panic context captured")
}
panic(message)
}
func (l *CustomLogger) Debug(message, service string, fields map[string]interface{}) {
l.consoleLogger.Debug().Fields(fields).Msg(message)
l.logToPostgres("debug", message, service, fields)
}


@@ -0,0 +1,12 @@
package loggingx
import (
"github.com/gin-gonic/gin"
)
func RegisterLoggerRoutes(l *gin.Engine, baseUrl string) {
configGroup := l.Group(baseUrl + "/api/logger")
configGroup.GET("/logs", GetLogs)
}


@@ -0,0 +1,148 @@
package loggingx
import (
"fmt"
"log"
"net/http"
"sync"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
)
var (
logChannel = make(chan string, 1000) // Buffered channel for new logs
wsClients = make(map[*websocket.Conn]bool)
wsClientsMux sync.Mutex
)
var upgrader = websocket.Upgrader{
ReadBufferSize: 1024,
WriteBufferSize: 1024,
CheckOrigin: func(r *http.Request) bool {
//fmt.Println("Origin:", r.Header.Get("Origin"))
return true
},
}
// PostLog sends a new log to all connected SSE and WebSocket clients
func PostLog(message string) {
// Send to SSE channel
select {
case logChannel <- message:
log.Printf("Published to SSE: %s", message)
default:
log.Printf("DROPPED SSE message (channel full): %s", message)
}
wsClientsMux.Lock()
defer wsClientsMux.Unlock()
for client := range wsClients {
err := client.WriteMessage(websocket.TextMessage, []byte(message))
if err != nil {
client.Close()
delete(wsClients, client)
}
}
}
func GetLogs(c *gin.Context) {
// Check if it's a WebSocket request
if websocket.IsWebSocketUpgrade(c.Request) {
handleWebSocket(c)
return
}
// Otherwise, handle as SSE
handleSSE(c)
}
func handleSSE(c *gin.Context) {
log := New()
log.Info("SSE connection established", "logger", map[string]interface{}{
"endpoint": "/api/logger/logs",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
c.Header("Access-Control-Allow-Origin", "*")
c.Header("Access-Control-Allow-Credentials", "true")
c.Header("Access-Control-Allow-Headers", "Content-Type")
c.Header("Access-Control-Allow-Methods", "GET, OPTIONS")
// Handle preflight requests
if c.Request.Method == "OPTIONS" {
c.AbortWithStatus(204)
return
}
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
flusher, ok := c.Writer.(http.Flusher)
if !ok {
log.Info("SSE not supported", "logger", nil)
c.AbortWithStatus(http.StatusInternalServerError)
return
}
notify := c.Request.Context().Done() // CloseNotify is deprecated; the request context closes on disconnect
for {
select {
case <-notify:
log.Info("SSE client disconnected", "logger", nil)
return
case message := <-logChannel:
fmt.Fprintf(c.Writer, "data: %s\n\n", message)
flusher.Flush()
}
}
}
func handleWebSocket(c *gin.Context) {
log := New()
log.Info("WebSocket connection established", "logger", map[string]interface{}{
"endpoint": "/api/logger/logs",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
conn, err := upgrader.Upgrade(c.Writer, c.Request, nil)
if err != nil {
log.Error("WebSocket upgrade failed", "logger", map[string]interface{}{
"error": err.Error(),
})
return
}
// Register client
wsClientsMux.Lock()
wsClients[conn] = true
wsClientsMux.Unlock()
defer func() {
wsClientsMux.Lock()
delete(wsClients, conn)
wsClientsMux.Unlock()
conn.Close()
log.Info("WebSocket client disconnected", "logger", map[string]interface{}{})
}()
// Keep connection alive (or optionally echo, or wait for pings)
for {
// Can just read to keep the connection alive
if _, _, err := conn.NextReader(); err != nil {
break
}
}
}
// func sendRecentLogs(conn *websocket.Conn) {
// // Implement your logic to get recent logs from DB or buffer
// recentLogs := getLast20Logs()
// for _, log := range recentLogs {
// conn.WriteMessage(websocket.TextMessage, []byte(log))
// }
// }


@@ -0,0 +1 @@
package servers


@@ -0,0 +1,88 @@
package settings
import (
"encoding/json"
"github.com/gin-gonic/gin"
"lst.net/utils/db"
"lst.net/utils/inputs"
logging "lst.net/utils/logger"
)
func RegisterSettingsRoutes(l *gin.Engine, baseUrl string) {
// seed the db on start up
db.SeedConfigs(db.DB)
s := l.Group(baseUrl + "/api/v1")
s.GET("/settings", getSettings)
s.PATCH("/settings/:id", updateSettingById)
}
func getSettings(c *gin.Context) {
logger := logging.New()
configs, err := db.GetAllConfigs(db.DB)
logger.Info("Current Settings", "system", map[string]interface{}{
"endpoint": "/api/v1/settings",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
if err != nil {
logger.Error("Current Settings", "system", map[string]interface{}{
"endpoint": "/api/v1/settings",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
"error": err,
})
c.JSON(500, gin.H{"message": "There was an error getting the settings", "error": err.Error()})
return
}
c.JSON(200, gin.H{"message": "Current settings", "data": configs})
}
func updateSettingById(c *gin.Context) {
logger := logging.New()
settingID := c.Param("id")
if settingID == "" {
c.JSON(400, gin.H{"message": "Invalid data"})
logger.Error("Invalid data", "system", map[string]interface{}{
"endpoint": "/api/v1/settings",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
return
}
var setting inputs.SettingUpdateInput
//err := c.ShouldBindBodyWithJSON(&setting)
decoder := json.NewDecoder(c.Request.Body) // more strict and will force us to have correct data
decoder.DisallowUnknownFields()
if err := decoder.Decode(&setting); err != nil {
c.JSON(400, gin.H{"message": "Invalid request body", "error": err.Error()})
logger.Error("Invalid request body", "system", map[string]interface{}{
"endpoint": "/api/v1/settings",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
"error": err,
})
return
}
if err := db.UpdateConfig(db.DB, settingID, setting); err != nil {
c.JSON(500, gin.H{"message": "Failed to update setting", "error": err.Error()})
logger.Error("Failed to update setting", "system", map[string]interface{}{
"endpoint": "/api/v1/settings",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
"error": err,
})
return
}
c.JSON(200, gin.H{"message": "Setting was just updated", "data": setting})
}


@@ -0,0 +1,179 @@
package websocket
import (
"encoding/json"
"log"
"strings"
"sync"
logging "lst.net/utils/logger"
)
type Channel struct {
Name string
Clients map[*Client]bool
Register chan *Client
Unregister chan *Client
Broadcast chan []byte
lock sync.RWMutex
}
var (
channels = make(map[string]*Channel)
channelsMu sync.RWMutex
)
// InitializeChannels creates and returns all channels
func InitializeChannels() {
channelsMu.Lock()
defer channelsMu.Unlock()
channels["logServices"] = NewChannel("logServices")
channels["labels"] = NewChannel("labels")
// Add more channels here as needed
}
func NewChannel(name string) *Channel {
return &Channel{
Name: name,
Clients: make(map[*Client]bool),
Register: make(chan *Client),
Unregister: make(chan *Client),
Broadcast: make(chan []byte),
}
}
func GetChannel(name string) (*Channel, bool) {
channelsMu.RLock()
defer channelsMu.RUnlock()
ch, exists := channels[name]
return ch, exists
}
func GetAllChannels() map[string]*Channel {
channelsMu.RLock()
defer channelsMu.RUnlock()
chs := make(map[string]*Channel)
for k, v := range channels {
chs[k] = v
}
return chs
}
func StartAllChannels() {
channelsMu.RLock()
defer channelsMu.RUnlock()
for _, ch := range channels {
go ch.RunChannel()
}
}
func CleanupChannels() {
channelsMu.Lock()
defer channelsMu.Unlock()
for _, ch := range channels {
close(ch.Broadcast)
// Add any other cleanup needed
}
channels = make(map[string]*Channel)
}
func StartBroadcasting(broadcaster chan logging.Message, channels map[string]*Channel) {
logger := logging.New()
go func() {
for msg := range broadcaster {
switch msg.Channel {
case "logServices", "labels":
// Just forward the message - filtering happens in RunChannel()
messageBytes, err := json.Marshal(msg)
if err != nil {
logger.Error("Error marshaling message", "websocket", map[string]interface{}{
"errors": err,
})
continue
}
channels[msg.Channel].Broadcast <- messageBytes
default:
log.Printf("Received message for unknown channel: %s", msg.Channel)
}
}
}()
}
func contains(slice []string, item string) bool {
// Empty filter slice means "match all"
if len(slice) == 0 {
return true
}
// Case-insensitive comparison
item = strings.ToLower(item)
for _, s := range slice {
if strings.ToLower(s) == item {
return true
}
}
return false
}
// Updated Channel.RunChannel() for logServices filtering
func (ch *Channel) RunChannel() {
for {
select {
case client := <-ch.Register:
ch.lock.Lock()
ch.Clients[client] = true
ch.lock.Unlock()
case client := <-ch.Unregister:
ch.lock.Lock()
delete(ch.Clients, client)
ch.lock.Unlock()
case message := <-ch.Broadcast:
var msg logging.Message
if err := json.Unmarshal(message, &msg); err != nil {
continue
}
ch.lock.RLock()
for client := range ch.Clients {
// Special filtering for logServices
if ch.Name == "logServices" {
logLevel, _ := msg.Meta["level"].(string)
logService, _ := msg.Meta["service"].(string)
levelMatch := len(client.LogLevels) == 0 || contains(client.LogLevels, logLevel)
serviceMatch := len(client.Services) == 0 || contains(client.Services, logService)
if !levelMatch || !serviceMatch {
continue
}
}
select {
case client.Send <- message:
default:
ch.Unregister <- client
}
}
ch.lock.RUnlock()
}
}
}


@@ -0,0 +1,24 @@
package websocket
import logging "lst.net/utils/logger"
func LabelProcessor(broadcaster chan logging.Message) {
// Initialize any label-specific listeners
// This could listen to a different PG channel or process differently
// for {
// select {
// // Implementation depends on your label data source
// // Example:
// case labelEvent := <-someLabelChannel:
// broadcaster <- logging.Message{
// Channel: "labels",
// Data: labelEvent.Data,
// Meta: map[string]interface{}{
// "label": labelEvent.Label,
// "type": labelEvent.Type,
// },
// }
// }
// }
}


@@ -0,0 +1,80 @@
package websocket
// Set up the notifier
// -- Only needs to be run once in DB
// CREATE OR REPLACE FUNCTION notify_new_log() RETURNS trigger AS $$
// BEGIN
// PERFORM pg_notify('new_log', row_to_json(NEW)::text);
// RETURN NEW;
// END;
// $$ LANGUAGE plpgsql;
// DROP TRIGGER IF EXISTS new_log_trigger ON logs;
// CREATE TRIGGER new_log_trigger
// AFTER INSERT ON logs
// FOR EACH ROW EXECUTE FUNCTION notify_new_log();
import (
"encoding/json"
"fmt"
"os"
"time"
"github.com/lib/pq"
logging "lst.net/utils/logger"
)
func LogServices(broadcaster chan logging.Message) {
logger := logging.New()
logger.Info("[LogServices] started - single channel for all logs", "websocket", map[string]interface{}{})
dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s sslmode=disable",
os.Getenv("DB_HOST"),
os.Getenv("DB_PORT"),
os.Getenv("DB_USER"),
os.Getenv("DB_PASSWORD"),
os.Getenv("DB_NAME"),
)
listener := pq.NewListener(dsn, 10*time.Second, time.Minute, nil)
err := listener.Listen("new_log")
if err != nil {
logger.Panic("Failed to LISTEN on new_log", "logger", map[string]interface{}{
"error": err.Error(),
})
}
fmt.Println("Listening for all logs through single logServices channel...")
for {
select {
case notify := <-listener.Notify:
if notify != nil {
var logData map[string]interface{}
if err := json.Unmarshal([]byte(notify.Extra), &logData); err != nil {
logger.Error("Failed to unmarshal notification payload", "logger", map[string]interface{}{
"error": err.Error(),
})
continue
}
// Always send to logServices channel
broadcaster <- logging.Message{
Channel: "logServices",
Data: logData,
Meta: map[string]interface{}{
"level": logData["level"],
"service": logData["service"],
},
}
}
case <-time.After(90 * time.Second):
go func() {
if err := listener.Ping(); err != nil {
logger.Error("Listener ping failed", "logger", map[string]interface{}{
"error": err.Error(),
})
}
}()
}
}
}


@@ -0,0 +1,55 @@
package websocket
import (
"net/http"
"github.com/gin-gonic/gin"
logging "lst.net/utils/logger"
)
var (
broadcaster = make(chan logging.Message)
)
func RegisterSocketRoutes(r *gin.Engine, base_url string) {
// Initialize all channels
InitializeChannels()
// Start channel processors
StartAllChannels()
// Start background services
go LogServices(broadcaster)
go StartBroadcasting(broadcaster, channels)
// WebSocket route
r.GET(base_url+"/ws", func(c *gin.Context) {
SocketHandler(c, channels)
})
r.GET(base_url+"/ws/clients", AdminAuthMiddleware(), handleGetClients)
}
func handleGetClients(c *gin.Context) {
channel := c.Query("channel")
var clientList []*Client
if channel != "" {
clientList = GetClientsByChannel(channel)
} else {
clientList = GetAllClients()
}
c.JSON(http.StatusOK, gin.H{
"count": len(clientList),
"clients": clientList,
})
}
func AdminAuthMiddleware() gin.HandlerFunc {
return func(c *gin.Context) {
// Implement your admin authentication logic
// Example: Check API key or JWT token
c.Next()
}
}


@@ -0,0 +1,273 @@
package websocket
import (
"fmt"
"log"
"sync"
"sync/atomic"
"time"
"github.com/google/uuid"
"github.com/gorilla/websocket"
"lst.net/utils/db"
logging "lst.net/utils/logger"
)
var (
clients = make(map[*Client]bool)
clientsMu sync.RWMutex
)
type Client struct {
ClientID uuid.UUID `json:"client_id"`
Conn *websocket.Conn `json:"-"` // Excluded from JSON
APIKey string `json:"api_key"`
IPAddress string `json:"ip_address"`
UserAgent string `json:"user_agent"`
Send chan []byte `json:"-"` // Excluded from JSON
Channels map[string]bool `json:"channels"`
LogLevels []string `json:"levels,omitempty"`
Services []string `json:"services,omitempty"`
Labels []string `json:"labels,omitempty"`
ConnectedAt time.Time `json:"connected_at"`
done chan struct{} // For graceful shutdown
isAlive atomic.Bool
lastActive time.Time // Tracks last activity
}
func (c *Client) SaveToDB() {
// Convert c.Channels (map[string]bool) to map[string]interface{} for JSONB
channels := make(map[string]interface{})
for ch := range c.Channels {
channels[ch] = true
}
clientRecord := &db.ClientRecord{
APIKey: c.APIKey,
IPAddress: c.IPAddress,
UserAgent: c.UserAgent,
Channels: db.JSONB(channels),
ConnectedAt: time.Now(),
LastHeartbeat: time.Now(),
}
if err := db.DB.Create(clientRecord).Error; err != nil { // clientRecord is already a pointer
log.Println("❌ Error saving client:", err)
} else {
c.ClientID = clientRecord.ClientID
c.ConnectedAt = clientRecord.ConnectedAt
}
}
func (c *Client) MarkDisconnected() {
logger := logging.New()
clientData := fmt.Sprintf("Client %v just left us", c.ClientID)
logger.Info(clientData, "websocket", map[string]interface{}{})
now := time.Now()
res := db.DB.Model(&db.ClientRecord{}).
Where("client_id = ?", c.ClientID).
Updates(map[string]interface{}{
"disconnected_at": &now,
})
if res.RowsAffected == 0 {
logger.Info("⚠️ No rows updated for client_id", "websocket", map[string]interface{}{
"clientID": c.ClientID,
})
}
if res.Error != nil {
logger.Error("❌ Error updating disconnected_at", "websocket", map[string]interface{}{
"clientID": c.ClientID,
"error": res.Error,
})
}
}
func (c *Client) SafeClient() *Client {
return &Client{
ClientID: c.ClientID,
APIKey: c.APIKey,
IPAddress: c.IPAddress,
UserAgent: c.UserAgent,
Channels: c.Channels,
LogLevels: c.LogLevels,
Services: c.Services,
Labels: c.Labels,
ConnectedAt: c.ConnectedAt,
}
}
// GetAllClients returns safe representations of all clients
func GetAllClients() []*Client {
clientsMu.RLock()
defer clientsMu.RUnlock()
var clientList []*Client
for client := range clients {
clientList = append(clientList, client.SafeClient())
}
return clientList
}
// GetClientsByChannel returns clients in a specific channel
func GetClientsByChannel(channel string) []*Client {
clientsMu.RLock()
defer clientsMu.RUnlock()
var channelClients []*Client
for client := range clients {
if client.Channels[channel] {
channelClients = append(channelClients, client.SafeClient())
}
}
return channelClients
}
// heartbeat handling
const (
pingPeriod = 30 * time.Second
pongWait = 60 * time.Second
writeWait = 10 * time.Second
)
func (c *Client) StartHeartbeat() {
logger := logging.New()
log.Println("Started heartbeat")
ticker := time.NewTicker(pingPeriod)
defer ticker.Stop()
for {
select {
case <-ticker.C:
if !c.isAlive.Load() { // Correct way to read atomic.Bool
return
}
c.Conn.SetWriteDeadline(time.Now().Add(writeWait))
if err := c.Conn.WriteMessage(websocket.PingMessage, nil); err != nil {
log.Printf("Heartbeat failed for %s: %v", c.ClientID, err)
c.Close()
return
}
now := time.Now()
res := db.DB.Model(&db.ClientRecord{}).
Where("client_id = ?", c.ClientID).
Updates(map[string]interface{}{
"last_heartbeat": &now,
})
if res.RowsAffected == 0 {
logger.Info("⚠️ No rows updated for client_id", "websocket", map[string]interface{}{
"clientID": c.ClientID,
})
}
if res.Error != nil {
logger.Error("❌ Error updating last_heartbeat", "websocket", map[string]interface{}{
"clientID": c.ClientID,
"error": res.Error,
})
}
case <-c.done:
return
}
}
}
func (c *Client) Close() {
if c.isAlive.CompareAndSwap(true, false) { // Atomic swap
close(c.done)
c.Conn.Close()
// Add any other cleanup here
c.MarkDisconnected()
}
}
func (c *Client) startServerPings() {
ticker := time.NewTicker(60 * time.Second) // Ping every 60s
defer ticker.Stop()
for {
select {
case <-ticker.C:
c.Conn.SetWriteDeadline(time.Now().Add(10 * time.Second))
if err := c.Conn.WriteMessage(websocket.PingMessage, nil); err != nil {
c.Close() // Disconnect if ping fails
return
}
case <-c.done:
return
}
}
}
func (c *Client) markActive() {
c.lastActive = time.Now() // No mutex needed if single-writer
}
func (c *Client) IsActive() bool {
return time.Since(c.lastActive) < 45*time.Second // 1.5x ping interval
}
func (c *Client) updateHeartbeat() {
logger := logging.New()
// Verify the DB connection before using it
if db.DB == nil {
logger.Error("DB connection is nil", "websocket", map[string]interface{}{})
return
}
now := time.Now()
res := db.DB.Model(&db.ClientRecord{}).
Where("client_id = ?", c.ClientID).
Updates(map[string]interface{}{
"last_heartbeat": &now,
})
if res.RowsAffected == 0 {
logger.Info("⚠️ No rows updated for client_id", "websocket", map[string]interface{}{
"clientID": c.ClientID,
})
}
if res.Error != nil {
logger.Error("❌ Error updating last_heartbeat", "websocket", map[string]interface{}{
"clientID": c.ClientID,
"error": res.Error,
})
}
}
// work on this stats later
// Add to your admin endpoint
// type ConnectionStats struct {
// TotalConnections int `json:"total_connections"`
// ActiveConnections int `json:"active_connections"`
// AvgDuration string `json:"avg_duration"`
// }
// func GetConnectionStats() ConnectionStats {
// // Implement your metrics tracking
// }

View File

@@ -0,0 +1,224 @@
package websocket
import (
"encoding/json"
"log"
"net/http"
"time"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
)
type JoinPayload struct {
Channel string `json:"channel"`
APIKey string `json:"apiKey"`
Services []string `json:"services,omitempty"`
Levels []string `json:"levels,omitempty"`
Labels []string `json:"labels,omitempty"`
}
var upgrader = websocket.Upgrader{
CheckOrigin: func(r *http.Request) bool { return true }, // allow all origins; customize for prod
HandshakeTimeout: 15 * time.Second,
ReadBufferSize: 1024,
WriteBufferSize: 1024,
EnableCompression: true,
}
func SocketHandler(c *gin.Context, channels map[string]*Channel) {
// Upgrade HTTP to WebSocket
conn, err := upgrader.Upgrade(c.Writer, c.Request, nil)
if err != nil {
log.Println("WebSocket upgrade failed:", err)
return
}
//defer conn.Close()
// Create new client
client := &Client{
Conn: conn,
APIKey: "exampleAPIKey",
Send: make(chan []byte, 256), // Buffered channel
Channels: make(map[string]bool),
IPAddress: c.ClientIP(),
UserAgent: c.Request.UserAgent(),
done: make(chan struct{}),
}
client.isAlive.Store(true)
// Add to global clients map
clientsMu.Lock()
clients[client] = true
clientsMu.Unlock()
// Save initial connection to DB
client.SaveToDB()
// Save initial connection to DB
// if err := client.SaveToDB(); err != nil {
// log.Println("Failed to save client to DB:", err)
// conn.Close()
// return
// }
// Set handlers
conn.SetPingHandler(func(appData string) error {
// Overriding the handler disables gorilla's default pong reply, so send it explicitly
conn.SetWriteDeadline(time.Now().Add(writeWait))
return conn.WriteMessage(websocket.PongMessage, []byte(appData))
})
conn.SetPongHandler(func(string) error {
client.markActive() // Track last pong time
client.updateHeartbeat()
return nil
})
// Start server-side ping ticker
go client.startServerPings()
defer func() {
// Unregister from all channels
for channelName := range client.Channels {
if ch, exists := channels[channelName]; exists {
ch.Unregister <- client
}
}
// Remove from global clients map
clientsMu.Lock()
delete(clients, client)
clientsMu.Unlock()
// Mark disconnected in DB
client.MarkDisconnected()
// Close connection
conn.Close()
log.Printf("Client disconnected: %s", client.ClientID)
}()
// Send welcome message immediately
welcomeMsg := map[string]string{
"status": "connected",
"message": "Welcome to the WebSocket server. Send subscription request to begin.",
}
if err := conn.WriteJSON(welcomeMsg); err != nil {
log.Println("Failed to send welcome message:", err)
return
}
// Message handling goroutine
go func() {
defer func() {
// Cleanup on disconnect
for channelName := range client.Channels {
if ch, exists := channels[channelName]; exists {
ch.Unregister <- client
}
}
close(client.Send)
client.MarkDisconnected()
}()
for {
_, msg, err := conn.ReadMessage()
if err != nil {
if websocket.IsUnexpectedCloseError(err, websocket.CloseGoingAway) {
log.Printf("Client disconnected unexpectedly: %v", err)
}
break
}
var payload JoinPayload // reuse the declared subscription payload type
if err := json.Unmarshal(msg, &payload); err != nil {
conn.WriteJSON(map[string]string{"error": "invalid payload format"})
continue
}
// Validate API key (implement your own validateAPIKey function)
// if payload.APIKey == "" || !validateAPIKey(payload.APIKey) {
// conn.WriteJSON(map[string]string{"error": "invalid or missing API key"})
// continue
// }
if payload.APIKey == "" {
conn.WriteMessage(websocket.TextMessage, []byte("Missing API Key"))
continue
}
client.APIKey = payload.APIKey
// Handle channel subscription
switch payload.Channel {
case "logServices":
// Unregister from other channels if needed
if client.Channels["labels"] {
channels["labels"].Unregister <- client
delete(client.Channels, "labels")
}
// Update client filters
client.Services = payload.Services
client.LogLevels = payload.Levels
// Register to channel
channels["logServices"].Register <- client
client.Channels["logServices"] = true
conn.WriteJSON(map[string]string{
"status": "subscribed",
"channel": "logServices",
})
case "labels":
// Unregister from other channels if needed
if client.Channels["logServices"] {
channels["logServices"].Unregister <- client
delete(client.Channels, "logServices")
}
// Set label filters if provided
if payload.Labels != nil {
client.Labels = payload.Labels
}
// Register to channel
channels["labels"].Register <- client
client.Channels["labels"] = true
// Update DB record
client.SaveToDB()
// if err := client.SaveToDB(); err != nil {
// log.Println("Failed to update client labels:", err)
// }
conn.WriteJSON(map[string]interface{}{
"status": "subscribed",
"channel": "labels",
"filters": client.Labels,
})
default:
conn.WriteJSON(map[string]string{
"error": "invalid channel",
"available_channels": "logServices, labels",
})
}
}
}()
// Send messages to client
for message := range client.Send {
if err := conn.WriteMessage(websocket.TextMessage, message); err != nil {
log.Println("Write error:", err)
break
}
}
}

View File

@@ -3,35 +3,59 @@ module lst.net
go 1.24.3
require (
github.com/gin-contrib/cors v1.7.6
github.com/gin-gonic/gin v1.10.1
github.com/gorilla/websocket v1.5.3
github.com/joho/godotenv v1.5.1
github.com/rs/zerolog v1.34.0
github.com/swaggo/swag v1.16.6
gorm.io/driver/postgres v1.6.0
gorm.io/gorm v1.30.0
)
require (
github.com/bytedance/sonic v1.11.6 // indirect
github.com/bytedance/sonic/loader v0.1.1 // indirect
github.com/cloudwego/base64x v0.1.4 // indirect
github.com/cloudwego/iasm v0.2.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.3 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/KyleBanks/depth v1.2.1 // indirect
github.com/bytedance/sonic v1.13.3 // indirect
github.com/bytedance/sonic/loader v0.3.0 // indirect
github.com/cloudwego/base64x v0.1.5 // indirect
github.com/gabriel-vasile/mimetype v1.4.9 // indirect
github.com/gin-contrib/sse v1.1.0 // indirect
github.com/go-openapi/jsonpointer v0.21.1 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/spec v0.21.0 // indirect
github.com/go-openapi/swag v0.23.1 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.20.0 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/go-playground/validator/v10 v10.27.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/google/uuid v1.6.0
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/pgx/v5 v5.7.5 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.7 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/lib/pq v1.10.9
github.com/mailru/easyjson v0.9.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.12 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/crypto v0.23.0 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/sys v0.20.0 // indirect
golang.org/x/text v0.15.0 // indirect
google.golang.org/protobuf v1.34.1 // indirect
github.com/ugorji/go/codec v1.3.0 // indirect
golang.org/x/arch v0.19.0 // indirect
golang.org/x/crypto v0.40.0 // indirect
golang.org/x/mod v0.26.0 // indirect
golang.org/x/net v0.42.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/text v0.27.0 // indirect
golang.org/x/tools v0.35.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

View File

@@ -1,24 +1,67 @@
// @title My Awesome API
// @version 1.0
// @description This is a sample server for a pet store.
// @termsOfService http://swagger.io/terms/
// @contact.name API Support
// @contact.url http://www.swagger.io/support
// @contact.email support@swagger.io
// @license.name Apache 2.0
// @license.url http://www.apache.org/licenses/LICENSE-2.0.html
// @host localhost:8080
// @BasePath /api/v1
package main
import (
"errors"
"fmt"
"log"
"net/http"
"os"
"github.com/gin-contrib/cors"
"github.com/gin-gonic/gin"
"github.com/joho/godotenv"
"lst.net/cmd/services/system/settings"
"lst.net/cmd/services/websocket"
// _ "lst.net/docs"
"lst.net/utils/db"
logging "lst.net/utils/logger"
)
func main() {
log := logging.New()
// Load .env only in dev (not Docker/production)
if os.Getenv("RUNNING_IN_DOCKER") != "true" {
err := godotenv.Load("../.env")
if err != nil {
log.Println("Warning: .env file not found (ok in Docker/production)")
log.Info("Warning: .env file not found (ok in Docker/production)", "system", map[string]interface{}{})
}
}
// Initialize DB
if _, err := db.InitDB(); err != nil {
log.Panic("Database initialize failed", "db", map[string]interface{}{
"error": err.Error(),
"cause": errors.Unwrap(err),
"timeout": "30s",
"details": fmt.Sprintf("%+v", err), // Full stack trace if available
})
}
defer func() {
if r := recover(); r != nil {
sqlDB, _ := db.DB.DB()
sqlDB.Close()
log.Error("Recovered from panic during DB shutdown", "db", map[string]interface{}{
"panic": r,
})
}
}()
// Set basePath dynamically
basePath := "/"
@@ -34,6 +77,16 @@ func main() {
gin.SetMode(gin.ReleaseMode)
}
// Enable CORS (adjust origins as needed)
r.Use(cors.New(cors.Config{
AllowOrigins: []string{"*"}, // Allow all origins (change in production)
AllowMethods: []string{"GET", "OPTIONS", "POST", "DELETE", "PATCH", "CONNECT"},
AllowHeaders: []string{"Origin", "Cache-Control", "Content-Type"},
ExposeHeaders: []string{"Content-Length"},
AllowCredentials: true, // note: browsers reject credentials with a wildcard origin; pin origins before relying on this
AllowWebSockets: true,
}))
// // --- Add Redirects Here ---
// // Redirect root ("/") to "/app" or "/lst/app"
// r.GET("/", func(c *gin.Context) {
@@ -52,35 +105,26 @@ func main() {
r.StaticFS(basePath+"/app", http.Dir("frontend"))
r.GET(basePath+"/api/ping", func(c *gin.Context) {
log.Info("Checking if the server is up", "system", map[string]interface{}{
"endpoint": "/api/ping",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
c.JSON(200, gin.H{"message": "pong"})
})
//logging.RegisterLoggerRoutes(r, basePath)
websocket.RegisterSocketRoutes(r, basePath)
settings.RegisterSettingsRoutes(r, basePath)
r.Any(basePath+"/api", errorApiLoc)
// // Serve static assets for Vite app
// r.Static("/lst/app/assets", "./dist/app/assets")
// // Catch-all for Vite app routes
// r.NoRoute(func(c *gin.Context) {
// path := c.Request.URL.Path
// // Don't handle API, assets, or docs
// if strings.HasPrefix(path, "/lst/api") ||
// strings.HasPrefix(path, "/lst/app/assets") ||
// strings.HasPrefix(path, "/lst/docs") {
// c.JSON(404, gin.H{"error": "Not found"})
// return
// }
// // Serve index.html for all /lst/app routes
// if strings.HasPrefix(path, "/lst/app") {
// c.File("./dist/app/index.html")
// return
// }
// c.JSON(404, gin.H{"error": "Not found"})
// })
r.Run(":8080")
// get the server port
port := "8080"
if os.Getenv("VITE_SERVER_PORT") != "" {
port = os.Getenv("VITE_SERVER_PORT")
}
r.Run(":" + port)
}
// func serveViteApp(c *gin.Context) {
@@ -93,5 +137,11 @@ func main() {
// c.JSON(http.StatusBadRequest, gin.H{"message": "welcome to lst system you might have just encountered an incorrect area of the app"})
// }
func errorApiLoc(c *gin.Context) {
log := logging.New()
log.Error("API endpoint hit that does not exist", "system", map[string]interface{}{
"endpoint": "/api",
"client_ip": c.ClientIP(),
"user_agent": c.Request.UserAgent(),
})
c.JSON(http.StatusBadRequest, gin.H{"message": "looks like you have encountered an API route that does not exist"})
}

51
backend/utils/db/db.go Normal file
View File

@@ -0,0 +1,51 @@
package db
import (
"fmt"
"os"
"gorm.io/driver/postgres"
"gorm.io/gorm"
)
var DB *gorm.DB
type JSONB map[string]interface{}
type DBConfig struct {
DB *gorm.DB
DSN string
}
func InitDB() (*DBConfig, error) {
dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s",
os.Getenv("DB_HOST"),
os.Getenv("DB_PORT"),
os.Getenv("DB_USER"),
os.Getenv("DB_PASSWORD"),
os.Getenv("DB_NAME"))
var err error
DB, err = gorm.Open(postgres.Open(dsn), &gorm.Config{})
if err != nil {
return nil, fmt.Errorf("failed to connect to database: %v", err)
}
fmt.Println("✅ Connected to database")
// ensure the uuid-ossp extension exists so uuid_generate_v4() column defaults work
DB.Exec(`CREATE EXTENSION IF NOT EXISTS "uuid-ossp"`)
err = DB.AutoMigrate(&Log{}, &Settings{}, &ClientRecord{})
if err != nil {
return nil, fmt.Errorf("failed to auto-migrate models: %v", err)
}
fmt.Println("✅ Database migration completed successfully")
return &DBConfig{
DB: DB,
DSN: dsn,
}, nil
}

48
backend/utils/db/logs.go Normal file
View File

@@ -0,0 +1,48 @@
package db
import (
"time"
"github.com/google/uuid"
"gorm.io/gorm"
)
type Log struct {
LogID uuid.UUID `gorm:"type:uuid;default:uuid_generate_v4();primaryKey" json:"id"`
Level string `gorm:"size:10;not null"` // "info", "error", etc.
Message string `gorm:"not null"`
Service string `gorm:"size:50"`
Metadata JSONB `gorm:"type:jsonb"` // fields (e.g., {"user_id": 123})
CreatedAt time.Time `gorm:"index"`
Checked bool `gorm:"type:boolean;default:false"`
UpdatedAt time.Time
DeletedAt gorm.DeletedAt `gorm:"index"`
}
// JSONB is a helper type for PostgreSQL JSONB fields.
//type JSONB map[string]interface{}
// --- CRUD Operations ---
// CreateLog inserts a new log entry.
func CreateLog(level, message, service string, metadata JSONB) error {
log := Log{
Level: level,
Message: message,
Service: service,
Metadata: metadata,
}
return DB.Create(&log).Error
}
// GetLogsByLevel fetches logs filtered by severity.
func GetLogs(level string, limit int, service string) ([]Log, error) {
var logs []Log
err := DB.Where("level = ? and service = ?", level, service).Limit(limit).Find(&logs).Error
return logs, err
}
// DeleteOldLogs removes logs older than `days` and by level.
func DeleteOldLogs(days int, level string) error {
return DB.Where("created_at < ? and level = ?", time.Now().AddDate(0, 0, -days), level).Delete(&Log{}).Error
}

View File

@@ -0,0 +1,167 @@
package db
import (
"log"
"reflect"
"strings"
"time"
"github.com/google/uuid"
"gorm.io/gorm"
"lst.net/utils/inputs"
)
type Settings struct {
SettingID uuid.UUID `gorm:"type:uuid;default:uuid_generate_v4();primaryKey" json:"id"`
Name string `gorm:"uniqueIndex;not null"`
Description string `gorm:"type:text"`
Value string `gorm:"not null"`
Enabled bool `gorm:"default:true"`
AppService string `gorm:"default:system"`
CreatedAt time.Time `gorm:"index"`
UpdatedAt time.Time `gorm:"index"`
DeletedAt gorm.DeletedAt `gorm:"index"`
}
var seedConfigData = []Settings{
{Name: "serverPort", Description: "The port the server will listen on if not running in docker", Value: "4000", Enabled: true, AppService: "server"},
{Name: "server", Description: "The server we will use when connecting to the alplaprod sql", Value: "usmcd1vms006", Enabled: true, AppService: "server"},
{Name: "timezone", Value: "America/Chicago", Description: "What time zone the server is in; used for cron jobs and other time-based logic", AppService: "server", Enabled: true},
{Name: "dbUser", Value: "alplaprod", Description: "What is the db userName", AppService: "server", Enabled: true},
{Name: "dbPass", Value: "b2JlbGl4", Description: "What is the db password", AppService: "server", Enabled: true},
{Name: "tcpPort", Value: "2222", Description: "TCP port for printers to connect send data and the zedra cameras", AppService: "server", Enabled: true},
{Name: "prolinkCheck", Value: "1", Description: "Will prolink be considered to check if matches, mainly used in plants that do not fully utilize prolink + ocp", AppService: "production", Enabled: true},
{Name: "bookin", Value: "1", Description: "do we want to book in after a label is printed", AppService: "ocp", Enabled: true},
{Name: "dbServer", Value: "usmcd1vms036", Description: "What server is the prod db on?", AppService: "server", Enabled: true},
{Name: "printDelay", Value: "90", Description: "How long in seconds between prints", AppService: "ocp", Enabled: true},
{Name: "plantToken", Value: "test3", Description: "What is the plant token", AppService: "server", Enabled: true},
{Name: "dualPrinting", Value: "0", Description: "Does the plant have 2 machines that go to 1?", AppService: "ocp", Enabled: true},
{Name: "ocmeService", Value: "0", Description: "Is the ocme service enabled? This is generally only for Dayton.", AppService: "ocme", Enabled: true},
{Name: "fifoCheck", Value: "45", Description: "How far back (in days) we check for FIFO; default 45, 0 disables the check.", AppService: "ocme", Enabled: true},
{Name: "dayCheck", Value: "3", Description: "how many days +/- to check for shipments in alplaprod", AppService: "ocme", Enabled: true},
{Name: "maxLotPerTruck", Value: "3", Description: "How many lots can we have per truck?", AppService: "ocme", Enabled: true},
{Name: "monitorAddress", Value: "8", Description: "Which address is monitored to limit the number of lots that can be added to a truck.", AppService: "ocme", Enabled: true},
{Name: "ocmeCycleCount", Value: "1", Description: "Are we allowing ocme cycle counts?", AppService: "ocme", Enabled: true},
{Name: "devDir", Value: "", Description: "This is the dev dir and strictly only for updating the servers.", AppService: "server", Enabled: true},
{Name: "demandMGTActivated", Value: "0", Description: "Do we allow for new fake edi?", AppService: "logistics", Enabled: true},
{Name: "qualityRequest", Value: "0", Description: "quality request module?", AppService: "quality", Enabled: true},
{Name: "ocpLogsCheck", Value: "4", Description: "How long do we want to allow logs to show that have not been cleared?", AppService: "ocp", Enabled: true},
{Name: "inhouseDelivery", Value: "0", Description: "Are we doing auto inhouse delivery?", AppService: "ocp", Enabled: true},
// dyco settings
{Name: "dycoConnect", Value: "0", Description: "Are we running the dyco system?", AppService: "dyco", Enabled: true},
{Name: "dycoPrint", Value: "0", Description: "Are we using the dyco to get the labels or the rfid?", AppService: "dyco", Enabled: true},
{Name: "strapperCheck", Value: "1", Description: "Are we monitoring the strapper for faults?", AppService: "dyco", Enabled: true},
// ocp
{Name: "ocpActive", Value: `1`, Description: "Are we printing on demand?", AppService: "ocp", Enabled: true},
{Name: "ocpCycleDelay", Value: `10`, Description: "How long between printer cycles do we want to monitor.", AppService: "ocp", Enabled: true},
{Name: "pNgAddress", Value: `139`, Description: "What is the address for p&g so we can make sure we have the correct fake edi forecast going in.", AppService: "logistics", Enabled: true},
{Name: "scannerID", Value: `500`, Description: "What scanner id will we be using for the app", AppService: "logistics", Enabled: true},
{Name: "scannerPort", Value: `50002`, Description: "What port instance will we be using?", AppService: "logistics", Enabled: true},
{Name: "stagingReturnLocations", Value: `30125,31523`, Description: "What are the staging location IDs we will use to select from, separated by commas", AppService: "logistics", Enabled: true},
}
func SeedConfigs(db *gorm.DB) error {
for _, cfg := range seedConfigData {
var existing Settings
// Try to find config by unique Name
result := db.Where("name = ?", cfg.Name).First(&existing)
if result.Error != nil {
if result.Error == gorm.ErrRecordNotFound {
// not found, create it
if err := db.Create(&cfg).Error; err != nil {
log.Printf("Failed to seed config %s: %v", cfg.Name, err)
}
//log.Printf("Seeded new config: %s", cfg.Name)
} else {
// Some other error
return result.Error
}
} else {
// only update the fields we want to update.
existing.Description = cfg.Description
if err := db.Save(&existing).Error; err != nil {
log.Printf("Failed to update config %s: %v", cfg.Name, err)
return err
}
//log.Printf("Updated existing config: %s", cfg.Name)
}
}
return nil
}
func GetAllConfigs(db *gorm.DB) ([]map[string]interface{}, error) {
var settings []Settings
result := db.Find(&settings)
if result.Error != nil {
return nil, result.Error
}
// Function to convert struct to map with lowercase keys
toLowercase := func(s Settings) map[string]interface{} {
t := reflect.TypeOf(s)
v := reflect.ValueOf(s)
data := make(map[string]interface{})
for i := 0; i < t.NumField(); i++ {
field := strings.ToLower(t.Field(i).Name)
data[field] = v.Field(i).Interface()
}
return data
}
// Convert each struct in settings slice to a map with lowercase keys
var lowercaseSettings []map[string]interface{}
for _, setting := range settings {
lowercaseSettings = append(lowercaseSettings, toLowercase(setting))
}
return lowercaseSettings, nil
}
func UpdateConfig(db *gorm.DB, id string, input inputs.SettingUpdateInput) error {
var cfg Settings
if err := db.Where("setting_id = ?", id).First(&cfg).Error; err != nil {
return err
}
updates := map[string]interface{}{}
if input.Description != nil {
updates["description"] = *input.Description
}
if input.Value != nil {
updates["value"] = *input.Value
}
if input.Enabled != nil {
updates["enabled"] = *input.Enabled
}
if input.AppService != nil {
updates["app_service"] = *input.AppService
}
if len(updates) == 0 {
return nil // nothing to update
}
return db.Model(&cfg).Updates(updates).Error
}
func DeleteConfig(db *gorm.DB, id uint) error {
// Soft delete by ID
return db.Delete(&Settings{}, id).Error
}
func RestoreConfig(db *gorm.DB, id uint) error {
var cfg Settings
if err := db.Unscoped().First(&cfg, id).Error; err != nil {
return err
}
cfg.DeletedAt = gorm.DeletedAt{}
return db.Unscoped().Save(&cfg).Error
}

View File

@@ -0,0 +1,20 @@
package db
import (
"time"
"github.com/google/uuid"
)
type ClientRecord struct {
ClientID uuid.UUID `gorm:"type:uuid;default:uuid_generate_v4();primaryKey" json:"id"`
APIKey string `gorm:"not null"`
IPAddress string `gorm:"not null"`
UserAgent string `gorm:"size:255"`
ConnectedAt time.Time `gorm:"index"`
LastHeartbeat time.Time `gorm:"column:last_heartbeat"`
Channels JSONB `gorm:"type:jsonb"`
CreatedAt time.Time
UpdatedAt time.Time
DisconnectedAt *time.Time `gorm:"column:disconnected_at"`
}

View File

@@ -0,0 +1,8 @@
package inputs
type SettingUpdateInput struct {
Description *string `json:"description"`
Value *string `json:"value"`
Enabled *bool `json:"enabled"`
AppService *string `json:"app_service"`
}

View File

@@ -0,0 +1,117 @@
package logging
import (
"encoding/json"
"fmt"
"os"
"strings"
"time"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"lst.net/utils/db"
)
type CustomLogger struct {
consoleLogger zerolog.Logger
}
type Message struct {
Channel string `json:"channel"`
Data map[string]interface{} `json:"data"`
Meta map[string]interface{} `json:"meta,omitempty"`
}
// New creates a configured CustomLogger.
func New() *CustomLogger {
// Colorized console output
consoleWriter := zerolog.ConsoleWriter{
Out: os.Stderr,
TimeFormat: "2006-01-02 15:04:05",
}
return &CustomLogger{
consoleLogger: zerolog.New(consoleWriter).
With().
Timestamp().
Logger(),
}
}
func PrettyFormat(level, message string, metadata map[string]interface{}) string {
timestamp := time.Now().Format("2006-01-02 15:04:05")
base := fmt.Sprintf("[%s] %s | Message: %s", strings.ToUpper(level), timestamp, message)
if len(metadata) > 0 {
metaJSON, _ := json.Marshal(metadata)
return fmt.Sprintf("%s | Metadata: %s", base, string(metaJSON))
}
return base
}
func (l *CustomLogger) logToPostgres(level, message, service string, metadata map[string]interface{}) {
err := db.CreateLog(level, message, service, metadata)
if err != nil {
// Fallback to console if DB fails
log.Error().Err(err).Msg("Failed to write log to PostgreSQL")
}
}
// --- Level-Specific Methods ---
func (l *CustomLogger) Info(message, service string, fields map[string]interface{}) {
l.consoleLogger.Info().Fields(fields).Msg(message)
l.logToPostgres("info", message, service, fields)
//PostLog(PrettyFormat("info", message, fields)) // Broadcast pretty message
}
func (l *CustomLogger) Warn(message, service string, fields map[string]interface{}) {
l.consoleLogger.Warn().Fields(fields).Msg(message)
l.logToPostgres("warn", message, service, fields)
//PostLog(PrettyFormat("warn", message, fields)) // Broadcast pretty message
}
func (l *CustomLogger) Error(message, service string, fields map[string]interface{}) {
l.consoleLogger.Error().Fields(fields).Msg(message)
l.logToPostgres("error", message, service, fields)
//PostLog(PrettyFormat("error", message, fields)) // Broadcast pretty message
// Custom logic for errors (e.g., alerting)
if len(fields) > 0 {
l.consoleLogger.Warn().Msg("Additional error context captured")
}
}
func (l *CustomLogger) Panic(message, service string, fields map[string]interface{}) {
// Log to console (colored, with fields)
l.consoleLogger.Error().
Str("service", service).
Fields(fields).
Msg(message + " (PANIC)") // Explicitly mark as panic
// Log to PostgreSQL (sync to ensure it's saved before crashing)
err := db.CreateLog("panic", message, service, fields) // isCritical=true
if err != nil {
l.consoleLogger.Error().Err(err).Msg("Failed to save panic log to PostgreSQL")
}
// Additional context (optional)
if len(fields) > 0 {
l.consoleLogger.Warn().Msg("Additional panic context captured")
}
panic(message)
}
func (l *CustomLogger) Debug(message, service string, fields map[string]interface{}) {
l.consoleLogger.Debug().Fields(fields).Msg(message)
l.logToPostgres("debug", message, service, fields)
}

View File

@@ -7,9 +7,20 @@ services:
no_cache: true
image: git.tuffraid.net/cowch/logistics_support_tool:latest
container_name: lst_backend
networks:
- docker-network
environment:
DB_HOST: postgres
DB_PORT: 5432
DB_USER: username
DB_PASSWORD: password
DB_NAME: lst
volumes:
- /path/to/backend/data:/data
ports:
- "8080:8080"
restart: unless-stopped
pull_policy: never
networks:
docker-network:
external: true

View File

@@ -1,10 +1,11 @@
import { useState } from 'react'
import reactLogo from './assets/react.svg'
import viteLogo from '/vite.svg'
import './App.css'
import { useState } from "react";
import reactLogo from "./assets/react.svg";
import viteLogo from "/vite.svg";
import "./App.css";
import WebSocketViewer from "./WebSocketTest";
function App() {
const [count, setCount] = useState(0)
const [count, setCount] = useState(0);
return (
<>
@@ -13,7 +14,11 @@ function App() {
<img src={viteLogo} className="logo" alt="Vite logo" />
</a>
<a href="https://react.dev" target="_blank">
<img src={reactLogo} className="logo react" alt="React logo" />
<img
src={reactLogo}
className="logo react"
alt="React logo"
/>
</a>
</div>
<h1>Vite + React</h1>
@@ -28,8 +33,9 @@ function App() {
<p className="read-the-docs">
Click on the Vite and React logos to learn more
</p>
<WebSocketViewer />
</>
)
);
}
export default App
export default App;

View File

@@ -0,0 +1,41 @@
import { useEffect, useRef } from "react";

const WebSocketViewer = () => {
  const ws = useRef<WebSocket | null>(null);

  useEffect(() => {
    // Connect to the Go backend WebSocket endpoint
    ws.current = new WebSocket(
      (window.location.protocol === "https:" ? "wss://" : "ws://") +
        window.location.host +
        "/lst/api/logger/logs"
    );

    ws.current.onopen = () => {
      console.log("[WebSocket] Connected");
    };

    ws.current.onmessage = (event: MessageEvent) => {
      console.log("[WebSocket] Message received:", event.data);
    };

    ws.current.onerror = (error: Event) => {
      console.error("[WebSocket] Error:", error);
    };

    ws.current.onclose = () => {
      console.log("[WebSocket] Disconnected");
    };

    // Cleanup on unmount
    return () => {
      ws.current?.close();
    };
  }, []);

  return <div>Check the console for WebSocket messages</div>;
};

export default WebSocketViewer;
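The component picks `wss://` or `ws://` based on the protocol the page was served over, so the socket works both behind the IIS HTTPS wrapper and on plain-HTTP dev hosts. The same scheme selection, sketched as a small Go helper for illustration (the `/lst/api/logger/logs` path is the endpoint from the component above):

```go
package main

import "fmt"

// wsURL mirrors the scheme selection in WebSocketViewer: wss:// when the
// page was served over HTTPS, plain ws:// otherwise.
func wsURL(protocol, host, path string) string {
	scheme := "ws://"
	if protocol == "https:" {
		scheme = "wss://"
	}
	return scheme + host + path
}

func main() {
	fmt.Println(wsURL("https:", "example.com", "/lst/api/logger/logs"))
	// prints: wss://example.com/lst/api/logger/logs
}
```

Deriving the host from `window.location` (rather than hard-coding it) is also what lets the same build run behind IIS and in Docker without a config change.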

package-lock.json generated
View File

@@ -1,12 +1,12 @@
{
"name": "logistics_support_tool",
"version": "0.0.1-alpha.5",
"version": "0.0.1-alpha.6",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "logistics_support_tool",
"version": "0.0.1-alpha.5",
"version": "0.0.1-alpha.6",
"license": "ISC",
"dependencies": {
"dotenv": "^17.2.0",

View File

@@ -1,6 +1,6 @@
{
"name": "logistics_support_tool",
"version": "0.0.1-alpha.5",
"version": "0.0.1-alpha.6",
"description": "This is the new logistics support tool",
"private": true,
"main": "index.js",

View File

@@ -26,10 +26,10 @@ if (Test-Path $envFile) {
Write-Host ".env file not found at $envFile"
}
# if (-not $env:BUILD_NAME) {
# Write-Warning "BUILD_NAME environment variable is not set. Please make sure you have entered the correct info the env"
# exit 1
# }
if (-not $env:BUILD_NAME) {
    Write-Warning "BUILD_NAME environment variable is not set. Please make sure you have entered the correct info in the .env file."
    exit 1
}
function Get-PackageVersion {
param (
@@ -78,7 +78,7 @@ function Update-BuildNumber {
$name = $matches[2]
$newNumber = $number + 1
$newBuildNumber = "$newNumber"
$newBuildNumber = "$($newNumber)-$($name)"
Set-Content -Path $buildNumberFile -Value $newBuildNumber
@@ -87,14 +87,17 @@ function Update-BuildNumber {
return $newBuildNumber
} else {
Write-Warning "BUILD_NUMBER file content '$current' is not in the expected 'number-name' format."
Set-Content -Path $buildNumberFile -Value "1-$($env:BUILD_NAME)"
return $null
}
}
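`Update-BuildNumber` expects the BUILD_NUMBER file to hold a `number-name` string (e.g. `12-alpha`), bumps the number, and keeps the name; anything else resets the file to `1-` plus the build name. The parse-and-increment step can be sketched in Go (an illustration of the same logic, not part of the build script):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// buildRe matches the "number-name" format used in the BUILD_NUMBER file.
var buildRe = regexp.MustCompile(`^(\d+)-(.*)$`)

// bumpBuild increments the numeric part of a "number-name" build string
// (e.g. "12-alpha" -> "13-alpha"); ok is false when the format is wrong,
// which is the case where the script resets the file.
func bumpBuild(current string) (next string, ok bool) {
	m := buildRe.FindStringSubmatch(current)
	if m == nil {
		return "", false
	}
	n, _ := strconv.Atoi(m[1])
	return fmt.Sprintf("%d-%s", n+1, m[2]), true
}

func main() {
	next, ok := bumpBuild("12-alpha")
	fmt.Println(next, ok) // prints: 13-alpha true
}
```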
Push-Location $rootDir/backend
Write-Host "Building the app"
go get
# swag init -o swagger -g main.go
go build -ldflags "-X main.version=$($version)-$($initialBuildValue)" -o lst_app.exe ./main.go
if ($LASTEXITCODE -ne 0) {
Write-Warning "app build failed!"
@@ -122,6 +125,22 @@ function Update-BuildNumber {
Write-Host "Building wrapper"
Push-Location $rootDir/LstWrapper
Write-Host "Changing the port to match the server port in the env file"
$port = $env:VITE_SERVER_PORT
if (-not $port) {
$port = "8080" # Default port if env var not set
}
$webConfigPath = "web.config"
$content = Get-Content -Path $webConfigPath -Raw
$newContent = $content -replace '(?<=Rewrite" url="http://localhost:)\d+(?=/\{R:1\}")', $port
$newContent | Set-Content -Path $webConfigPath -NoNewline
Write-Host "Updated web.config rewrite port to $port"
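The PowerShell replace above relies on .NET lookarounds (`(?<=…)` / `(?=…)`) to swap only the port digits inside the rewrite URL. For comparison, Go's RE2 engine has no lookarounds, so the equivalent rewrite (sketched here purely as an illustration of the same substitution) uses capture groups instead:

```go
package main

import (
	"fmt"
	"regexp"
)

// portRe captures the text around the port so it can be re-emitted,
// since RE2 does not support the lookarounds used in the PowerShell script.
var portRe = regexp.MustCompile(`(Rewrite" url="http://localhost:)\d+(/\{R:1\}")`)

// rewritePort replaces the localhost port in an IIS rewrite rule.
func rewritePort(webConfig, port string) string {
	return portRe.ReplaceAllString(webConfig, "${1}"+port+"${2}")
}

func main() {
	in := `<action type="Rewrite" url="http://localhost:8080/{R:1}" />`
	fmt.Println(rewritePort(in, "9090"))
	// prints: <action type="Rewrite" url="http://localhost:9090/{R:1}" />
}
```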
# Remove the publish folder as we don't need it
if (-not (Test-Path "publish")) {
Write-Host "The publish folder is already deleted nothing else to do"
@@ -131,6 +150,15 @@ function Update-BuildNumber {
dotnet publish -c Release -o ./publish
$webConfigPath = "web.config"
$content = Get-Content -Path $webConfigPath -Raw
$newContent = $content -replace '(?<=Rewrite" url="http://localhost:)\d+(?=/\{R:1\}")', "8080"
$newContent | Set-Content -Path $webConfigPath -NoNewline
Write-Host "Updated web.config rewrite port back to 8080"
Pop-Location
Write-Host "Building Docs"

View File

@@ -82,6 +82,7 @@ $filesToCopy = @(
@{ Source = "package.json"; Destination = "package.json" },
@{ Source = "CHANGELOG.md"; Destination = "CHANGELOG.md" },
@{ Source = "README.md"; Destination = "README.md" },
@{ Source = ".env-example"; Destination = ".env-example" },
# scripts to be copied over
@{ Source = "scripts\tmp"; Destination = "tmp" }
@{ Source = "scripts\iisControls.ps1"; Destination = "scripts\iisControls.ps1" }

View File

@@ -13,3 +13,6 @@ docker push git.tuffraid.net/cowch/logistics_support_tool:latest
Write-Host "Pull the new images to our docker system"
docker compose -f ./docker-compose.yml up -d --force-recreate
# in case we get logged out docker login git.tuffraid.net
# Create the shared docker network if it does not already exist: docker network create -d bridge my-bridge-network

View File

@@ -138,6 +138,7 @@ $plantFunness = {
Write-Host "Stopping iis application"
Stop-WebAppPool -Name LogisticsSupportTool #-ErrorAction Stop
Start-Sleep -Seconds 3
######################################################################################