In my day-to-day work there are a number of long-running operations I must submit. Often I am interested in when these async operations will complete so I can verify the validity of their output, move on to the next step, or notify an upstream consumer that the process is finished.
Now this is a rather straightforward thing to do if all processes are running locally. One way around this kind of thing is to append the say command, or echo a bell, from the command line. Such as:
# Using macOS' built-in say command:
sleep 10000 && say "Done. What's next boss?"
# Or ring the terminal bell:
sleep 10000 && printf '\a'
Even with this ability locally, there are times when I need to step away from my computer and still have the itch to know when a long-running process is complete. It would be very useful to check the progress while I am away, without requiring access to the computer processing my request.
At this point the problem starts to look more general. General enough that we can start to imagine a standalone command that we can pipe program output into and forward to a remote viewer. It might even sound like we’re talking about a logging solution like Papertrail.
The big distinction with what I propose is that I do not want or need to know the status of every job being run everywhere, as Papertrail might provide. I am after a curated, known list of tasks that I know I’ve kicked off and am interested in tracking.
The Ideal Flow:
The ideal flow might start with kicking off a long running task…
some_long_running_task | progress_tracker "Task ABC"
The above would behave much like tee on *nix systems: it would show the output of our program while also copying that output to a secondary location.
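A minimal sketch of what progress_tracker could look like as a shell function. The collector endpoint here is a placeholder, not a real service; the local log file stands in for the remote copy:

```shell
#!/bin/sh
# progress_tracker: tee-like wrapper that echoes stdin to the terminal,
# keeps a local copy, and would forward each line to a remote collector.
# Usage: some_long_running_task | progress_tracker "Task ABC"
progress_tracker() {
  task_name="$1"
  log_file="/tmp/$(printf '%s' "$task_name" | tr ' ' '_').log"
  : > "$log_file"                          # start a fresh local copy
  while IFS= read -r line; do
    printf '%s\n' "$line"                  # pass the output through, like tee
    printf '%s\n' "$line" >> "$log_file"   # secondary copy of everything seen
    # Hypothetical forwarding step; the endpoint is a placeholder:
    # curl -s -X POST "https://example.com/api/jobs" \
    #   -d "task=$task_name" -d "line=$line"
  done
}
```

The real differences from plain tee are the task name, which lets the server group lines by job, and the forwarding call, which is where all the server-side work would hang.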
The secondary output would ideally feed an app where the status can be checked independent of being connected to the machine running the command.
Ideally a web or mobile application would be available for viewing the status of the job. From it I could see the most recent output, determine whether the task was still running, and tell what stage it was in based on that output.
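Whatever the transport, the viewer side reduces to "show the most recent output for a named task." As a local stand-in (a real viewer would request these lines from the collector server rather than a file on the same machine):

```shell
#!/bin/sh
# job_status: show the last few lines of output for a named task.
# Reads a local log file as a stand-in; the real app would fetch
# the same lines from the collector's API.
job_status() {
  tail -n 5 "/tmp/$(printf '%s' "$1" | tr ' ' '_').log"
}
```

Running job_status "Task ABC" while the task is still going gives exactly the "what stage is it in" answer described above.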
Now that we have a base use case, we need to consider the potential requirements for implementation. I can see three key pieces. First, we need a client application that acts as a collector for the input data the user will eventually see and use to determine the status of the job. Second, we need a server that collects the gathered input from the clients and serves the output back to our consumer mobile/web application. Lastly, we need the web or mobile application the end user interacts with to view their final job status.
The client application that collects the data would likely need the most consideration, as it would need to run on multiple platforms: Linux variants, macOS, and Windows. Not all implementation languages provide the same flexibility here, and requiring additional runtime dependencies might be detrimental to adoption.
Note: I’m leaving out Registration & Authentication from this description; but it is something that would need to be implemented and verified at every part of the flow.
In this post I’ve described an idea that I’ve had for some time. I see many parallels between this and existing logging solutions. My differentiator is that this is something much more personalized to what an individual might be interested in. While this may be interesting to build in the name of experimentation, it may prove more prudent to look at existing logging solutions, their APIs, and the potential to build a pared-down version of their offering that caters to the needs of individuals. While this would tie us to a third party for some time, it has the benefit of reducing issues related to scalability, availability, and reliability.