DevOps #619
BryanDollery started this conversation in Show and tell
Replies: 2 comments 3 replies
-
Bryan, that was a really neat way to use Fabric! I’ve gotta dig in and see how it’ll work on my machine, and maybe I can build out a version that serves my purpose as I look to integrate this into my podcast post-production process, where the variable is always the transcript but the output has a very similar structure to what I want to do with it.
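As a concrete sketch of that transcript-as-the-only-variable setup (assuming Fabric's default patterns directory under `~/.config/fabric/patterns/`; the `podcast_post` pattern name and its contents are invented for illustration, not something that ships with Fabric):

```shell
# Create a custom pattern whose structure stays fixed across episodes.
# The directory layout assumes Fabric's default patterns location;
# "podcast_post" is a hypothetical pattern name used for this sketch.
mkdir -p ~/.config/fabric/patterns/podcast_post

cat > ~/.config/fabric/patterns/podcast_post/system.md <<'EOF'
# IDENTITY
You are a podcast post-production assistant.

# STEPS
- Read the episode transcript supplied as input.
- Produce show notes, chapter markers, and a short social summary.

# OUTPUT
Use the same section headings every time so the output structure is stable.
EOF

# The transcript is the only thing that changes between runs, e.g.:
#   cat episode-42-transcript.txt | fabric --pattern podcast_post
```

The pattern (and therefore the output structure) stays identical from episode to episode; only the transcript piped in changes.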
Wondering if you’d be open to chat?
Junaid
…On Jun 18, 2024 at 5:45 AM -0400, Bryan ***@***.***>, wrote:
I've done something interesting (I think). I'm just playing around with it right now, but I'm a senior software engineer/enterprise-architect/systems-guy/programmer and I've been playing a lot on my new Raspberry Pi 5. It has an ARM processor and comes with aarch64, which I'm quite happy with (running sway for the win).
So, five seconds after becoming completely obsessed with Fabric, I started trying to use it to help me configure the Pi, and save me from endless web searches and reading through blogs. This is the outcome of tinkering for a week or so. There is an example output from my 'script' command in my repo generated from a previous iteration of the prompt.
I don't know if I'm over-engineering this, or if this approach is valuable to anyone else. Personally, it has saved me hours already.
2 replies
-
Thanks Bryan! The single process does make it a lot simpler, in a complex way, but we get to reap the benefits of the output. I'll start breaking down my process to see what I need and build out the SOP.
…On Jun 18, 2024, at 11:37 AM, Bryan ***@***.***> wrote:
Another approach you could try is doing your entire process in a single prompt, like I have with the script. Set up personas, processes, etc., and describe your entire workflow, then get the AI to perform it. I found that to make it work properly, you have to tell it explicitly to output the steps in the process and the conversations between team members. I used a manager persona to distribute the work and had to carefully break down the incremental nature of the process. Without that output, the AI bypasses the process and jumps straight to an answer, which is fine for really short, simple scripts, but doesn't work when things get more complicated and require more steps. It seems that being forced to log the process actually forces the process to be executed.
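A minimal sketch of what such a single-prompt pattern might look like (the persona names and section headings here are illustrative, not Bryan's actual prompt):

```markdown
# IDENTITY
You are a team of three personas: a Manager, an Engineer, and a Reviewer.

# PROCESS
- The Manager breaks the task into small increments and assigns each one.
- The Engineer completes exactly one increment at a time.
- The Reviewer checks each increment before the next one begins.

# OUTPUT
For every increment, print:
- The step number and which persona performed it
- The conversation between team members about that step
- The intermediate result

Do not skip ahead to a final answer; the log of steps is the deliverable.
```

The explicit OUTPUT section is the load-bearing part: requiring the model to emit the step log is what keeps it from collapsing the whole process into a single jump to the answer.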
1 reply