I added Opencode to my Linux terminal, powered by Ollama, and it has made my Linux computer even more amazing. I just tell it what I want done and it does it. It knows all of my servers, services, applications, and scripts, and has access to all of my config and data files. So when I tell it that I have some files stuck in the ‘download’ directory inside my ‘movies’ directory, I don’t have to tell it which computer that directory is on or how to access it. I also don’t have to explain that files land in that directory via Radarr. So when I was having issues with my files not being properly moved from ‘download’ to ‘organized’, it could have just moved the files, which is what I was expecting… instead it looked at Radarr’s config file and suggested how I could fix the underlying problem. That was pretty incredible.
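For anyone curious about the plumbing, here’s a minimal sketch of the setup (the model name is an illustrative pick, and the exact provider configuration lives in Opencode’s config, so check its docs for the details):

```bash
# Sketch of a local-only setup: Ollama serves the model on this machine,
# and Opencode is pointed at it, so nothing leaves my network.
ollama pull qwen2.5-coder    # pull a local code-oriented model (illustrative choice)
ollama serve                 # serve the API locally (often started automatically)
cd ~/homelab                 # hypothetical directory holding my configs and scripts
opencode                     # launch Opencode against the local model
```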
Now, if Windows had done that using Copilot, I wouldn’t be thrilled, because it would mean that Microsoft has way too much knowledge about my personal network structure.
Adding agentic AI to the OS can be amazingly powerful, but it really should only be done with an LLM that you control.
Also, agentic AI is going to cause a LOT of problems. As great as my example above is, I later added several .docx files to my Opencode directory and asked it to convert the files to a format it can read (plain text) and then ingest the information from them. It did that, and then it wanted to delete the .docx files. I told it to leave the files alone and that I’d delete them later. A couple of minutes later it tried to delete those .docx files again (it was literally trying to run ‘rm **/*.docx’, and I really don’t like it using wildcards with the rm command). So again I told it not to, and then I told it that it is never to remove any .docx files without my explicit permission. It apologized profusely… and then immediately tried to run the rm command again.
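To show why that glob made me nervous, here’s a rough sketch of what I’d rather it do, in bash (whether ‘**’ actually recurses depends on the shell and the globstar setting, but the point stands either way):

```bash
# In bash, '**' only recurses into subdirectories when globstar is on;
# either way, a recursive glob handed to rm can match far more than intended.
shopt -s globstar

ls -l **/*.docx    # preview exactly what the glob matches first
rm -i **/*.docx    # -i prompts before each individual removal

# Or skip deletion entirely and quarantine instead (reversible):
mkdir -p ~/.docx-trash && mv **/*.docx ~/.docx-trash/
```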
It’s a handy tool, but if you get lazy and let your guard down it’s going to bite you.
Seems like a situation where more energy needs to be put into preventing screwups than would have been spent just doing the job in the first place. If I hired a human assistant who made such a blatant mistake, and they then turned around and did the thing I had just explicitly told them not to do, I would conclude that they are clearly not cut out for that line of work, and that continuing to employ them as an assistant would be foolish. But when an “AI” agent does this, it’s just “Oh, let’s keep trying. It just needs a better model. It just needs better prompts. You just need to watch out for its mistakes.” No thanks.
I mean, you sound like you’re happy using it, and I don’t want to tell you you’re wrong. From a technology-design viewpoint, it is a fascinating use case. I just think that for the majority of computer users this will only make computers even more frustrating and confusing than they already were. I actually understand how they work, and using this as anything but a stand-alone toy would frustrate the hell out of me.
Oh, I agree and that’s one reason why I think putting it into Windows is a huge mistake!
I am having fun with it because I find the tech interesting and I love seeing what I can get it to do… but it is so dumb and frustrating. Then again, so was 3D printing 12 years ago: you’d have to fiddle with the settings, do some test prints to make sure everything was set up right, deal with a warped bed, and every print was an experiment. It was shitty, but when you got a good print, that was the best feeling. That’s how I feel about LLMs: it mostly sucks, but when it works, it’s great.
I also support several open source LLM projects, because that is where I think the real innovation will come from, and the technology is only going to get better, just like 3D printers have.