Large log files of BatchPatch stored in C drive
This topic has 5 replies, 3 voices, and was last updated 1 year, 11 months ago by doug.
August 9, 2022 at 1:02 pm #13494 | jomich18 (Participant)
Hi,
Is there a way to limit the log files being saved on the target C drive?
C:\Batchpatch\BatchpatchRemoteProcessoutput.logs
We have some target computers with 20-30GB of BatchPatch log files stored on their C drive. Shouldn’t these auto-delete when the deployment running on that target is finished?
August 9, 2022 at 1:32 pm #13495 | doug (Moderator)
The default working directory on target computers is C:\Program Files\BatchPatch.
The default location can be modified under ‘Tools > Settings > Remote Execution > Remote working directory’. It sounds like you have modified yours to be C:\Batchpatch, so that is what I will reference for the rest of this posting.
Each time you execute a “Remote command (logged output)” in BatchPatch, it creates a temporary file C:\Batchpatch\BatchPatchRemoteProcessOutputXXXXXXXXX.log where XXXXXXXXX is a random 9-digit number. When execution of that “Remote command (logged output)” is complete, that file is deleted.
You said that you have 20-30GB of files, which is itself a red flag. It indicates to me that you have attempted in the past (or perhaps currently) to execute a “Remote command (logged output)” that produces a massive output, perhaps due to an infinite loop in your command/script, or perhaps not. In any case, the first thing you need to do is examine your process to figure out what could be creating such a large output. In 99% of cases, when a BatchPatch user runs a “Remote command (logged output)” the output is tiny, because it’s just the output of a simple command like “IPCONFIG” or whatever. Second, BatchPatch does always delete these temporary files upon completion of the “Remote command (logged output)”, so if you are seeing these files there, it means that some kind of problem prevented them from being deleted. One possible cause is that BatchPatch was closed while the “Remote command (logged output)” was still running for a given row/host. Another is that the files are so excessively large that the deletion process itself is failing.
The bottom line is that if you are seeing very large files there, then you are running (or have in the past run) “Remote command (logged output)” commands/scripts that are producing massive output. You’ll need to address this by modifying whatever “Remote command (logged output)” commands/scripts you are running. In the meantime, you should also delete those large files, as they are temp files; the fact that they are still present indicates that a problem of some kind prevented them from being deleted.
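If it helps, something like this run on an affected target will list the leftover temp logs and then remove them (this assumes your modified C:\Batchpatch working directory, and that no “Remote command (logged output)” is currently running against that computer):

    REM list the leftover temp logs first to confirm what will be removed
    dir /s /b "C:\Batchpatch\BatchPatchRemoteProcessOutput*.log"
    REM then delete them
    del "C:\Batchpatch\BatchPatchRemoteProcessOutput*.log"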
August 9, 2022 at 1:57 pm #13496 | jomich18 (Participant)
Thanks Doug. I remember we pushed Windows 10 feature updates and Windows 11 to some computers with logged output so we could see where any errors happened.
This included copying the OS setup file to the targets to prevent errors while upgrading them from home. And this answers/verifies my finding that the log files are temporary and should auto-delete afterwards, and that if they don’t, these are the reasons:
One possible cause is that BatchPatch was closed while the “Remote command (logged output)” was still running for a given row/host. Another is that the files are so excessively large that the deletion process itself is failing.
Given that, we are going to manually delete the log files, since only a few computers are affected.
August 9, 2022 at 2:10 pm #13497 | doug (Moderator)
Sounds good. However, just to be clear, when you perform a BatchPatch Deployment operation with logged output, it creates the deployment temp files in a different folder with a different name. The default location for deployment logs is C:\Program Files\BatchPatch\deployment (this is defined under ‘Actions > Deploy > Target working directory’). When you select ‘Retrieve console output’ in a BatchPatch deployment, it creates files in that deployment working directory like this: BatchPatchDeploymentOutputXXXXXXXXX.log. So, since you mentioned BatchPatchRemoteProcessOutputXXXXXXXXX.log files specifically, and since a Windows feature upgrade would normally be performed with the BatchPatch “Deploy” feature, not with the “Remote command (logged output)” action, you should probably double-check for the cause of these files, because you have BatchPatchRemoteProcessOutputXXXXXXXXX.log files, not BatchPatchDeploymentOutputXXXXXXXXX.log files.
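If it’s helpful, checking both locations on a target will show which type you actually have (the first path assumes your modified C:\Batchpatch directory; the second is the default deployment directory):

    REM leftover Remote command (logged output) temp logs
    dir /s /b "C:\Batchpatch\BatchPatchRemoteProcessOutput*.log"
    REM leftover deployment console-output temp logs
    dir /s /b "C:\Program Files\BatchPatch\deployment\BatchPatchDeploymentOutput*.log"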
December 2, 2022 at 10:02 am #13960 | jessthemess (Participant)
We just had a similar issue with one of our domain controllers generating a 66GB BatchPatchRemoteProcessOutputXXXXXXXXX.log file, causing the system drive to run out of space. The last bit of work we know to have been done on this DC was installing a standalone patch as a deployment (which we did to all DCs in the environment).
During the deployment we were also running occasional remote commands (logged output) to set registry keys related to the update (REG ADD), verify that the keys were added properly (REG QUERY), and check that the update was applied properly after reboot (systeminfo | findstr /i /c:[KB#######]).
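For reference, the sequence looked roughly like this (the registry path, value name, and data below are placeholders rather than our actual keys, and the KB number is redacted as above):

    REM set a registry key related to the update (placeholder path/value)
    REG ADD "HKLM\SOFTWARE\ExampleVendor\PatchPrep" /v StagingApproved /t REG_DWORD /d 1 /f
    REM confirm the key was written correctly
    REG QUERY "HKLM\SOFTWARE\ExampleVendor\PatchPrep" /v StagingApproved
    REM after reboot, confirm the update shows as installed
    systeminfo | findstr /i /c:"KB#######"

None of these should produce more than a few lines of output, which is why the 66GB file is so baffling.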
We did the same procedure across 30+ DCs but only one (hopefully we don’t find out there’s another one waiting to fill up!) had this large log problem. I can’t see how any of those commands would have looped in any way to generate this giant log file.
This happens to us rarely but it’s not the first time this has been a problem. Any other information on what may have happened and how to prevent this?
In the meantime, I’m going to see if I can get that 66GB log file off a backup; maybe there’s something in it.
December 2, 2022 at 2:14 pm #13961 | doug (Moderator)
The same info that I posted previously still applies. When BatchPatch executes a remote command with logged output (or a user-defined ‘Get info’ command), it will log the output to a file (BatchPatchRemoteProcessOutputXXXXXXXXX.log) in the remote working directory on the target computer. A BatchPatch deployment with ‘Retrieve console output’ enabled will log to a file (BatchPatchDeploymentOutputXXXXXXXXX.log) in the BatchPatch deployment remote working directory. Each command logs to a unique file: the XXXXXXXXX is a random number appended to the file name, so each new command executed from BP gets a new random number, and therefore a new log file.
The content of logged output is limited to the output that a particular command produces. So for any normal/typical command, the output is tiny. The command runs on the target computer, the output of the command is sent to the .log file. When the command completes, BatchPatch reads the contents of the .log file and then deletes it.
If the .log file is excessively large, then it indicates that one of the commands you executed produced a massive output. I can’t imagine any command producing 66GB of output unless it contained an infinite loop, but perhaps there is some weird edge case where a problematic command without a loop still somehow creates 66GB of output for some other reason. I don’t know.
If I were in your shoes there are really just a few things I would do.
1. I would check the ‘modified’ timestamp on the file, and then compare that to my BatchPatch grid to see if I could figure out which command created it. Then I would examine that command and test running it so that I could watch in real-time what it does and what it outputs (there’s a quick example after this list).
2. If I can’t get a ‘modified’ stamp because the file has already been removed, I would do a general review of the BatchPatch grid and all the commands that were executed against that target computer, and then I’d see if I could reproduce it by re-executing those same commands/sequences in BatchPatch.
3. If I did have access to the file, I might try to read it to get a sense of what’s in it, which might help identify what caused it to get so large. Since it’s so big, it might be hard to read even with a program designed for large text files (such as the app “Large Text File Viewer”), so I probably would not try to open it in any type of app that attempts to display the whole file. It is probably better to explore with cygwin commands or something similar, so that you can read single lines of the file without having to open the entire thing in a text file viewer, which will likely be problematic. Or if you do any coding you could even write a little text file reader in C++ or C# that would scan the file and spit out a line here and there just to see what’s in it, similar to what you can do with cygwin commands to read the head and/or tail of the file (see the examples just below).
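For #1, something like this run from a command prompt shows the last-write times to compare against the grid (the path assumes the C:\Batchpatch working directory from earlier in the thread):

    REM newest files first, showing the last-written timestamp
    dir /T:W /O:-D "C:\Batchpatch\BatchPatchRemoteProcessOutput*.log"

And for #3, with cygwin (or similar) installed, head and tail will sample the start and end of the file without loading the whole thing into memory (the 123456789 in the file name below is just a made-up instance of the random number):

    head -n 20 /cygdrive/c/Batchpatch/BatchPatchRemoteProcessOutput123456789.log
    tail -n 20 /cygdrive/c/Batchpatch/BatchPatchRemoteProcessOutput123456789.log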