I got my first taste of "full" folders with NT during the early days of DVD programming. The software would write everything into a single directory, where it created at least 3 files per source asset. We were working on a specialty DVD that had 100k assets. The software+NT would crash crash crash. The next year, when the project came through again, we were on a newer version of the software running Win2k and performance was much improved on the same hardware. I haven't had to do anything with a folder that full in years, but I'd assume it's less of a chore than in the NT days. Then again, it could have gotten better and then regressed as well. Really, I'm just happy I don't get anywhere close to that anymore, so I don't have to find out.
The spiciest file I've ever had to deal with was an 18TB text file with no carriage returns/line feeds (all on one line). It was a log generated by an older Nokia network appliance. I think I ended up 'head'ing the first 2MB into another file and opening that; then I could grok (not the AI) the format and go from there.
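If anyone needs to do the same, pulling a readable sample out of a file like that is just a bounded byte copy. A minimal Python sketch (file names are made up), roughly equivalent to 'head -c 2M huge.log > sample.log':

    # Copy the first 2 MB of a huge single-line log into a small file
    # that a normal editor can open.
    CHUNK = 2 * 1024 * 1024  # 2 MB

    with open("huge.log", "rb") as src, open("sample.log", "wb") as dst:
        dst.write(src.read(CHUNK))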
Oof, that sounds nasty. Did it turn out to be standard-ish formatting with a separator, where you break the line after every x separators? I really dislike having to parse a log like that before I can even read it.
From memory there was no dedicated event separator; it just went straight from the last character of one event to the first character of the next event's timestamp. I think there was an XML payload in the event somewhere too?
Fortunately I didn't have to edit the log in place, as we were ingesting it into Splunk, so I just wrote some parsing configuration and Splunk was able to munch on it without issue.
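For anyone curious, the event breaking boiled down to splitting the stream right before each timestamp. Here's a rough sketch of the same idea in Python rather than the actual Splunk config I used; the timestamp pattern, file path, and chunk size are just placeholders:

    import re

    # Events have no separator, so break just before each timestamp using a
    # zero-width lookahead (the timestamp stays attached to its own event).
    # The timestamp format here is a guess for illustration.
    EVENT_START = re.compile(rb"(?=\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2})")

    def iter_events(path, chunk_size=64 * 1024 * 1024):
        """Yield one event at a time without loading the whole file."""
        buf = b""
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                buf += chunk
                parts = EVENT_START.split(buf)
                # The last piece may be an incomplete event; keep it around
                # until the next chunk arrives.
                buf = parts.pop()
                for event in parts:
                    if event:
                        yield event
        if buf:
            yield buf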
True - I have made this mistake too: 'too many open files' warnings across different VMs, just from one VM listing that dir!