• 👍Maximum Derek👍@discuss.tchncs.de · 1 year ago

    In my experience, if your logs are growing that fast for a reason, you’ll get to see it again… and again… and again. And you’ll show it to people going, “WTF, have you ever seen anything like this before?”

    • seahorse [Ohio]@midwest.social (OP) · 1 year ago

      In my case Docker didn’t set a default max size at which the logs would stop, so they just grew and grew without bound. I also had the most verbose log level turned on to debug something, so it was constantly logging a mass of data.
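      For anyone hitting the same thing: the json-file log driver accepts rotation options. A minimal sketch of a daemon-wide cap, assuming /etc/docker/daemon.json (the sizes are example values, not recommendations):

          {
            "log-driver": "json-file",
            "log-opts": {
              "max-size": "10m",
              "max-file": "3"
            }
          }

      The same options can be set per container with --log-opt max-size=10m at docker run time.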

    • FlaminGoku@reddthat.com · 1 year ago

      You’ll also have management breathing down your neck about the storage costs if all that logging isn’t absolutely necessary.

    • Djtecha@lemm.ee · 1 year ago

      Built a centralized logging system to handle logging like this. Fun project, but very much the result of bad logging hygiene.
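      The core of that kind of setup can be tiny. A minimal sketch in Python, assuming a central syslog collector at logs.example.com:514 (both hypothetical):

          import logging
          from logging.handlers import SysLogHandler

          # Ship records to the central collector instead of local files.
          handler = SysLogHandler(address=("logs.example.com", 514))
          handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))

          logger = logging.getLogger("myapp")
          logger.setLevel(logging.INFO)  # keep DEBUG chatter out of the pipeline
          logger.addHandler(handler)

          logger.info("hello from the edge")

      The real hygiene win is the central choke point: one place to rotate, cap, and alert.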

  • thisbenzingring@lemmy.sdf.org · 1 year ago

    Once I had a customer report that her computer was giving her out-of-disk-space errors. This was weird because we redirect their My Documents and Desktop folders to network file shares via script. Like, wtf could be using up the disk? While walking to their system I figured the drive was going bad. Nope.

    Just a 250+GB log file from a chat program that they used. Like OMG, that was amazing.
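    For hunting that kind of mystery disk hog, something like this does the job (a sketch in Python; the root path is just an example):

        import os

        def largest_files(root, top=10):
            """Walk a tree and return the biggest files by size."""
            sizes = []
            for dirpath, _dirs, filenames in os.walk(root, onerror=lambda e: None):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        sizes.append((os.path.getsize(path), path))
                    except OSError:
                        pass  # permission errors, files deleted mid-walk, etc.
            return sorted(sizes, reverse=True)[:top]

        for size, path in largest_files("C:\\"):
            print(f"{size / 2**30:8.2f} GiB  {path}")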

  • I Cast Fist@programming.dev · 1 year ago

    I’ve had that happen with database logs where I used to work, back in 2015–16.

    The reason was a very shitty system that, for some reason, threw around 140 completely identical delete queries per millisecond. When I say completely identical, I mean it. It’d end up something like this in the log:

    2015-10-22 13:01:42.226 = delete from table_whatever
          where id = 1
              and name = 'Bob'
              and other_identifier = '123';
    2015-10-22 13:01:42.226 = delete from table_whatever
          where id = 1
              and name = 'Bob'
              and other_identifier = '123';
    2015-10-22 13:01:42.226 = delete from table_whatever
          where id = 1
              and name = 'Bob'
              and other_identifier = '123';
    -- repeated over and over with the exact same fucking timestamp, then repeated again with slightly different parameters and different timestamp
    

    Of course, “no way it’s our system, it handles too much data, we can’t risk losing it, it’s your database that’s messy”. Yeah, sure, as if I’d set up triggers to repeat every fucking delete query. Fucking morons. Since they were “more important”, database logging was disabled.
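    If anyone ever needs to prove that kind of duplication to a vendor, a quick tally works. A sketch in Python, assuming the log format shown above (timestamp, “=”, statement, “;” terminator):

        from collections import Counter
        import re
        import sys

        counts = Counter()
        with open(sys.argv[1]) as log:
            for entry in log.read().split(";"):
                # Strip the "2015-10-22 13:01:42.226 = " prefix.
                stmt = re.sub(r"^\s*[\d-]+ [\d:.]+ = ", "", entry).strip()
                if stmt:
                    counts[" ".join(stmt.split())] += 1  # normalize whitespace

        for stmt, n in counts.most_common(5):
            print(n, stmt[:80])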

    • stealthnerd@lemmy.world · 1 year ago

      Having query logging enabled on a production database is bonkers. The duplicate deletes are too, but query logging is meant for troubleshooting only; it kills performance.
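      In practice that means flipping it on only for the duration of the hunt. A sketch assuming MySQL, where the general query log is a runtime variable (connection details are hypothetical, using the mysql-connector-python package):

          import mysql.connector

          conn = mysql.connector.connect(
              host="db.example.com", user="admin", password="change-me"
          )
          cur = conn.cursor()

          cur.execute("SET GLOBAL general_log = 'ON'")   # only while troubleshooting
          # ... reproduce the problem, capture the log ...
          cur.execute("SET GLOBAL general_log = 'OFF'")  # then straight back off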

    • mostlypixels@programming.dev · 1 year ago

      I saw PHP error logs fill a disk in a few minutes (thankfully on a shared dev server), thanks to an accidental endless loop that just flooded everything with a wall of notices…

      And, working with a CMS that allows third-party plugins that don’t bother to catch exceptions, aggressive web crawlers are not a good thing to encounter on a weekend… 1 exception × 400,000 product pages makes for a loooot of text.
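      The cheap defense against that kind of flood is a repeat suppressor in the logging layer. A sketch in Python’s stdlib logging (the same idea ports to most PHP logging libraries; the threshold is arbitrary):

          import logging

          class RepeatSuppressor(logging.Filter):
              """Drop a record once the same message has repeated too often."""

              def __init__(self, max_repeats=10):
                  super().__init__()
                  self.max_repeats = max_repeats
                  self.seen = {}

              def filter(self, record):
                  n = self.seen.get(record.msg, 0) + 1
                  self.seen[record.msg] = n
                  return n <= self.max_repeats  # False silently drops the record

          handler = logging.FileHandler("app.log")
          handler.addFilter(RepeatSuppressor())
          logging.getLogger().addHandler(handler)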