
FTP servers extremely slow

  • mooneya9
  • Mar 1, 2024
  • 2 min read

Updated: Jun 12

A client operating a Linux-based environment that hosted both FTP and SFTP services began experiencing significant and steadily growing latency. Over time the situation worsened to the point that the server was regularly hitting its maximum allowed number of concurrent connections, raising concerns about stability and performance.


Initial analysis focused on understanding how the FTP service was operating. The environment used ProFTPD, a common FTP server that forks a new child process for each incoming connection. A review of the process list revealed that hundreds of these child processes were lingering far longer than expected. On a test system, such processes typically completed in milliseconds, indicating abnormal behaviour in production.
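
As an illustration, a quick check along the lines of the sketch below can surface those long-lived children. It is a hypothetical Python helper, not part of the original investigation: it assumes the procps ps utility is available and that the daemon's processes are named proftpd.

    """Sketch: count proftpd child processes older than a threshold."""
    import subprocess

    NAME = "proftpd"   # assumed process name
    THRESHOLD = 60     # seconds; far longer than a listing should need

    # ps -eo pid,etimes,comm prints PID, elapsed seconds and command name
    out = subprocess.run(
        ["ps", "-eo", "pid,etimes,comm"],
        capture_output=True, text=True, check=True,
    ).stdout

    stale = []
    for line in out.splitlines()[1:]:        # skip the header row
        pid, etimes, comm = line.split(None, 2)
        if comm == NAME and int(etimes) > THRESHOLD:
            stale.append((int(pid), int(etimes)))

    print(f"{len(stale)} {NAME} processes older than {THRESHOLD}s")
    for pid, age in sorted(stale, key=lambda e: -e[1])[:10]:
        print(f"  pid={pid} age={age}s")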


To identify the cause of the delay, attention turned to the FTP command set itself. It was observed that the latency was consistently tied to the execution of the LIST command, which retrieves directory listings. Running a manual directory listing on the server confirmed the issue: a single directory contained hundreds of thousands of files.
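
A directory of that size can be confirmed without waiting for a full sorted listing. The snippet below is a minimal Python sketch (the path is purely illustrative); os.scandir streams entries lazily, so the count does not require holding every filename in memory.

    import os

    path = "/srv/ftp/incoming"   # illustrative path, not the client's actual directory

    # Stream the entries rather than building one huge list in memory.
    count = 0
    with os.scandir(path) as entries:
        for _ in entries:
            count += 1

    print(f"{path}: {count} entries")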


This volume of files introduced substantial I/O overhead during listing operations, leading to delays and blocking behaviour across multiple concurrent sessions. Since each FTP connection initiated its own listing process, the result was high CPU load and process exhaustion.


As an immediate corrective action, files older than 30 days were archived and removed from the primary directory. This significantly reduced the number of entries being scanned during each LIST operation, resulting in an immediate drop in latency and improvement in connection stability.
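
That kind of clean-up can be scripted in a few lines. The sketch below is illustrative rather than the script actually used: the paths are hypothetical, "older than 30 days" is judged by modification time, and files are moved into an archive directory rather than deleted outright.

    import os
    import shutil
    import time

    SRC = "/srv/ftp/incoming"      # hypothetical source directory
    ARCHIVE = "/srv/ftp/archive"   # hypothetical archive location
    CUTOFF = time.time() - 30 * 24 * 3600   # 30 days ago

    os.makedirs(ARCHIVE, exist_ok=True)

    moved = 0
    with os.scandir(SRC) as entries:
        for entry in entries:
            # Skip subdirectories and symlinks; archive only stale regular files.
            if entry.is_file(follow_symlinks=False) and entry.stat().st_mtime < CUTOFF:
                shutil.move(entry.path, os.path.join(ARCHIVE, entry.name))
                moved += 1

    print(f"archived {moved} files older than 30 days")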


For a long-term solution, a revised directory structure was implemented. Files were segmented into subdirectories based on logical groupings, ensuring that no single directory would accumulate excessive volume. This structural change preserved the system’s ability to handle large file counts while maintaining consistent performance.
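
One common way to implement that kind of segmentation, sketched below under the assumption that files can sensibly be bucketed by modification date, is to fan a flat directory out into year/month subdirectories so that no single directory grows without bound.

    import os
    import shutil
    from datetime import datetime, timezone

    SRC = "/srv/ftp/incoming"   # hypothetical flat directory to split up

    # Snapshot the regular files first so the directory is not being
    # mutated while it is still being iterated over.
    files = [e for e in os.scandir(SRC) if e.is_file(follow_symlinks=False)]

    for entry in files:
        # Bucket by modification date; any stable grouping key works here.
        mtime = datetime.fromtimestamp(entry.stat().st_mtime, tz=timezone.utc)
        bucket = os.path.join(SRC, mtime.strftime("%Y"), mtime.strftime("%m"))
        os.makedirs(bucket, exist_ok=True)
        shutil.move(entry.path, os.path.join(bucket, entry.name))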


The case highlighted the importance of scalable directory design when managing high-throughput file services. Without adequate segmentation, even well-configured services can experience degradation as usage grows. By addressing both the short-term symptoms and the underlying design issue, the system was restored to reliable operation and made resilient to future growth.

 
 
