
Limit on number of open files in Windows XP?

Status
Not open for further replies.

smithpd

Programmer
Jul 8, 2002
I have written code that opens a series of files, without closing any of them (for a while), and stores the handles in an array of fstream* pointers. I've compiled and run the code using Microsoft Visual C++ .NET 2003 under Windows XP. The code is a console application that runs in a CMD window. The code works fine with an array of, say, 360 files. When I try to use an array of 720 files, it fails to open the 510th file in the series (the file is numbered 509).

I have verified that it is not the file itself by substituting a known good file (number 508). I have verified that it relates to the number of open files by closing each file after it is opened; then file 509 opens correctly. I have verified that the limit is a fixed count by opening 10 extra files before the loop that fails; the file number on failure drops by 10, to 499.

So, it seems to be either a Windows or a VC++ limit on the number of files that can be kept open at any one time. The number 510 / 509 is independent of how many files I have open elsewhere in Windows; it seems to depend only on how many files the process running in the CMD window has opened.

So, my question is, does anyone know the cause of this problem and how to fix it?

Relevant sections of the code are reproduced below.

Thanks in advance. :)

----------------------
The code
----------------------

//.... Allocate an array of pointers to fstream objects
rawfile = (fstream **) malloc(inpt.rotations * sizeof(fstream *));

for (c = 0; c < inpt.rotations; c++)
{
    //.... Determine the name of the file (char* s1)
    sprintf(s1, "%s%0*i.raw", inpt.output_prefix, inpt.num_digits, c);

    //.... Construct an fstream object and store its pointer in the array
    rawfile[c] = (fstream *) new fstream();
    if (rawfile[c] == NULL)
    {
        cout << "ERROR: could not allocate fstream object in sin_create,"
             << " rawfile[" << c << "]" << endl;
        return 0;
    }

    //.... Open the file for the fstream object
    rawfile[c]->open(s1, ios::in | ios::binary);

    if (rawfile[c]->fail())
    {
        cout << "ERROR: failed to open file in sin_create, " << c << " = "
             << s1 << endl;
        return 0;
    }
}

//.... Other code that operates on the set of open files


-------------------------

It is failing on the last ERROR check.
 
Well, the cause is probably FOPEN_MAX.

The fix is to find another approach to the problem. As far as I know, the number of files a process can have open at once is a hard limit imposed by the OS.

Plus you need to decide whether you're writing C or C++, because mixing malloc calls and new calls is a sure-fire way of generating confusion. For example, you don't need to cast the result of new if you're doing the right thing.

--
 
Salem,

Thanks. I looked at the link for FOPEN_MAX and then I looked at <stdio.h> where it is defined. FOPEN_MAX is #defined as 20. How does that explain my being able to open 509? In fact, I do not include <stdio.h>. I do include <cstdlib>, <iostream>, and <fstream>. Are you aware of corresponding limits in those headers?

As far as your second comment, I would be pleased if you would send me a PM concerning what is the "right thing" in C++ as a replacement of malloc and how to avoid the cast. I am not confused by the code, but I am always ready to learn. I don't want to cloud this thread with those issues.
 
> send me a PM
Tek-Tips does not support private messaging, and I do not run any AIM/Messenger type applications.

> rawfile = (fstream **) malloc(inpt.rotations * sizeof(fstream*));
This would be
Code:
fstream **rawfile;
rawfile = new fstream*[inpt.rotations];

> rawfile[c] = (fstream*) new fstream();
This should be just
Code:
rawfile[c] = new fstream();

The real problem is that:
1. malloc/free do not call constructors and destructors.
2. Mix-ups like p = malloc(10); delete p; are seriously broken, so mixing the two methods in one code base can easily lead to this kind of problem.

> How does that explain my being able to open 509?
No idea. But try calling GetLastError when printing your error message.

But if you count stdin, stdout, and stderr, that's a total of 512 (one of those nice round powers of 2 that a lot of programs use as a resource limit).

--
 
Thanks again, Salem. Now the proper coding is crystal clear to me.

To all: Danny Kalev of the DevX forum has answered the question of why the limit is 509. The real limit is 512; the other three slots are taken by stdin, stdout, and stderr. You can find the DevX thread here:


This closes the case as far as I am concerned. It looks like I will have to rewrite the code to buffer the data through fewer open files than I would like. The reason I wanted to open so many files in the first place is explained in the link above.
 